WorldWideScience

Sample records for instrumental variables methods

  1. Falsification Testing of Instrumental Variables Methods for Comparative Effectiveness Research.

    Science.gov (United States)

    Pizer, Steven D

    2016-04-01

    This article demonstrates how falsification tests can be used to evaluate instrumental variables methods applicable to a wide variety of comparative effectiveness research questions. It provides a brief conceptual review of instrumental variables and falsification testing principles and techniques, accompanied by an empirical application; sample Stata code related to the application is provided in the Appendix. The application examines the comparative long-term risks of sulfonylureas and thiazolidinediones for management of type 2 diabetes, with outcomes including mortality and hospitalization for an ambulatory care-sensitive condition; prescribing pattern variations are used as instrumental variables. Falsification testing is an easily computed and powerful way to evaluate the validity of the key assumption underlying instrumental variables analysis. If falsification tests are used, instrumental variables techniques can help answer a multitude of important clinical questions. © Health Research and Educational Trust.
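
    A common falsification exercise of the kind described is to check whether the proposed instrument predicts an outcome it could not plausibly affect, such as an outcome measured before treatment. The following is a minimal sketch on simulated data, not the paper's Stata code; all variable names and values are illustrative.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    z = rng.binomial(1, 0.5, n)            # instrument, e.g. prescribing preference
    u = rng.normal(size=n)                 # unmeasured confounder
    d = (0.8 * z + u + rng.normal(size=n) > 0).astype(float)  # treatment (not used below)
    y_pre = u + rng.normal(size=n)         # outcome measured BEFORE treatment

    # Falsification test: a valid instrument should NOT predict the
    # pre-treatment outcome; a significant coefficient flags a violation.
    res = sm.OLS(y_pre, sm.add_constant(z)).fit()
    print(res.summary().tables[1])
    ```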

  2. Instrumental variable methods in comparative safety and effectiveness research.

    Science.gov (United States)

    Brookhart, M Alan; Rassen, Jeremy A; Schneeweiss, Sebastian

    2010-06-01

    Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective of the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be useful in studies of intended effects where uncontrolled confounding may be substantial.
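
    As background for readers new to IV estimation: the workhorse two-stage least squares (2SLS) estimator replaces the confounded exposure with its instrument-predicted value. A minimal sketch on simulated data (names and values are illustrative; the naive second stage shown here gives correct point estimates but not correct standard errors, so real analyses should use a dedicated IV routine):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    z = rng.normal(size=n)                      # instrument
    u = rng.normal(size=n)                      # unmeasured confounder
    x = 0.6 * z + u + rng.normal(size=n)        # exposure (confounded by u)
    y = 1.5 * x + 2.0 * u + rng.normal(size=n)  # outcome; true causal effect = 1.5

    def ols(y, X):
        """Return [intercept, slopes...] from least squares."""
        X1 = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X1, y, rcond=None)[0]

    # Naive OLS is biased because u affects both x and y.
    print("OLS slope :", ols(y, x)[1])

    # 2SLS: stage 1 regresses x on z; stage 2 regresses y on the fitted x.
    a, b = ols(x, z)
    x_hat = a + b * z
    print("2SLS slope:", ols(y, x_hat)[1])      # approx. 1.5
    ```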

  2. Instrumental variable methods in comparative safety and effectiveness research

    Science.gov (United States)

    Brookhart, M. Alan; Rassen, Jeremy A.; Schneeweiss, Sebastian

    2010-01-01

    Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective of the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be useful in studies of intended effects where uncontrolled confounding may be substantial. PMID:20354968

  4. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    Burgess, Stephen; Thompson, Simon G; Thompson, Grahame

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context of ...

  5. Assessing Mucoadhesion in Polymer Gels: The Effect of Method Type and Instrument Variables

    Directory of Open Access Journals (Sweden)

    Jéssica Bassi da Silva

    2018-03-01

    The process of mucoadhesion has been studied using a wide variety of methods, which are influenced by instrumental variables and experiment design, making comparison between the results of different studies difficult. The aim of this work was to standardize the conditions of the detachment test and the rheological methods of mucoadhesion assessment for semisolids, and to introduce a texture profile analysis (TPA) method. A factorial design was developed to suggest standard conditions for performing the detachment force method. To evaluate the method, binary polymeric systems were prepared containing poloxamer 407 and Carbopol 971P®, Carbopol 974P®, or Noveon® Polycarbophil. The mucoadhesion of the systems was evaluated, and the reproducibility of these measurements investigated. The detachment force method was demonstrated to be reproducible, and gave different adhesion values depending on whether a mucin disk or ex vivo oral mucosa was used. The factorial design demonstrated that all evaluated parameters had an effect on measurements of mucoadhesive force, but the same was not observed for the work of adhesion. It was suggested that the work of adhesion is a more appropriate metric for evaluating mucoadhesion. Oscillatory rheology was more capable of investigating adhesive interactions than flow rheology. The TPA method was demonstrated to be reproducible and can evaluate the adhesiveness interaction parameter. This investigation demonstrates the need for standardized methods to evaluate mucoadhesion and makes suggestions for a standard study design.

  6. Instrumental variable analysis

    NARCIS (Netherlands)

    Stel, Vianda S.; Dekker, Friedo W.; Zoccali, Carmine; Jager, Kitty J.

    2013-01-01

    The main advantage of the randomized controlled trial (RCT) is the random assignment of treatment that prevents selection by prognosis. Nevertheless, only a few RCTs can be performed given their high cost and the difficulties in conducting such studies. Therefore, several analytical methods for ...

  7. A Systematic Review of Statistical Methods Used to Test for Reliability of Medical Instruments Measuring Continuous Variables

    Directory of Open Access Journals (Sweden)

    Rafdzah Zaki

    2013-06-01

    Objective(s): Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify statistical methods used to measure the reliability of equipment measuring continuous variables. The study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched for reliability studies published between 2007 and 2009. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 met the inclusion criteria. Results: The intra-class correlation coefficient (ICC) was the most popular method, used in 25 (60%) studies, followed by comparison of means (8, or 19%). Of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: This study finds that the intra-class correlation coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue, and to be able to correctly perform analysis in reliability studies.
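
    To illustrate the review's most common method: an ICC can be computed from a two-way ANOVA decomposition of a subjects-by-raters score matrix. A minimal sketch of one common variant, ICC(3,1); the reviewed studies used several ICC types, which this sketch does not distinguish, and the example scores are invented:

    ```python
    import numpy as np

    def icc_3_1(x):
        """ICC(3,1): two-way mixed, consistency, single rater.

        x is an (n subjects x k raters) matrix of scores.
        """
        n, k = x.shape
        grand = x.mean()
        ss_total = ((x - grand) ** 2).sum()
        ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
        ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
        ms_rows = ss_rows / (n - 1)
        ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

    # Example: 5 subjects each measured by 3 raters/instruments.
    scores = np.array([[9.0, 9.5, 9.2],
                       [6.1, 6.4, 6.0],
                       [7.8, 8.1, 7.9],
                       [5.0, 5.3, 5.1],
                       [8.6, 8.9, 8.8]])
    print(round(icc_3_1(scores), 3))
    ```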

  8. Semiparametric methods for estimation of a nonlinear exposure‐outcome relationship using instrumental variables with application to Mendelian randomization

    Science.gov (United States)

    Staley, James R.

    2017-01-01

    Mendelian randomization, the use of genetic variants as instrumental variables (IVs), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of the population. The fractional polynomial method performs meta-regression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of the relationship of body mass index with systolic blood pressure and diastolic blood pressure. PMID:28317167

  9. A review of instrumental variable estimators for Mendelian randomization.

    Science.gov (United States)

    Burgess, Stephen; Small, Dylan S; Thompson, Simon G

    2017-10-01

    Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.
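
    The simplest of the estimators listed, the ratio (Wald) method with a single instrument, divides the instrument-outcome association by the instrument-exposure association; a first-order delta-method standard error divides the numerator's standard error by the absolute value of the denominator. A minimal sketch, assuming summary regression coefficients are available; the numbers are invented:

    ```python
    def wald_ratio(beta_zy, se_zy, beta_zx):
        """Ratio (Wald) IV estimate with a first-order delta-method SE.

        beta_zy: instrument-outcome regression coefficient
        se_zy:   its standard error
        beta_zx: instrument-exposure regression coefficient
        """
        estimate = beta_zy / beta_zx
        se = se_zy / abs(beta_zx)   # ignores uncertainty in beta_zx (first order)
        return estimate, se

    est, se = wald_ratio(beta_zy=0.12, se_zy=0.03, beta_zx=0.30)
    print(f"estimate {est:.2f}, 95% CI {est - 1.96 * se:.2f} to {est + 1.96 * se:.2f}")
    ```

    Higher-order versions also propagate the uncertainty in beta_zx, which matters when the instrument is weak.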

  10. Instrumental Variables in the Long Run

    DEFF Research Database (Denmark)

    Casey, Gregory; Klemp, Marc Patrick Brag

    2017-01-01

    In the study of long-run economic growth, it is common to use historical or geographical variables as instruments for contemporary endogenous regressors. We study the interpretation of these conventional instrumental variable (IV) regressions in a general, yet simple, framework. Our aim is to estimate the long-run causal effect of changes in the endogenous explanatory variable. We find that conventional IV regressions generally cannot recover this parameter of interest. To estimate this parameter, therefore, we develop an augmented IV estimator that combines the conventional regression ... quantitative implications for the field of long-run economic growth. We also use our framework to examine related empirical techniques. We find that two prominent regression methodologies - using gravity-based instruments for trade and including ancestry-adjusted variables in linear regression models - have ...

  11. Comparison of variance estimators for meta-analysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two ...

  12. On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: an application of flexible sampling methods using neural networks

    NARCIS (Netherlands)

    Hoogerheide, L.F.; Kaashoek, J.F.; van Dijk, H.K.

    2007-01-01

    Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior ...

  13. On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: an application of flexible sampling methods using neural networks

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)

    2005-01-01

    Likelihoods and posteriors of instrumental variable regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating such contours ...

  14. Power calculator for instrumental variable analysis in pharmacoepidemiology.

    Science.gov (United States)

    Walker, Venexia M; Davies, Neil M; Windmeijer, Frank; Burgess, Stephen; Martin, Richard M

    2017-10-01

    Instrumental variable analysis, for example with physicians' prescribing preferences as an instrument for medications issued in primary care, is an increasingly popular method in the field of pharmacoepidemiology. Existing power calculators for studies using instrumental variable analysis, such as Mendelian randomization power calculators, do not allow for the structure of research questions in this field. This is because the analysis in pharmacoepidemiology will typically have stronger instruments and detect larger causal effects than in other fields. Consequently, there is a need for dedicated power calculators for pharmacoepidemiological research. The formula for calculating the power of a study using instrumental variable analysis in the context of pharmacoepidemiology is derived before being validated by a simulation study. The formula is applicable for studies using a single binary instrument to analyse the causal effect of a binary exposure on a continuous outcome. An online calculator, as well as packages in both R and Stata, are provided for the implementation of the formula by others. The statistical power of instrumental variable analysis in pharmacoepidemiological studies to detect a clinically meaningful treatment effect is an important consideration. Research questions in this field have distinct structures that must be accounted for when calculating power. The formula presented differs from existing instrumental variable power formulae due to its parametrization, which is designed specifically for ease of use by pharmacoepidemiologists. © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association
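
    The paper's closed-form power formula is not reproduced in this abstract, so it is not shown here. As an illustration of what such a calculator does, the following is a hedged simulation-based sketch for the stated setting (single binary instrument, binary exposure, continuous outcome); all parameter values are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def iv_power(n, p_z=0.5, compliance=0.2, beta=0.5, sims=1000, crit=1.96):
        """Monte Carlo power of a Wald-ratio IV analysis at alpha = 0.05."""
        rejections = 0
        for _ in range(sims):
            z = rng.binomial(1, p_z, n)                       # binary instrument
            u = rng.normal(size=n)                            # unmeasured confounder
            d = rng.binomial(1, np.clip(0.3 + compliance * z + 0.1 * u, 0, 1))
            y = beta * d + u + rng.normal(size=n)             # continuous outcome
            bzy = np.cov(z, y)[0, 1] / np.var(z)              # instrument-outcome slope
            resid = y - y.mean() - bzy * (z - z.mean())
            se_zy = np.sqrt(np.var(resid) / (n * np.var(z)))
            # To first order, the Wald-ratio z-statistic equals bzy / se(bzy).
            rejections += abs(bzy / se_zy) > crit
        return rejections / sims

    print(iv_power(n=5000))   # estimated power to detect beta = 0.5
    ```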

  15. On the Interpretation of Instrumental Variables in the Presence of Specification Errors

    Directory of Open Access Journals (Sweden)

    P.A.V.B. Swamy

    2015-01-01

    The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous equations models in econometrics, require data on a sufficient number of instrumental variables that are both exogenous and relevant. We argue that, in general, such instruments (weak or strong) cannot exist.

  16. Instrumental variable estimation of treatment effects for duration outcomes

    NARCIS (Netherlands)

    G.E. Bijwaard (Govert)

    2007-01-01

    In this article we propose and implement an instrumental variable estimation procedure to obtain treatment effects on duration outcomes. The method can handle the typical complications that arise with duration data of time-varying treatment and censoring. The treatment effect we ...

  17. Instrumental variables I: instrumental variables exploit natural variation in nonexperimental data to estimate causal relationships.

    Science.gov (United States)

    Rassen, Jeremy A; Brookhart, M Alan; Glynn, Robert J; Mittleman, Murray A; Schneeweiss, Sebastian

    2009-12-01

    The gold standard of study design for treatment evaluation is widely acknowledged to be the randomized controlled trial (RCT). Trials allow for the estimation of causal effect by randomly assigning participants either to an intervention or comparison group; through the assumption of "exchangeability" between groups, comparing the outcomes will yield an estimate of causal effect. In the many cases where RCTs are impractical or unethical, instrumental variable (IV) analysis offers a nonexperimental alternative based on many of the same principles. IV analysis relies on finding a naturally varying phenomenon, related to treatment but not to outcome except through the effect of treatment itself, and then using this phenomenon as a proxy for the confounded treatment variable. This article demonstrates how IV analysis arises from an analogous but potentially impossible RCT design, and outlines the assumptions necessary for valid estimation. It gives examples of instruments used in clinical epidemiology and concludes with an outline on estimation of effects.

  18. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.
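
    The Anderson-Rubin test that this approach extends can be computed without first estimating the causal effect: to test the hypothesis that the effect equals beta0, regress Y - beta0*D on the instrument and test whether the instrument coefficient is zero. A minimal sketch on simulated data (the paper's sensitivity-parameter extension is not shown):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 2000
    z = rng.binomial(1, 0.5, n)              # instrument
    u = rng.normal(size=n)                   # unmeasured confounder
    d = 0.5 * z + u + rng.normal(size=n)     # treatment
    y = 1.0 * d + u + rng.normal(size=n)     # outcome; true effect = 1.0

    def anderson_rubin_pvalue(beta0, y, d, z):
        """AR test of H0: effect = beta0; valid even with weak instruments."""
        res = sm.OLS(y - beta0 * d, sm.add_constant(z)).fit()
        return res.pvalues[1]

    print(anderson_rubin_pvalue(1.0, y, d, z))   # typically > 0.05
    print(anderson_rubin_pvalue(0.0, y, d, z))   # typically < 0.05
    ```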

  19. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    Science.gov (United States)

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.

  20. Econometrics in outcomes research: the use of instrumental variables.

    Science.gov (United States)

    Newhouse, J P; McClellan, M

    1998-01-01

    We describe an econometric technique, instrumental variables, that can be useful in estimating the effectiveness of clinical treatments in situations when a controlled trial has not or cannot be done. This technique relies upon the existence of one or more variables that induce substantial variation in the treatment variable but have no direct effect on the outcome variable of interest. We illustrate the use of the technique with an application to aggressive treatment of acute myocardial infarction in the elderly.

  1. Instrumental variable estimation in a survival context

    DEFF Research Database (Denmark)

    Tchetgen Tchetgen, Eric J; Walter, Stefan; Vansteelandt, Stijn

    2015-01-01

    The IV approach is very well developed in the context of linear regression and also for certain generalized linear models with a nonlinear link function. However, IV methods are not as well developed for regression analysis with a censored survival outcome. In this article, we develop the IV approach for regression analysis in a survival context, primarily under an additive hazards model, for which we describe 2 simple methods for estimating causal effects. The first method is a straightforward 2-stage regression approach analogous to 2-stage least squares commonly used for IV analysis in linear regression. In this approach, the fitted value from a first-stage regression of the exposure on the IV is entered in place of the exposure in the second-stage hazard model to recover a valid estimate of the treatment effect of interest. The second method is a so-called control function approach, which entails adding ...
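
    A minimal sketch of the first (two-stage plug-in) method on simulated data: stage one regresses the exposure on the IV, and stage two fits a hazards model with the fitted value in place of the exposure. Aalen's additive hazards model from the lifelines package is used here as a stand-in for the paper's additive hazards model; all names and values are illustrative, and the naive second-stage standard errors ignore first-stage uncertainty:

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import AalenAdditiveFitter

    rng = np.random.default_rng(4)
    n = 3000
    z = rng.binomial(1, 0.5, n)                 # instrument
    u = rng.normal(size=n)                      # unmeasured confounder
    x = 0.7 * z + u + rng.normal(size=n)        # exposure

    # Additive hazard: lambda(t) = 0.1 + 0.05*x + 0.05*u (clipped positive).
    haz = np.clip(0.1 + 0.05 * x + 0.05 * u, 0.01, None)
    t_event = rng.exponential(1 / haz)
    t_cens = rng.exponential(10, n)             # independent censoring
    time = np.minimum(t_event, t_cens)
    event = (t_event <= t_cens).astype(int)

    # Stage 1: regress exposure on instrument; Stage 2: plug fitted value
    # into the additive hazards model in place of the exposure.
    slope, intercept = np.polyfit(z, x, 1)
    df = pd.DataFrame({"x_hat": intercept + slope * z, "T": time, "E": event})
    aaf = AalenAdditiveFitter().fit(df, duration_col="T", event_col="E")
    print(aaf.cumulative_hazards_.tail(1))      # cumulative effect of x_hat
    ```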

  2. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    Science.gov (United States)

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.

  3. Instrumental methods of analysis, 7th edition

    International Nuclear Information System (INIS)

    Willard, H.H.; Merritt, L.L. Jr.; Dean, J.A.; Settle, F.A. Jr.

    1988-01-01

    The authors have prepared an organized and generally polished product. The book is fashioned to be used as a textbook for an undergraduate instrumental analysis course, a supporting textbook for graduate-level courses, and a general reference work on analytical instrumentation and techniques for professional chemists. Four major areas are emphasized: data collection and processing, spectroscopic instrumentation and methods, liquid and gas chromatographic methods, and electrochemical methods. Analytical instrumentation and methods have been updated, and a thorough citation of pertinent recent literature is included

  4. Instrument Variables for Reducing Noise in Parallel MRI Reconstruction

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2017-01-01

    Generalized autocalibrating partially parallel acquisition (GRAPPA) has been a widely used parallel MRI technique. However, noise deteriorates the reconstructed image when the reduction factor increases, or even at low reduction factors for some noisy datasets. Noise originating in the scanner propagates noise-related errors through the fitting and interpolation procedures of GRAPPA, distorting the quality of the final reconstructed image. The basic idea we propose to improve GRAPPA is to remove noise from a system identification perspective. In this paper, we first analyze the GRAPPA noise problem from a noisy input-output system perspective; then, a new framework based on the errors-in-variables (EIV) model is developed for analyzing the noise generation mechanism in GRAPPA and designing a concrete method, instrumental variables (IV) GRAPPA, to remove noise. The proposed EIV framework opens the possibility that noiseless GRAPPA reconstruction could be achieved by existing methods that solve the EIV problem, other than the IV method. Experimental results show that the proposed reconstruction algorithm removes noise better than conventional GRAPPA, as validated with both phantom and in vivo brain data.

  5. Pixe method as microanalytical instrument

    International Nuclear Information System (INIS)

    Tabacniks, M.H.

    1986-02-01

    The PIXE (particle-induced X-ray emission) method is evaluated as an analytical technique, covering its evolution, theoretical foundations, detection limits, and the optimization of operating conditions. Applications of the method to air pollution control and aerosol studies in regions such as the Antarctic and the Amazon are analysed. (M.C.K.)

  6. The productivity of mental health care: an instrumental variable approach.

    Science.gov (United States)

    Lu, Mingshan

    1999-06-01

    BACKGROUND: Like many other medical technologies and treatments, there is a lack of reliable evidence on the treatment effectiveness of mental health care. Increasingly, data from non-experimental settings are being used to study the effect of treatment. However, as in a number of studies using non-experimental data, a simple regression of outcome on treatment shows a puzzling negative and significant impact of mental health care on the improvement of mental health status, even after including a large number of potential control variables. The central problem in interpreting evidence from real-world or non-experimental settings is, therefore, the potential "selection bias" problem in observational data sets. In other words, the choice/quantity of mental health care may be correlated with other variables, particularly unobserved variables, that influence outcome, and this may lead to a bias in the estimate of the effect of care in conventional models. AIMS OF THE STUDY: This paper addresses the issue of estimating treatment effects using an observational data set. The information in a mental health data set obtained from two waves of data in Puerto Rico is explored. The results using conventional models - in which the potential selection bias is not controlled - are compared with those from instrumental variable (IV) models, which were proposed in this study to correct the contaminated estimation from conventional models. METHODS: Treatment effectiveness is estimated in a production function framework. Effectiveness is measured as the improvement in mental health status. To control for the potential selection bias problem, IV approaches are employed. The essence of the IV method is to use one or more instruments, which are observable factors that influence treatment but do not directly affect patient outcomes, to isolate the effect of treatment variation that is independent of unobserved patient characteristics. The data used in this study are the first (1992 ...

  7. Causal null hypotheses of sustained treatment strategies: What can be tested with an instrumental variable?

    Science.gov (United States)

    Swanson, Sonja A; Labrecque, Jeremy; Hernán, Miguel A

    2018-05-02

    Sometimes instrumental variable methods are used to test whether a causal effect is null rather than to estimate the magnitude of a causal effect. However, when instrumental variable methods are applied to time-varying exposures, as in many Mendelian randomization studies, it is unclear what causal null hypothesis is tested. Here, we consider different versions of causal null hypotheses for time-varying exposures, show that the instrumental variable conditions alone are insufficient to test some of them, and describe additional assumptions that can be made to test a wider range of causal null hypotheses, including both sharp and average causal null hypotheses. Implications for interpretation and reporting of instrumental variable results are discussed.

  8. Instrumental variables estimation under a structural Cox model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Nørbo Sørensen, Ditte; Vansteelandt, Stijn

    2017-01-01

    Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heuristic ...

  9. Extraction Methods, Variability Encountered in

    NARCIS (Netherlands)

    Bodelier, P.L.E.; Nelson, K.E.

    2014-01-01

    Synonyms: bias in DNA extraction methods; variation in DNA extraction methods. Definition: the variability in extraction methods is defined as differences in the quality and quantity of DNA observed using various extraction protocols, leading to differences in the outcome of microbial community composition ...

  10. Instrumental variables estimates of peer effects in social networks.

    Science.gov (United States)

    An, Weihua

    2015-03-01

    Estimating peer effects with observational data is very difficult because of contextual confounding, peer selection, simultaneity bias, and measurement error, etc. In this paper, I show that instrumental variables (IVs) can help to address these problems in order to provide causal estimates of peer effects. Based on data collected from over 4000 students in six middle schools in China, I use IV methods to estimate peer effects on smoking. My design-based IV approach differs from previous ones in that it helps to construct potentially strong IVs and to directly test possible violations of exogeneity of the IVs. I show that measurement error in smoking can lead to both underestimated and imprecise estimates of peer effects. Based on a refined measure of smoking, I find consistent evidence for peer effects on smoking. If a student's best friend smoked within the past 30 days, the student was about one fifth (as indicated by the OLS estimate) or 40 percentage points (as indicated by the IV estimate) more likely to smoke in the same time period. The findings are robust to a variety of robustness checks. I also show that sharing cigarettes may be a mechanism for peer effects on smoking. A 10% increase in the number of cigarettes smoked by a student's best friend is associated with about a 4% increase in the number of cigarettes smoked by the student in the same time period. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Computational and instrumental methods in EPR

    CERN Document Server

    Bender, Christopher J

    2006-01-01

    Computational and Instrumental Methods in EPR (Prof. Christopher J. Bender, Fordham University; Prof. Lawrence J. Berliner, University of Denver). Electron magnetic resonance has been greatly facilitated by the introduction of advances in instrumentation and better computational tools, such as the increasingly widespread use of the density matrix formalism. This volume is devoted to both instrumentation and computation aspects of EPR, while addressing applications such as spin relaxation time measurements, the measurement of hyperfine interaction parameters, and the recovery of Mn(II) spin Hamiltonian parameters via spectral simulation. Key features: Microwave Amplitude Modulation Technique to Measure Spin-Lattice (T1) and Spin-Spin (T2) Relaxation Times; Improvement in the Measurement of Spin-Lattice Relaxation Time in Electron Paramagnetic Resonance; Quantitative Measurement of Magnetic Hyperfine Parameters and the Physical Organic Chemistry of Supramolecular Systems; New Methods of Simulation of Mn(II) EPR Spectra: Single Cryst...

  12. Instrumented Impact Testing: Influence of Machine Variables and Specimen Position

    Energy Technology Data Exchange (ETDEWEB)

    Lucon, E.; McCowan, C. N.; Santoyo, R. A.

    2008-09-15

    An investigation has been conducted on the influence of impact machine variables and specimen positioning on characteristic forces and absorbed energies from instrumented Charpy tests. Brittle and ductile fracture behavior has been investigated by testing NIST reference samples of low, high and super-high energy levels. Test machine variables included tightness of foundation, anvil and striker bolts, and the position of the center of percussion with respect to the center of strike. For specimen positioning, we tested samples which had been moved away or sideways with respect to the anvils. In order to assess the influence of the various factors, we compared mean values in the reference (unaltered) and altered conditions; for machine variables, t-test analyses were also performed in order to evaluate the statistical significance of the observed differences. Our results indicate that the only circumstance which resulted in variations larger than 5 percent for both brittle and ductile specimens is when the sample is not in contact with the anvils. These findings should be taken into account in future revisions of instrumented Charpy test standards.

  13. Instrumented Impact Testing: Influence of Machine Variables and Specimen Position

    International Nuclear Information System (INIS)

    Lucon, E.; McCowan, C. N.; Santoyo, R. A.

    2008-01-01

    An investigation has been conducted on the influence of impact machine variables and specimen positioning on characteristic forces and absorbed energies from instrumented Charpy tests. Brittle and ductile fracture behavior has been investigated by testing NIST reference samples of low, high and super-high energy levels. Test machine variables included tightness of foundation, anvil and striker bolts, and the position of the center of percussion with respect to the center of strike. For specimen positioning, we tested samples which had been moved away or sideways with respect to the anvils. In order to assess the influence of the various factors, we compared mean values in the reference (unaltered) and altered conditions; for machine variables, t-test analyses were also performed in order to evaluate the statistical significance of the observed differences. Our results indicate that the only circumstance which resulted in variations larger than 5 percent for both brittle and ductile specimens is when the sample is not in contact with the anvils. These findings should be taken into account in future revisions of instrumented Charpy test standards.

  14. The effect of patient satisfaction with pharmacist consultation on medication adherence: an instrumental variable approach

    Directory of Open Access Journals (Sweden)

    Gu NY

    2008-12-01

    There are limited studies on quantifying the impact of patient satisfaction with pharmacist consultation on patient medication adherence. Objectives: The objective of this study is to evaluate the effect of patient satisfaction with pharmacist consultation services on medication adherence in a large managed care organization. Methods: We analyzed data from a patient satisfaction survey of 6,916 patients who had used pharmacist consultation services in Kaiser Permanente Southern California from 1993 to 1996. We compared treating patient satisfaction as exogenous, in a single-equation probit model, with a bivariate probit model where patient satisfaction was treated as endogenous. Different sets of instrumental variables were employed, including measures of patients' emotional well-being and patients' propensity to fill their prescriptions at a non-Kaiser Permanente (KP) pharmacy. The Smith-Blundell test was used to test whether patient satisfaction was endogenous. Over-identification tests were used to test the validity of the instrumental variables. The Staiger-Stock weak instrument test was used to evaluate the explanatory power of the instrumental variables. Results: All tests indicated that the instrumental variables method was valid and the instrumental variables used have significant explanatory power. The single-equation probit model indicated that the effect of patient satisfaction with pharmacist consultation was significant (p<0.010). However, the bivariate probit models revealed that the marginal effect of pharmacist consultation on medication adherence was significantly greater than in the single-equation probit: the effect increased from 7% to 30% (p<0.010) after controlling for endogeneity bias. Conclusion: After appropriate adjustment for endogeneity bias, patients satisfied with their pharmacy services are substantially more likely to adhere to their medication. The results have important policy implications given the increasing focus ...
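
    The Smith-Blundell test mentioned here can be illustrated with a control-function (residual inclusion) probit: include the first-stage residual in the probit and test its coefficient. A minimal sketch on simulated data; this illustrates the test idea only, not the paper's bivariate probit estimation, and all names are invented:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n = 4000
    iv = rng.normal(size=n)                       # instrument, e.g. well-being score
    u = rng.normal(size=n)                        # unmeasured factor
    satisf = 0.6 * iv + u + rng.normal(size=n)    # endogenous satisfaction measure
    adhere = ((0.5 * satisf + u + rng.normal(size=n)) > 0).astype(int)

    # Smith-Blundell style control function: include the first-stage
    # residual in the probit; its significance is a test of endogeneity.
    stage1 = sm.OLS(satisf, sm.add_constant(iv)).fit()
    X = sm.add_constant(np.column_stack([satisf, stage1.resid]))
    probit = sm.Probit(adhere, X).fit(disp=0)
    print(probit.params[1])      # satisfaction effect, endogeneity-adjusted
    print(probit.pvalues[2])     # exogeneity test (residual term)
    ```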

  15. Evaluating disease management programme effectiveness: an introduction to instrumental variables.

    Science.gov (United States)

    Linden, Ariel; Adams, John L

    2006-04-01

    This paper introduces the concept of instrumental variables (IVs) as a means of providing an unbiased estimate of treatment effects in evaluating disease management (DM) programme effectiveness. Model development is described using zip codes as the IV. Three diabetes DM outcomes were evaluated: annual diabetes costs, emergency department (ED) visits and hospital days. Both ordinary least squares (OLS) and IV estimates showed a significant treatment effect for diabetes costs (P = 0.011) but neither model produced a significant treatment effect for ED visits. However, the IV estimate showed a significant treatment effect for hospital days (P = 0.006) whereas the OLS model did not. These results illustrate the utility of IV estimation when the OLS model is sensitive to the confounding effect of hidden bias.

  16. On-line scheme for parameter estimation of nonlinear lithium ion battery equivalent circuit models using the simplified refined instrumental variable method for a modified Wiener continuous-time model

    International Nuclear Information System (INIS)

    Allafi, Walid; Uddin, Kotub; Zhang, Cheng; Mazuir Raja Ahsan Sha, Raja; Marco, James

    2017-01-01

    Highlights: • Off-line estimation approach for the continuous-time domain for a non-invertible function. • Model reformulated to multi-input-single-output; nonlinearity described by a sigmoid. • Method directly estimates parameters of the nonlinear ECM from the measured data. • Iterative on-line technique leads to smoother convergence. • The model is validated off-line and on-line using an NCA battery. -- Abstract: The accuracy of identifying the parameters of models describing lithium ion batteries (LIBs) in typical battery management system (BMS) applications is critical to the estimation of key states such as the state of charge (SoC) and state of health (SoH). In applications such as electric vehicles (EVs), where LIBs are subjected to highly demanding cycles of operation and varying environmental conditions leading to non-trivial interactions of ageing stress factors, this identification is more challenging. This paper proposes an algorithm that directly estimates the parameters of a nonlinear battery model from measured input and output data in the continuous time-domain. The simplified refined instrumental variable method is extended to estimate the parameters of a Wiener model where there is no requirement for the nonlinear function to be invertible. To account for nonlinear battery dynamics, in this paper, the typical linear equivalent circuit model (ECM) is enhanced by a block-oriented Wiener configuration where the nonlinear memoryless block following the typical ECM is defined to be a sigmoid static nonlinearity. The nonlinear Wiener model is reformulated in the form of a multi-input, single-output linear model. This linear form allows the parameters of the nonlinear model to be estimated using any linear estimator such as the well-established least squares (LS) algorithm. In this paper, the recursive least squares (RLS) method is adopted for online parameter estimation. The approach was validated on experimental data measured from an 18650-type Graphite ...
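
    The final step mentioned in the abstract, recursive least squares for online parameter estimation, can be sketched generically. This is a textbook RLS update with a forgetting factor on invented data, not the authors' simplified refined instrumental variable algorithm:

    ```python
    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.99):
        """One recursive least squares step with forgetting factor lam.

        theta: current parameter estimate; P: covariance-like matrix;
        phi: regressor vector; y: new measurement.
        """
        phi = phi.reshape(-1, 1)
        k = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
        theta = theta + (k * (y - phi.T @ theta)).ravel()
        P = (P - k @ phi.T @ P) / lam
        return theta, P

    theta = np.zeros(2)
    P = np.eye(2) * 1000.0                             # large initial uncertainty
    rng = np.random.default_rng(5)
    true = np.array([0.8, -0.3])
    for _ in range(500):
        phi = rng.normal(size=2)
        y = phi @ true + 0.05 * rng.normal()           # noisy measurement
        theta, P = rls_update(theta, P, phi, y)
    print(theta)    # close to [0.8, -0.3]
    ```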

  17. Instrumentation and quantitative methods of evaluation

    International Nuclear Information System (INIS)

    Beck, R.N.; Cooper, M.D.

    1991-01-01

    This report summarizes goals and accomplishments of the research program entitled Instrumentation and Quantitative Methods of Evaluation, during the period January 15, 1989 through July 15, 1991. This program is very closely integrated with the radiopharmaceutical program entitled Quantitative Studies in Radiopharmaceutical Science. Together, they constitute the PROGRAM OF NUCLEAR MEDICINE AND QUANTITATIVE IMAGING RESEARCH within The Franklin McLean Memorial Research Institute (FMI). The program addresses problems involving the basic science and technology that underlie the physical and conceptual tools of radiotracer methodology as they relate to the measurement of structural and functional parameters of physiologic importance in health and disease. The principal tool is quantitative radionuclide imaging. The objective of this program is to further the development and transfer of radiotracer methodology from basic theory to routine clinical practice. The focus of the research is on the development of new instruments and radiopharmaceuticals, and the evaluation of these through the phase of clinical feasibility. 234 refs., 11 figs., 2 tabs

  18. Important variables for parents' postnatal sense of security: evaluating a new Swedish instrument (the PPSS instrument).

    Science.gov (United States)

    Persson, Eva K; Dykes, Anna-Karin

    2009-08-01

    The study aimed to evaluate dimensions of both parents' postnatal sense of security during the first week after childbirth, and to determine associations between the PPSS instrument and different sociodemographic and situational background variables. It used an evaluative, cross-sectional design with 113 mothers and 99 fathers of children live born at term, from five hospitals in southern Sweden. Mothers and fathers had similar feelings concerning postnatal sense of security. Of the dimensions in the PPSS instrument, a sense of midwives'/nurses' empowering behaviour, a sense of one's own general well-being and a sense of the mother's well-being as experienced by the father were the most important dimensions for parents' experienced security. A sense of affinity within the family (for both parents) and a sense of manageable breast feeding (for mothers) were not significantly associated with their experienced security. A sense of participation during pregnancy and general anxiety were significantly associated background variables for postnatal sense of security for both parents. For the mothers, parity and a sense that the father was participating during pregnancy were also significantly associated. More focus on parents' participation during pregnancy, as well as on midwives'/nurses' empowering behaviour during the postnatal period, will be beneficial for both parents' postnatal sense of security.

  19. Turbidity threshold sampling: Methods and instrumentation

    Science.gov (United States)

    Rand Eads; Jack Lewis

    2001-01-01

    Traditional methods for determining the frequency of suspended sediment sample collection often rely on measurements, such as water discharge, that are not well correlated to sediment concentration. Stream power is generally not a good predictor of sediment concentration for rivers that transport the bulk of their load as fines, due to the highly variable routing of...

  20. Analytical chromatography. Methods, instrumentation and applications

    International Nuclear Information System (INIS)

    Yashin, Ya I; Yashin, A Ya

    2006-01-01

    The state of the art and the prospects in the development of the main methods of analytical chromatography, viz., gas, high-performance liquid and ion chromatographic techniques, are characterised. Achievements of the past 10-15 years in the theory and general methodology of chromatography, and also in the development of new sorbents, columns and chromatographic instruments, are outlined. The use of chromatography in environmental control, biology, medicine and pharmaceutics, and also for monitoring the quality of foodstuffs and products of the chemical, petrochemical and gas industries, etc., is considered.

  1. A selective review of the first 20 years of instrumental variables models in health-services research and medicine.

    Science.gov (United States)

    Cawley, John

    2015-01-01

    The method of instrumental variables (IV) is useful for estimating causal effects. Intuitively, it exploits exogenous variation in the treatment, sometimes called natural experiments or instruments. This study reviews the literature in health-services research and medical research that applies the method of instrumental variables, documents trends in its use, and offers examples of various types of instruments. A literature search of the PubMed and EconLit research databases for English-language journal articles published after 1990 yielded a total of 522 original research articles. Citation counts for each article were derived from the Web of Science. A selective review was conducted, with articles prioritized based on number of citations, validity and power of the instrument, and type of instrument. The average annual number of papers in health services research and medical research that apply the method of instrumental variables rose from 1.2 in 1991-1995 to 41.8 in 2006-2010. Commonly-used instruments (natural experiments) in health and medicine are relative distance to a medical care provider offering the treatment and the medical care provider's historic tendency to administer the treatment. Less common but still noteworthy instruments include randomization of treatment for reasons other than research, randomized encouragement to undertake the treatment, day of week of admission as an instrument for waiting time for surgery, and genes as an instrument for whether the respondent has a heritable condition. The use of the method of IV has increased dramatically in the past 20 years, and a wide range of instruments have been used. Applications of the method of IV have in several cases upended conventional wisdom that was based on correlations and led to important insights about health and healthcare. Future research should pursue new applications of existing instruments and search for new instruments that are powerful and valid.

  2. Method of decontaminating radioactive-contaminated instruments

    International Nuclear Information System (INIS)

    Urata, Megumu; Fujii, Masaaki; Kitaguchi, Hiroshi.

    1982-01-01

    Purpose: To enable safe processing of liquid wastes by recovering radioactive metal ions remaining in the electrolyte after the decontamination procedure, thereby decreasing its radioactivity. Method: In a decontamination tank containing an electrolyte of dilute hydrochloric acid and dilute sulfuric acid, a radioactively contaminated instrument connected to an anode and a stainless steel collector electrode connected to a cathode are provided. Upon applying an electric current, the portion of the base material to be decontaminated is polished electrolytically into metal ions, and these are deposited as metal on the collector electrode. After completion of the decontamination, an ultrasonic wave generator is operated to strip and remove the oxide films. Thereafter, the anode is replaced with a carbon electrode and current is supplied continuously, whereby the remaining metal ions are deposited and recovered as metal on the collector electrode. (Yoshino, Y.)

  3. Method of decontaminating radioactive-contaminated instruments

    Energy Technology Data Exchange (ETDEWEB)

    Urata, M; Fujii, M; Kitaguchi, H

    1982-03-29

    Purpose: To enable safe processing of liquid wastes by recovering radioactive metal ions remaining in the electrolyte after the decontamination procedure, thereby decreasing its radioactivity. Method: In a decontamination tank containing an electrolyte of dilute hydrochloric acid and dilute sulfuric acid, a radioactively contaminated instrument connected to an anode and a stainless steel collector electrode connected to a cathode are provided. Upon applying an electric current, the portion of the base material to be decontaminated is polished electrolytically into metal ions, and these are deposited as metal on the collector electrode. After completion of the decontamination, an ultrasonic wave generator is operated to strip and remove the oxide films. Thereafter, the anode is replaced with a carbon electrode and current is supplied continuously, whereby the remaining metal ions are deposited and recovered as metal on the collector electrode.

  4. Analytical techniques for instrument design - matrix methods

    International Nuclear Information System (INIS)

    Robinson, R.A.

    1997-01-01

    We take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ & 2 dummy variables)), integration of one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's Law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. We will argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We will also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample?
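
    The "integration of one or more variables" step mentioned above has a closed form for Gaussians: integrating block 2 out of exp(-x^T M x / 2) leaves a quadratic form with precision M11 - M12 M22^{-1} M21, the Schur complement of the precision matrix. A small numerical check of this identity, using a random positive-definite matrix rather than a real Cooper-Nathans matrix:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.normal(size=(6, 6))
    M = A @ A.T + 6 * np.eye(6)          # positive-definite "resolution" matrix

    keep, drop = np.arange(4), np.arange(4, 6)
    M11 = M[np.ix_(keep, keep)]
    M12 = M[np.ix_(keep, drop)]
    M22 = M[np.ix_(drop, drop)]

    # Integrating the last 2 variables out of exp(-0.5 x^T M x):
    M_marginal = M11 - M12 @ np.linalg.inv(M22) @ M12.T

    # Equivalent route: invert to covariance, take the kept block, re-invert.
    cov = np.linalg.inv(M)
    check = np.linalg.inv(cov[np.ix_(keep, keep)])
    print(np.allclose(M_marginal, check))   # True
    ```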

  5. Bias and Bias Correction in Multi-Site Instrumental Variables Analysis of Heterogeneous Mediator Effects

    Science.gov (United States)

    Reardon, Sean F.; Unlu, Faith; Zhu, Pei; Bloom, Howard

    2013-01-01

    We explore the use of instrumental variables (IV) analysis with a multi-site randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, an assumption known in the instrumental variables literature as the…

  6. 26 CFR 1.1275-5 - Variable rate debt instruments.

    Science.gov (United States)

    2010-04-01

    ... nonpublicly traded property. A debt instrument (other than a tax-exempt obligation) that would otherwise... variations in the cost of newly borrowed funds in the currency in which the debt instrument is denominated... on the yield of actively traded personal property (within the meaning of section 1092(d)(1)). (ii...

  7. Robust best linear estimation for regression analysis using surrogate and instrumental variables.

    Science.gov (United States)

    Wang, C Y

    2012-04-01

    We investigate methods for regression analysis when covariates are measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies the classical measurement error model, but it may not have repeated measurements. In addition to the surrogate variables that are available among the subjects in the calibration sample, we assume that there is an instrumental variable (IV) that is available for all study subjects. An IV is correlated with the unobserved true exposure variable and hence can be useful in the estimation of the regression coefficients. We propose a robust best linear estimator that uses all the available data, which is the most efficient among a class of consistent estimators. The proposed estimator is shown to be consistent and asymptotically normal under very weak distributional assumptions. For Poisson or linear regression, the proposed estimator is consistent even if the measurement error from the surrogate or IV is heteroscedastic. Finite-sample performance of the proposed estimator is examined and compared with other estimators via intensive simulation studies. The proposed method and other methods are applied to a bladder cancer case-control study.

  8. Instrumental neutron activation analysis - a routine method

    International Nuclear Information System (INIS)

    Bruin, M. de.

    1983-01-01

    This thesis describes the way in which instrumental neutron activation analysis (INAA) has been developed at IRI into an automated system for routine analysis. The basis of this work is 20 publications describing the development of INAA since 1968. (Auth.)

  9. LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions

    Directory of Open Access Journals (Sweden)

    Weihua An

    2016-07-01

    LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default LARF estimates the probability by a probit regression. It also provides semiparametric power series estimation of the probability and allows users to employ other external methods to estimate the probability. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.
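
    As an illustration of the two steps described, here is a minimal Python sketch of Abadie's kappa-weighting idea on simulated data; it is a simplified stand-in for the package, with a logit first step in place of its default probit, and all names are invented:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 5000
    x = rng.normal(size=n)                                    # covariate
    z = rng.binomial(1, 0.5, n)                               # binary inducement
    u = rng.normal(size=n)
    d = ((z + 0.5 * u + rng.normal(size=n)) > 0.8).astype(float)  # binary treatment
    y = 1.0 * d + x + u + rng.normal(size=n)

    # Step 1: estimate P(z = 1 | x); a logit stands in for LARF's probit.
    pz = sm.Logit(z, sm.add_constant(x)).fit(disp=0).predict()

    # Step 2: Abadie's kappa pseudo-weights, then weighted least squares of
    # y on (1, d, x). Weights can be negative, so solve the weighted normal
    # equations directly instead of using a sqrt-weight routine.
    kappa = 1 - d * (1 - z) / (1 - pz) - (1 - d) * z / pz
    X = np.column_stack([np.ones(n), d, x])
    beta = np.linalg.solve(X.T @ (kappa[:, None] * X), X.T @ (kappa * y))
    print(beta[1])   # approx. the complier treatment effect (1.0)
    ```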

  10. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models.

    Science.gov (United States)

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M

    2017-12-01

    The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent with a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate potential benefit from breast cancer screening on subsequent breast cancer mortality based on the HIP-study. © 2017, The International Biometric Society.

  11. Modern Instrumental Methods in Forensic Toxicology*

    Science.gov (United States)

    Smith, Michael L.; Vorce, Shawn P.; Holler, Justin M.; Shimomura, Eric; Magluilo, Joe; Jacobs, Aaron J.; Huestis, Marilyn A.

    2009-01-01

    This article reviews modern analytical instrumentation in forensic toxicology for identification and quantification of drugs and toxins in biological fluids and tissues. A brief description of the theory and inherent strengths and limitations of each methodology is included. The focus is on new technologies that address current analytical limitations. A goal of this review is to encourage innovations to improve our technological capabilities and to encourage use of these analytical techniques in forensic toxicology practice. PMID:17579968

  12. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care.

    Science.gov (United States)

    Kowalski, Amanda

    2016-01-02

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.

  13. Institution, Financial Sector, and Economic Growth: Use The Institutions As An Instrument Variable

    Directory of Open Access Journals (Sweden)

    Albertus Girik Allo

    2016-06-01

    Full Text Available Institutions have been found to play an indirect role in economic growth. This paper aims to evaluate whether the quality of institutions matters for economic growth. Applying institutions as an instrumental variable for Foreign Direct Investment (FDI), we find that the quality of institutions significantly influences economic growth. This study applies two data periods, namely 1985-2013 and 2000-2013, available online from the World Bank (WB). The first data set, 1985-2013, is used to estimate the role of the financial sector in economic growth and focuses on 67 countries. The second data set, 2000-2013, determines the role of institutions in the financial sector and economic growth by applying the 2SLS estimation method. We define the institutional variables as a set of indicators: Control of Corruption, Political Stability and Absence of Violence, and Voice and Accountability; these indicators show a declining impact of FDI on economic growth.
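
    The 2SLS step the record mentions is mechanically simple. A self-contained sketch follows; the variable names (institutional quality instrumenting FDI in a growth regression) are hypothetical stand-ins for the study's World Bank series, not its actual data:

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS: first regress the regressors on the instruments, then
    regress the outcome on the first-stage fitted values."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage
    return beta

rng = np.random.default_rng(1)
n = 500
u = rng.normal(size=n)                        # unobserved confounder
quality = rng.normal(size=n)                  # institutional-quality indicator
fdi = 0.8 * quality + u + rng.normal(size=n)  # endogenous regressor
growth = 0.5 * fdi + u + rng.normal(size=n)   # outcome

Z = np.column_stack([np.ones(n), quality])    # instrument (+ constant)
X = np.column_stack([np.ones(n), fdi])        # regressors (+ constant)
print(two_stage_least_squares(growth, X, Z))  # second entry close to 0.5
```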

  14. Optimal Inference for Instrumental Variables Regression with non-Gaussian Errors

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper is concerned with inference on the coefficient on the endogenous regressor in a linear instrumental variables model with a single endogenous regressor, nonrandom exogenous regressors and instruments, and i.i.d. errors whose distribution is unknown. It is shown that under mild smoothness...

  15. Instrument design optimization with computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Michael H. [Old Dominion Univ., Norfolk, VA (United States)

    2017-08-01

    Using Finite Element Analysis to approximate the solution of differential equations, two different instruments in experimental Hall C at the Thomas Jefferson National Accelerator Facility are analyzed. The time dependence of density fluctuations from the liquid hydrogen (LH2) target used in the Qweak experiment (2011-2012) is studied with Computational Fluid Dynamics (CFD), and the simulation results are compared to data from the experiment. The 2.5 kW liquid hydrogen target was the highest power LH2 target in the world and the first to be designed with CFD at Jefferson Lab. The first complete magnetic field simulation of the Super High Momentum Spectrometer (SHMS) is presented with a focus on primary electron beam deflection downstream of the target. The SHMS consists of a superconducting horizontal bending magnet (HB) and three superconducting quadrupole magnets. The HB allows particles scattered at an angle of 5.5 deg to the beam line to be steered into the quadrupole magnets which make up the optics of the spectrometer. Without mitigation, remnant fields from the SHMS may steer the unscattered beam outside of the acceptable envelope on the beam dump and limit beam operations at small scattering angles. A solution is proposed using optimal placement of a minimal amount of shielding iron around the beam line.

  16. Improved GLR method to instrument failure detection

    International Nuclear Information System (INIS)

    Jeong, Hak Yeoung; Chang, Soon Heung

    1985-01-01

    The generalized likelihood ratio (GLR) method performs statistical tests on the innovations sequence of a Kalman-Bucy filter state estimator for system failure detection and identification. However, the major drawback of the conventional GLR is that it must hypothesize a particular failure type in each case. In this paper, a method to overcome this drawback is proposed. The improved GLR method is applied to a PWR pressurizer and gives successful results in the detection and identification of any failure. Furthermore, some gain in the processing time per cycle of failure detection and identification is obtained. (Author)
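
    A simplified sketch of the underlying idea, under the illustrative assumption that the failure shows up as a step change in the mean of the Kalman filter innovations (the paper's contribution, avoiding a hypothesized failure type, is not captured by this toy version). All model parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar random-walk state observed by a single sensor; a step bias
# (the "failure") of size 2.0 is injected at t = 60.
T, q, r = 100, 0.01, 0.25
x = np.cumsum(rng.normal(scale=np.sqrt(q), size=T))   # true state
z = x + rng.normal(scale=np.sqrt(r), size=T)          # measurements
z[60:] += 2.0                                         # sensor failure

# Kalman filter, collecting innovations and their variances
x_hat, P = 0.0, 1.0
innov, s = np.zeros(T), np.zeros(T)
for t in range(T):
    P += q                           # predict (state transition = 1)
    s[t] = P + r                     # innovation variance
    innov[t] = z[t] - x_hat          # innovation
    K = P / s[t]                     # Kalman gain
    x_hat += K * innov[t]            # update
    P *= 1 - K

# GLR statistic for a step change in the innovation mean at onset k:
# (sum of normalized innovations from k onward)^2 / window length
e = innov / np.sqrt(s)
glr = np.array([e[k:].sum() ** 2 / (T - k) for k in range(T - 1)])
print("declared failure onset:", int(np.argmax(glr)))  # near the true onset, 60
```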

  17. Invited Commentary: Using Financial Credits as Instrumental Variables for Estimating the Causal Relationship Between Income and Health.

    Science.gov (United States)

    Pega, Frank

    2016-05-01

    Social epidemiologists are interested in determining the causal relationship between income and health. Natural experiments in which individuals or groups receive income randomly or quasi-randomly from financial credits (e.g., tax credits or cash transfers) are increasingly being analyzed using instrumental variable analysis. For example, in this issue of the Journal, Hamad and Rehkopf (Am J Epidemiol. 2016;183(9):775-784) used an in-work tax credit called the Earned Income Tax Credit as an instrument to estimate the association between income and child development. However, under certain conditions, the use of financial credits as instruments could violate 2 key instrumental variable analytic assumptions. First, some financial credits may directly influence health, for example, through increasing a psychological sense of welfare security. Second, financial credits and health may have several unmeasured common causes, such as politics, other social policies, and the motivation to maximize the credit. If epidemiologists pursue such instrumental variable analyses, using the amount of an unconditional, universal credit that an individual or group has received as the instrument may produce the most conceptually convincing and generalizable evidence. However, other natural income experiments (e.g., lottery winnings) and other methods that allow better adjustment for confounding might be more promising approaches for estimating the causal relationship between income and health. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Radioactive standards and calibration methods for contamination monitoring instruments

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Makoto [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-06-01

    Contamination monitoring in facilities handling unsealed radioactive materials is one of the most important procedures for radiation protection, together with radiation dose monitoring. For proper contamination monitoring, radiation measuring instruments should not only be suitable for the purpose of monitoring but also be well calibrated for the quantities to be measured. In the calibration of contamination monitoring instruments, reference activities of known quality need to be used. They are supplied in different forms, such as extended sources, radioactive solutions or radioactive gases. These reference activities must be traceable to national standards or equivalent standards. On the other hand, appropriate calibration methods must be applied for each type of contamination monitoring instrument. This paper describes the concepts of calibration for contamination monitoring instruments, reference sources, methods for determining reference quantities and practical calibration methods for contamination monitoring instruments, including the procedures carried out at the Japan Atomic Energy Research Institute, together with some relevant experimental data. (G.K.)

  19. Application of Instrumented Charpy Method in Characterisation of Materials

    OpenAIRE

    Alar, Željko; Mandić, Davor; Dugorepec, Andrija; Sakoman, Matija

    2015-01-01

    Testing of absorbed impact energy according to the Charpy method is carried out to determine the behaviour of a material under impact load. The instrumented Charpy method allows obtaining the force-displacement curve through the entire test; that curve can be related to the force-displacement curve obtained by the static tensile test. The purpose of this study was to compare the forces obtained by the static tensile test with the forces obtained by the instrumented Charpy method...

  20. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Full Text Available Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can perform consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if the predictors are normalised after the introduction of the adaptive weights, the performance of the adaptive lasso becomes identical to that of the lasso.
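
    The weighting idea is easy to express in code. Below is a sketch of the usual formulation (initial OLS estimates supply the adaptive weights, and rescaling the predictors lets a standard lasso solver act as an adaptive lasso); the tuning values and data are illustrative, not the paper's algorithm:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    # An initial consistent estimate (OLS here) supplies the weights
    beta_init = LinearRegression().fit(X, y).coef_
    w = np.abs(beta_init) ** gamma + 1e-8
    # Rescaling each predictor by its weight and running a plain lasso
    # is equivalent to penalising |beta_j| / w_j (the adaptive penalty)
    lasso = Lasso(alpha=alpha).fit(X * w, y)
    return lasso.coef_ * w

rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, -2.0, 0, 0, 1.5, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(size=n)
print(np.round(adaptive_lasso(X, y), 2))  # zeros recovered on noise variables
```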

  1. Bias and Bias Correction in Multisite Instrumental Variables Analysis of Heterogeneous Mediator Effects

    Science.gov (United States)

    Reardon, Sean F.; Unlu, Fatih; Zhu, Pei; Bloom, Howard S.

    2014-01-01

    We explore the use of instrumental variables (IV) analysis with a multisite randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, an assumption known in the IV literature as the exclusion restriction.…

  2. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease, as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  3. Methods and instrumentation for positron emission tomography

    International Nuclear Information System (INIS)

    Mandelkern, M.A.; Phelps, M.E.

    1988-01-01

    This paper reports on positron emission tomography (PET), a technique for the noninvasive measurement of local tissue concentrations of injected radioactive tracers. Tracer kinetics techniques can be applied to this information to quantify physiologic function in human tissue. In the tracer method, a pharmaceutical is labeled with a radioactive atom. When introduced into the subject, that molecule follows a physiologic pathway. The space- and time-dependent distribution of the radionuclide is obtained via an imaging technique. If the radiopharmaceutical is sufficiently analogous to a natural substrate or other substance of interest, a quantitative image can be translated into a physiologic measurement.

  4. Instrumentation

    International Nuclear Information System (INIS)

    Umminger, K.

    2008-01-01

    A proper measurement of the relevant single and two-phase flow parameters is the basis for the understanding of many complex thermal-hydraulic processes. Reliable instrumentation is therefore necessary for the interaction between analysis and experiment, especially in the field of nuclear safety research, where postulated accident scenarios have to be simulated in experimental facilities and predicted by complex computer code systems. The so-called conventional instrumentation for the measurement of, e.g., pressures, temperatures, pressure differences and single-phase flow velocities is still a solid basis for the investigation and interpretation of many phenomena and especially for the understanding of the overall system behavior. Measurement data from such instrumentation still serve in many cases as a database for thermal-hydraulic system codes. However, some special instrumentation, such as online concentration measurement for boric acid in the water phase or for non-condensables in a steam atmosphere, as well as flow visualization techniques, was further developed and successfully applied during recent years. Concerning the modeling needs for advanced thermal-hydraulic codes, significant advances have been accomplished in the last few years in local instrumentation technology for two-phase flow through the application of new sensor techniques, optical or beam methods and electronic technology. This paper will give insight into the current state of instrumentation technology for safety-related thermal-hydraulic experiments. Advantages and limitations of some measurement processes and systems will be indicated, as well as trends and possibilities for further development. Aspects of instrumentation in operating reactors will also be mentioned.

  5. Analytical techniques for instrument design - Matrix methods

    International Nuclear Information System (INIS)

    Robinson, R.A.

    1997-01-01

    The authors take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalization to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, they discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix. They show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. They will argue that a generalized program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. They also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question

  6. Method to deterministically study photonic nanostructures in different experimental instruments

    NARCIS (Netherlands)

    Husken, B.H.; Woldering, L.A.; Blum, Christian; Tjerkstra, R.W.; Vos, Willem L.

    2009-01-01

    We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim of studying photonic structures. Therefore, a detailed map of the spatial surroundings of the

  7. Method and apparatus for continuous fluid leak monitoring and detection in analytical instruments and instrument systems

    Science.gov (United States)

    Weitz, Karl K [Pasco, WA; Moore, Ronald J [West Richland, WA

    2010-07-13

    A method and device are disclosed that provide for detection of fluid leaks in analytical instruments and instrument systems. The leak detection device includes a collection tube, a fluid absorbing material, and a circuit that electrically couples to an indicator device. When assembled, the leak detection device detects and monitors for fluid leaks, providing a preselected response in conjunction with the indicator device when contacted by a fluid.

  8. Advanced Measuring (Instrumentation) Methods for Nuclear Installations: A Review

    Directory of Open Access Journals (Sweden)

    Wang Qiu-kuan

    2012-01-01

    Full Text Available Nuclear technology has been widely used around the world. Research on measurement in nuclear installations involves many aspects, such as nuclear reactors, the nuclear fuel cycle, safety and security, nuclear accident after-action analysis, and environmental applications. In recent decades, many advanced measuring devices and techniques have been widely applied in nuclear installations. This paper introduces the development of measuring (instrumentation) methods for nuclear installations and the applications of these instruments and methods.

  9. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    Science.gov (United States)

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Building on structural mean models, considerable work has recently been devoted to consistent estimation of the causal relative risk and the causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between the instrumental variable effects on the intermediate exposure and the instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provides valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
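
    For intuition, the concordance idea can be reduced to summary statistics: if alpha_j is the effect of variant j on the exposure and Gamma_j its effect on the outcome, the causal model implies Gamma_j = beta * alpha_j for every j. The sketch below computes an inverse-variance GLS estimate of beta and a residual chi-square concordance test; the numbers are made up, and this simplified version ignores the paper's treatment of weak instruments and binary outcomes:

```python
import numpy as np
from scipy import stats

# Per-variant effects on the exposure (alpha) and on the outcome (Gamma),
# with standard errors of the latter -- all values hypothetical
alpha = np.array([0.30, 0.22, 0.15, 0.40])
Gamma = np.array([0.065, 0.040, 0.035, 0.078])
se_G = np.array([0.010, 0.012, 0.011, 0.009])

# Causal model: Gamma_j = beta * alpha_j. Inverse-variance GLS estimate:
w = 1.0 / se_G ** 2
beta_hat = np.sum(w * alpha * Gamma) / np.sum(w * alpha ** 2)
se_beta = np.sqrt(1.0 / np.sum(w * alpha ** 2))

# Concordance test: residual chi-square with J - 1 degrees of freedom;
# a small p-value signals discordance across instruments
resid = (Gamma - beta_hat * alpha) / se_G
chi2 = np.sum(resid ** 2)
p_value = stats.chi2.sf(chi2, df=len(alpha) - 1)
print(f"beta = {beta_hat:.3f} (SE {se_beta:.3f}), Q = {chi2:.2f}, p = {p_value:.2f}")
```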

  10. Repairing method of color TV with measuring instrument

    International Nuclear Information System (INIS)

    1996-01-01

    This book concentrates on methods of repairing color TVs with measuring instruments. It covers the selection and types of measuring instruments for service work; the application and basic techniques of the oscilloscope and synchroscope; TV components and waveform reading; essential test skills for service personnel; service techniques using an electronic voltmeter; service techniques using a sweep generator and a marker generator; the dot-bar generator and support skills for color TV; and the color bar generator and the application of color circuits.

  11. The variability of piezoelectric measurements. Material and measurement method contributions

    International Nuclear Information System (INIS)

    Stewart, M.; Cain, M.

    2002-01-01

    The variability of piezoelectric materials measurements has been investigated in order to separate the contribution of intrinsic instrumental variability from the contribution of variability in the materials themselves. The work has pinpointed several areas where weaknesses in the measurement methods result in high variability, and it also shows that good correlation between piezoelectric parameters allows simpler measurement methods to be used. The Berlincourt method has been shown to be unreliable when testing thin discs; however, when testing thicker samples there is good correlation between this and other methods. The high-field and low-field permittivities correlate well, so tolerances on low-field measurements would predict high-field performance. In trying to identify microstructural origins of samples that behave differently from others within a batch, no direct evidence was found to suggest that outliers originate from either differences in microstructure or crystallography. Some of the samples chosen as extreme outliers showed pin-holes, probably from electrical breakdown during poling, even though such defects would ordinarily be detrimental to piezoelectric output. (author)

  12. Instrumentation

    International Nuclear Information System (INIS)

    Muehllehner, G.; Colsher, J.G.

    1982-01-01

    This chapter reviews the parameters which are important to positron-imaging instruments. It summarizes the options which various groups have explored in designing tomographs and the methods which have been developed to overcome some of the limitations inherent in the technique as well as in present instruments. The chapter is not presented as a defense of positron imaging versus single-photon or other imaging modality, neither does it contain a description of various existing instruments, but rather stresses their common properties and problems. Design parameters which are considered are resolution, sampling requirements, sensitivity, methods of eliminating scattered radiation, random coincidences and attenuation. The implementation of these parameters is considered, with special reference to sampling, choice of detector material, detector ring diameter and shielding and variations in point spread function. Quantitation problems discussed are normalization, and attenuation and random corrections. Present developments mentioned are noise reduction through time-of-flight-assisted tomography and signal to noise improvements through high intrinsic resolution. Extensive bibliography. (U.K.)

  13. A Mixed Methods Portrait of Urban Instrumental Music Teaching

    Science.gov (United States)

    Fitzpatrick, Kate R.

    2011-01-01

    The purpose of this mixed methods study was to learn about the ways that instrumental music teachers in Chicago navigated the urban landscape. The design of the study most closely resembles Creswell and Plano Clark's (2007) two-part Triangulation Convergence Mixed Methods Design, with the addition of an initial exploratory focus group component.…

  14. Application of Instrumented Charpy Method in Characterisation of Materials

    Directory of Open Access Journals (Sweden)

    Željko Alar

    2015-07-01

    Full Text Available Testing of absorbed impact energy according to the Charpy method is carried out to determine the behaviour of a material under impact load. The instrumented Charpy method allows obtaining the force-displacement curve through the entire test; that curve can be related to the force-displacement curve obtained by the static tensile test. The purpose of this study was to compare the forces obtained by the static tensile test with the forces obtained by the instrumented Charpy method. The experimental part of the work comprises testing of the mechanical properties of S275J0 steel by the static tensile test and the impact test on an instrumented Charpy pendulum.

  15. Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients

    Science.gov (United States)

    Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako

    2012-01-01

    Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…

  16. Finite-sample instrumental variables inference using an asymptotically pivotal statistic

    NARCIS (Netherlands)

    Bekker, P; Kleibergen, F

    2003-01-01

    We consider the K-statistic, Kleibergen's (2002, Econometrica 70, 1781-1803) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Whereas Kleibergen (2002) especially analyzes the asymptotic behavior of the statistic, we focus on finite-sample properties in, a

  17. Finite-sample instrumental variables Inference using an Asymptotically Pivotal Statistic

    NARCIS (Netherlands)

    Bekker, P.; Kleibergen, F.R.

    2001-01-01

    The paper considers the K-statistic, Kleibergen’s (2000) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Compared to the AR statistic, this K-statistic shows improved asymptotic efficiency in terms of degrees of freedom in overidentified models and yet it shares,

  18. Finite-sample instrumental variables inference using an asymptotically pivotal statistic

    NARCIS (Netherlands)

    Bekker, Paul A.; Kleibergen, Frank

    2001-01-01

    The paper considers the K-statistic, Kleibergen’s (2000) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Compared to the AR statistic, this K-statistic shows improved asymptotic efficiency in terms of degrees of freedom in overidentified models and yet it shares,

  19. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well-known ill-posedness, both in recovering the blurring operator and in recovering the true image, makes the problem difficult to handle. We show that, by imposing appropriate constraints on the variables and with well-chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.
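
    The separable structure can be illustrated with the classic variable projection example, exponential fitting, where the amplitudes enter linearly and only the decay time is nonlinear; the same "solve the linear part exactly, optimize only the nonlinear part" idea underlies the paper's treatment of blind deconvolution (there, the image is the linear variable and the blur parameters are the nonlinear ones). This toy sketch omits the constraints and regularization that make the blind problem well behaved:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Separable model: y ~ c0 + c1 * exp(-t / tau). The amplitudes (c0, c1)
# enter linearly and only tau is nonlinear, so the linear part can be
# "projected out" and solved exactly at each trial tau.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 200)
y = 0.5 + 2.0 * np.exp(-t / 1.7) + 0.05 * rng.normal(size=t.size)

def projected_residual(tau):
    A = np.column_stack([np.ones_like(t), np.exp(-t / tau)])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)   # exact linear solve
    return np.sum((A @ c - y) ** 2)

# Variable projection: the outer optimization sees only tau
res = minimize_scalar(projected_residual, bounds=(0.1, 10.0), method="bounded")
print(f"recovered decay time: {res.x:.2f} (true value 1.7)")
```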

  20. Performance evaluation methods and instrumentation for mine ventilation fans

    Institute of Scientific and Technical Information of China (English)

    LI Man; WANG Xue-rong

    2009-01-01

    Ventilation fans are one of the most important pieces of equipment in coal mines. Their performance plays an important role in the safety of staff and in production. Given the actual requirements of coal mine production, we instituted a research project on measurement methods for key performance parameters such as wind pressure, ventilation volume and power. In the end, a virtual instrument for evaluating the performance of mine ventilation fans was developed using a USB interface. The practical performance and analytical results of our experiments show that it is feasible, reliable and effective to use the proposed instrumentation for mine ventilation performance evaluation.

  1. Instrumentation and measurement method for the ATLAS test facility

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Byong Jo; Chu, In Chul; Eu, Dong Jin; Kang, Kyong Ho; Kim, Yeon Sik; Song, Chul Hwa; Baek, Won Pil

    2007-03-15

    An integral effect test loop for pressurized water reactors (PWRs), the ATLAS, has been constructed by the thermal-hydraulic safety research division at KAERI. The ATLAS facility has been designed with a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, the APR1400, which is a Korean evolutionary-type nuclear reactor. A total of 1,300 instruments are installed in the ATLAS test facility. In this report, the instrumentation of the ATLAS test facility and the related measurement methods are introduced.

  2. Essays on Neural Network Sampling Methods and Instrumental Variables

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart)

    2006-01-01

    In recent decades, complex models have been derived for all kinds of economic processes, such as the growth of the Gross Domestic Product (GDP). In some cases, these models require advanced methods to compute probabilities, for example the probability of an approaching

  3. Risk assessment of groundwater level variability using variable Kriging methods

    Science.gov (United States)

    Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2015-04-01

    Assessment of the spatial variability of the water table level in aquifers provides useful information for optimal groundwater management. This information becomes more important in basins where the water table level has fallen significantly. The spatial variability of the water table level in this work is estimated based on hydraulic heads measured during the wet period of the hydrological year 2007-2008, in a sparsely monitored basin in Crete, Greece, which is of high socioeconomic and agricultural interest. Three Kriging-based methodologies are elaborated in the Matlab environment to estimate the spatial variability of the water table level in the basin. The first methodology is based on the Ordinary Kriging approach, the second involves auxiliary information from a Digital Elevation Model in terms of Residual Kriging, and the third methodology calculates, by means of Indicator Kriging, the probability of the groundwater level falling below a predefined minimum value that could cause significant problems in groundwater resource availability. The Box-Cox methodology is applied to normalize both the data and the residuals for improved prediction results. In addition, various classical variogram models are applied to determine the spatial dependence of the measurements. The Matérn model proves to be the optimal one, which in combination with the Kriging methodologies provides the most accurate cross-validation estimations. Groundwater level and probability maps are constructed to examine the spatial variability of the groundwater level in the basin and the risk that certain locations exhibit with regard to a predefined minimum value that has been set for the sustainability of the basin's groundwater resources. Acknowledgement The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the
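
    A minimal sketch of the first (Ordinary Kriging) step on a synthetic data set; the exponential variogram used here is the nu = 1/2 special case of the Matérn family the study found optimal, and all coordinates and parameter values are invented:

```python
import numpy as np

def ordinary_kriging(coords, values, targets, sill=1.0, vrange=500.0):
    """Minimal ordinary kriging with an exponential variogram
    gamma(h) = sill * (1 - exp(-h / vrange))."""
    gamma = lambda h: sill * (1.0 - np.exp(-h / vrange))
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Kriging system with a Lagrange multiplier enforcing sum(w) = 1
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    preds = []
    for p in targets:
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(coords - p, axis=1))
        w = np.linalg.solve(A, b)
        preds.append(w[:n] @ values)
    return np.array(preds)

# Invented water-table observations (x, y in metres; head in metres)
rng = np.random.default_rng(5)
coords = rng.uniform(0, 2000, size=(30, 2))
heads = 50 + 0.01 * coords[:, 0] + rng.normal(scale=0.5, size=30)
print(ordinary_kriging(coords, heads, np.array([[500.0, 500.0],
                                                [1500.0, 800.0]])))
```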

  4. The contextual effects of social capital on health: a cross-national instrumental variable analysis.

    Science.gov (United States)

    Kim, Daniel; Baum, Christopher F; Ganz, Michael L; Subramanian, S V; Kawachi, Ichiro

    2011-12-01

    Past research on the associations between area-level/contextual social capital and health has produced conflicting evidence. However, interpreting this rapidly growing literature is difficult because estimates using conventional regression are prone to major sources of bias including residual confounding and reverse causation. Instrumental variable (IV) analysis can reduce such bias. Using data on up to 167,344 adults in 64 nations in the European and World Values Surveys and applying IV and ordinary least squares (OLS) regression, we estimated the contextual effects of country-level social trust on individual self-rated health. We further explored whether these associations varied by gender and individual levels of trust. Using OLS regression, we found higher average country-level trust to be associated with better self-rated health in both women and men. Instrumental variable analysis yielded qualitatively similar results, although the estimates were more than double in size in both sexes when country population density and corruption were used as instruments. The estimated health effects of raising the percentage of a country's population that trusts others by 10 percentage points were at least as large as the estimated health effects of an individual developing trust in others. These findings were robust to alternative model specifications and instruments. Conventional regression and to a lesser extent IV analysis suggested that these associations are more salient in women and in women reporting social trust. In a large cross-national study, our findings, including those using instrumental variables, support the presence of beneficial effects of higher country-level trust on self-rated health. Previous findings for contextual social capital using traditional regression may have underestimated the true associations. Given the close linkages between self-rated health and all-cause mortality, the public health gains from raising social capital within and across

  5. Institution, Financial Sector, and Economic Growth: Use The Institutions As An Instrument Variable

    OpenAIRE

    Albertus Girik Allo

    2016-01-01

    Institutions have been found to play an indirect role in economic growth. This paper aims to evaluate whether the quality of institutions matters for economic growth. Applying institutions as an instrumental variable for Foreign Direct Investment (FDI), we find that the quality of institutions significantly influences economic growth. This study applies two data periods, namely 1985-2013 and 2000-2013, available online from the World Bank (WB). The first data set, 1985-2013, is used to estimate the role of fin...

  6. The XRF spectrometer and the selection of analysis conditions (instrumental variables)

    International Nuclear Information System (INIS)

    Willis, J.P.

    2002-01-01

    Full text: This presentation will begin with a brief discussion of EDXRF and flat- and curved-crystal WDXRF spectrometers, contrasting the major differences between the three types. The remainder of the presentation will contain a detailed overview of the choice and settings of the many instrumental variables contained in a modern WDXRF spectrometer, and will discuss critically the choices facing the analyst in setting up a WDXRF spectrometer for different elements and applications. In particular it will discuss the choice of tube target (when a choice is possible), the kV and mA settings, tube filters, collimator masks, collimators, analyzing crystals, secondary collimators, detectors, pulse height selection, X-ray path medium (air, nitrogen, vacuum or helium), counting times for peak and background positions and their effect on counting statistics and lower limit of detection (LLD). The use of Figure of Merit (FOM) calculations to objectively choose the best combination of instrumental variables also will be discussed. This presentation will be followed by a shorter session on a subsequent day entitled - A Selection of XRF Conditions - Practical Session, where participants will be given the opportunity to discuss in groups the selection of the best instrumental variables for three very diverse applications. Copyright (2002) Australian X-ray Analytical Association Inc

  7. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models

    DEFF Research Database (Denmark)

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J.

    2017-01-01

    The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elem...

  8. Hybrid Instruments and the Indirect Credit Method - Does it work?

    OpenAIRE

    Wiedermann-Ondrej, Nadine

    2007-01-01

    This paper analyses the possibility of double non-taxation of hybrid instruments in cross border transactions where the country of the investor has implemented the indirect credit method for mitigation or elimination of double taxation. From an isolated perspective a double non-taxation cannot be obtained because typically no taxes are paid in the foreign country due to the classification as debt and therefore even in the case of a classification as a dividend in the country of the investor n...

  9. Instrumentation

    International Nuclear Information System (INIS)

    Prieur, G.; Nadi, M.; Hedjiedj, A.; Weber, S.

    1995-01-01

    This second chapter on instrumentation gives some general considerations on the history and classification of instrumentation, and two specific states of the art. The first concerns NMR (block diagram of the instrumentation chain, with details on the magnets, gradients, probes and reception unit). The second concerns precision instrumentation (the optical fibre gyrometer and the scanning electron microscope) and its data processing tools (programmability, the VXI standard and its history). The chapter ends with future trends in smart sensors and Field Emission Displays. (D.L.). Refs., figs

  10. Collective variables method in relativistic theory

    International Nuclear Information System (INIS)

    Shurgaya, A.V.

    1983-01-01

    The classical theory of an N-component field is considered. The method of collective variables, accurately accounting for the conservation laws that follow from invariance under the homogeneous Lorentz group, is developed within the framework of generalized Hamiltonian dynamics. Hyperboloids are invariant surfaces under the homogeneous Lorentz group. Proceeding from this, a field transformation is introduced and the surface is parametrized so that the generators of the homogeneous Lorentz group do not include components dependent on the interaction, and their effect on the field function reduces to a geometrical one. The interaction is completely included in the expression for the energy-momentum vector of the system, which is a dynamical quantity. A gauge is chosen in which the parameters of four-dimensional translations and their canonically conjugate momenta are non-physical, so that the phase space is determined by the parameters of the homogeneous Lorentz group, the field function and their canonically conjugate momenta. In this way the conservation laws following from the requirement of Lorentz invariance are accurately taken into account.

  11. Variable aperture-based ptychographical iterative engine method.

    Science.gov (United States)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam, the illumination on the sample is changed step by step and the corresponding diffraction patterns are recorded sequentially; both the sample phase and amplitude can then be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and since the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various scientific fields. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  12. Instrumentation

    International Nuclear Information System (INIS)

    Decreton, M.

    2000-01-01

    SCK-CEN's research and development programme on instrumentation aims at evaluating the potentials of new instrumentation technologies under the severe constraints of a nuclear application. It focuses on the tolerance of sensors to high radiation doses, including optical fibre sensors, and on the related intelligent data processing needed to cope with the nuclear constraints. Main achievements in these domains in 1999 are summarised

  13. Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Decreton, M

    2001-04-01

    SCK-CEN's research and development programme on instrumentation involves the assessment and the development of sensitive measurement systems used within a radiation environment. Particular emphasis is on the assessment of optical fibre components and their adaptability to radiation environments. The evaluation of ageing processes of instrumentation in fission plants, the development of specific data evaluation strategies to compensate for ageing induced degradation of sensors and cable performance form part of these activities. In 2000, particular emphasis was on in-core reactor instrumentation applied to fusion, accelerator driven and water-cooled fission reactors. This involved the development of high performance instrumentation for irradiation experiments in the BR2 reactor in support of new instrumentation needs for MYRRHA, and for diagnostic systems for the ITER reactor.

  14. Instrumentation

    International Nuclear Information System (INIS)

    Decreton, M.

    2001-01-01

    SCK-CEN's research and development programme on instrumentation involves the assessment and the development of sensitive measurement systems used within a radiation environment. Particular emphasis is on the assessment of optical fibre components and their adaptability to radiation environments. The evaluation of ageing processes of instrumentation in fission plants, the development of specific data evaluation strategies to compensate for ageing induced degradation of sensors and cable performance form part of these activities. In 2000, particular emphasis was on in-core reactor instrumentation applied to fusion, accelerator driven and water-cooled fission reactors. This involved the development of high performance instrumentation for irradiation experiments in the BR2 reactor in support of new instrumentation needs for MYRRHA, and for diagnostic systems for the ITER reactor

  15. DATA COLLECTION METHOD FOR PEDESTRIAN MOVEMENT VARIABLES

    Directory of Open Access Journals (Sweden)

    Hajime Inamura

    2000-01-01

    Full Text Available The need for tools for the design and evaluation of pedestrian areas, subway stations, entrance halls, shopping malls, escape routes, stadiums, etc. leads to the necessity of a pedestrian model. One such approach is the microscopic pedestrian simulation model. To develop and calibrate a microscopic pedestrian simulation model, a number of variables need to be considered. As the first step of model development, data were collected using video, and the coordinates of the head paths were extracted through image processing. A number of variables can be gathered to describe the behavior of pedestrians from different points of view. This paper describes how to obtain, from video recording and simple image processing, variables that can represent the movement of pedestrians.

  16. The Effect of Birth Weight on Academic Performance: Instrumental Variable Analysis.

    Science.gov (United States)

    Lin, Shi Lin; Leung, Gabriel Matthew; Schooling, C Mary

    2017-05-01

    Observationally, lower birth weight is usually associated with poorer academic performance; whether this association is causal or the result of confounding is unknown. To investigate this question, we obtained an effect estimate, which can have a causal interpretation under specific assumptions, of birth weight on educational attainment using instrumental variable analysis based on single nucleotide polymorphisms determining birth weight combined with results from the Social Science Genetic Association Consortium study of 126,559 Caucasians. We similarly obtained an estimate of the effect of birth weight on academic performance in 4,067 adolescents from Hong Kong's (Chinese) Children of 1997 birth cohort (1997-2016), using twin status as an instrumental variable. Birth weight was not associated with years of schooling (per 100-g increase in birth weight, -0.006 years, 95% confidence interval (CI): -0.02, 0.01) or college completion (odds ratio = 1.00, 95% CI: 0.96, 1.03). Birth weight was also unrelated to academic performance in adolescents (per 100-g increase in birth weight, -0.004 grade, 95% CI: -0.04, 0.04) using instrumental variable analysis, although conventional regression gave a small positive association (0.02 higher grade, 95% CI: 0.01, 0.03). Observed associations of birth weight with academic performance may not be causal, suggesting that interventions should focus on the contextual factors generating this correlation. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
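
    The logic of the genetic instrumental variable analysis can be seen in a small simulation: when an unmeasured confounder drives both birth weight and schooling, conventional regression finds an association while the instrument-based Wald ratio recovers the (here null) causal effect. All coefficients below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50000
g = rng.binomial(2, 0.3, size=n)          # genetic instrument (allele count)
u = rng.normal(size=n)                    # unmeasured confounder
birth_weight = 3.2 + 0.10 * g + 0.20 * u + rng.normal(scale=0.4, size=n)
# Schooling depends on the confounder but NOT causally on birth weight
schooling = 12.0 + 0.0 * birth_weight + 0.50 * u + rng.normal(size=n)

# Wald (ratio) IV estimator: instrument-outcome association divided by
# instrument-exposure association
beta_iv = np.cov(g, schooling)[0, 1] / np.cov(g, birth_weight)[0, 1]
# Conventional regression is confounded upward through u
beta_ols = np.cov(birth_weight, schooling)[0, 1] / np.var(birth_weight)
print(f"IV estimate: {beta_iv:.2f} (true effect 0); OLS: {beta_ols:.2f}")
```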

  17. Instrumentation

    International Nuclear Information System (INIS)

    Decreton, M.

    2002-01-01

    SCK-CEN's R and D programme on instrumentation involves the development of advanced instrumentation systems for nuclear applications as well as the assessment of the performance of these instruments in a radiation environment. Particular emphasis is on the use of optical fibres as umbilical links of a remote handling unit for use during maintenance of a fusion reactor; studies on the radiation hardening of plasma diagnostic systems; investigations on new instrumentation for the future MYRRHA accelerator driven system; space applications related to radiation-hardened lenses; the development of new approaches for dose, temperature and strain measurements; the assessment of radiation-hardened sensors and motors for remote handling tasks; and studies of dose measurement systems including the use of optical fibres. Progress and achievements in these areas for 2001 are described.

  18. Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Decreton, M

    2002-04-01

    SCK-CEN's R and D programme on instrumentation involves the development of advanced instrumentation systems for nuclear applications as well as the assessment of the performance of these instruments in a radiation environment. Particular emphasis is on the use of optical fibres as umbilical links of a remote handling unit for use during maintenance of a fusion reactor; studies on the radiation hardening of plasma diagnostic systems; investigations on new instrumentation for the future MYRRHA accelerator driven system; space applications related to radiation-hardened lenses; the development of new approaches for dose, temperature and strain measurements; the assessment of radiation-hardened sensors and motors for remote handling tasks; and studies of dose measurement systems including the use of optical fibres. Progress and achievements in these areas for 2001 are described.

  19. Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Decreton, M

    2000-07-01

    SCK-CEN's research and development programme on instrumentation aims at evaluating the potentials of new instrumentation technologies under the severe constraints of a nuclear application. It focuses on the tolerance of sensors to high radiation doses, including optical fibre sensors, and on the related intelligent data processing needed to cope with the nuclear constraints. Main achievements in these domains in 1999 are summarised.

  20. Climate Informed Economic Instruments to Enhance Urban Water Supply Resilience to Hydroclimatological Variability and Change

    Science.gov (United States)

    Brown, C.; Carriquiry, M.; Souza Filho, F. A.

    2006-12-01

    Hydroclimatological variability presents acute challenges to urban water supply providers. The impact is often most severe in developing nations where hydrologic and climate variability can be very high, water demand is unmet and increasing, and the financial resources to mitigate the social effects of that variability are limited. Furthermore, existing urban water systems face a reduced solution space, constrained by competing and conflicting interests, such as irrigation demand, recreation and hydropower production, and new (relative to system design) demands to satisfy environmental flow requirements. These constraints magnify the impacts of hydroclimatic variability and increase the vulnerability of urban areas to climate change. The high economic and social costs of structural responses to hydrologic variability, such as groundwater utilization and the construction or expansion of dams, create a need for innovative alternatives. Advances in hydrologic and climate forecasting, and the increasing sophistication and acceptance of incentive-based mechanisms for achieving economically efficient water allocation offer potential for improving the resilience of existing water systems to the challenge of variable supply. This presentation will explore the performance of a system of climate informed economic instruments designed to facilitate the reduction of hydroclimatologic variability-induced impacts on water-sensitive stakeholders. The system is comprised of bulk water option contracts between urban water suppliers and agricultural users and insurance indexed on reservoir inflows designed to cover the financial needs of the water supplier in situations where the option is likely to be exercised. Contract and insurance parameters are linked to forecasts and the evolution of seasonal precipitation and streamflow and designed for financial and political viability. A simulation of system performance is presented based on ongoing work in Metro Manila, Philippines. The

  1. Extending the frontiers of mass spectrometric instrumentation and methods

    Energy Technology Data Exchange (ETDEWEB)

    Schieffer, Gregg Martin [Iowa State Univ., Ames, IA (United States)

    2010-01-01

    The focus of this dissertation is two-fold: developing novel analysis methods using mass spectrometry, and the implementation and characterization of novel ion mobility mass spectrometry instrumentation. The new instrument combines an ion trap for ion/ion reactions with an ion mobility cell. The long-term goal of this instrumentation is to use ion/ion reactions to probe the structure of gas-phase biomolecule ions. The three-ion-source - ion trap - ion mobility - qTOF mass spectrometer (IT-IM-TOF MS) instrument is described. The analysis of degradation products in coal (Chapter 2) and the imaging of plant metabolites (Appendix III) fall under the methods-development category. These projects use existing commercial instrumentation (a JEOL AccuTOF MS and a Thermo Finnigan LCQ IT, respectively) for the mass analysis of the degraded coal products and the plant metabolites. The coal degradation paper discusses the use of the DART ion source for fast and easy sample analysis. Sample preparation consisted of a simple 50-fold dilution of the soluble coal products in water and placing the liquid in front of the heated gas stream. This is the first time the DART ion source has been used for the analysis of coal. Steven Raders, under the guidance of John Verkade, conceived the coal degradation projects; Raders performed the degradation reactions, worked up the products, and sent them to me. Gregg Schieffer developed the method and wrote the paper demonstrating the use of the DART ion source for fast and easy sample analysis. The plant metabolite imaging project extends the use of colloidal graphite as a sample coating for atmospheric-pressure LDI. DC Perdian and I worked closely together on this project: Perdian focused on building the LDI setup, whereas Schieffer focused on the MSn analysis of the metabolites. Both of us took the data featured in the paper. Perdian was the primary writer of the paper and used it as a

  2. New methods of magnet-based instrumentation for NOTES.

    Science.gov (United States)

    Magdeburg, Richard; Hauth, Daniel; Kaehler, Georg

    2013-12-01

    Laparoscopic surgery has displaced open surgery as the standard of care for many clinical conditions. NOTES has been described as the next surgical frontier, with the objective of incision-free abdominal surgery. The principal challenge of NOTES procedures is the loss of triangulation and instrument rigidity, which are fundamental concepts of laparoscopic surgery. Overcoming these problems necessitates the development of new instrumentation. Material and methods: We aimed to assess the use of a very simple combination of internal and external magnets that might allow the vigorous multiaxial traction/counter-traction required in NOTES procedures. The magnet retraction system consisted of an external magnetic assembly and either small internal magnets attached by endoscopic clips to the designated tissue (magnet-clip approach) or an endoscopic grasping forceps in a magnetic deflector roll (magnet-trocar approach). We compared both methods regarding precision, time and efficacy by performing transgastric partial uterus resections, with better results for the magnet-trocar approach. This proof-of-principle animal study showed that the combination of external and internal magnets generates sufficient coupling forces at clinically relevant abdominal wall thicknesses, making them suitable for use and evaluation in NOTES procedures, and provides the vigorous multiaxial traction/counter-traction required in the absence of additional abdominal trocars.

  3. Impact of instrumental response on observed ozonesonde profiles: First-order estimates and implications for measures of variability

    Science.gov (United States)

    Clifton, G. T.; Merrill, J. T.; Johnson, B. J.; Oltmans, S. J.

    2009-12-01

    Ozonesondes provide information on the ozone distribution up to the middle stratosphere. Ozone profiles often feature layers, with vertically discrete maxima and minima in the mixing ratio. Layers are especially common in the UT/LS regions and originate from wave breaking, shearing and other transport processes. ECC sondes, however, have a moderate response time to significant changes in ozone. A sonde can ascend over 350 meters before it responds fully to a step change in ozone. This results in an overestimate of the altitude assigned to layers and an underestimate of the underlying variability in the amount of ozone. An estimate of the response time is made for each instrument during the preparation for flight, but the profile data are typically not processed to account for the response. Here we present a method of categorizing the response time of ECC instruments and an analysis of a low-pass filter approximation to the effects on profile data. Exponential functions were fit to the step-up and step-down responses using laboratory data. The resulting response time estimates were consistent with results from standard procedures, with the up-step response time exceeding the down-step value somewhat. A single-pole Butterworth filter that approximates the instrumental effect was used with synthetic layered profiles to make first-order estimates of the impact of the finite response time. Using a layer analysis program previously applied to observed profiles we find that instrumental effects can attenuate ozone variability by 20-45% in individual layers, but that the vertical offset in layer altitudes is moderate, up to about 150 meters. We will present results obtained using this approach, coupled with data on the distribution of layer characteristics found using the layer analysis procedure on profiles from Narragansett, Rhode Island and other US sites to quantify the impact on overall variability estimates given ambient distributions of layer occurrence, thickness
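
    The low-pass filter approximation described here can be emulated on a synthetic profile to see the attenuation and upward shift of a layer. A minimal sketch, assuming an ascent rate, response time and altitude grid that are illustrative only (the paper derives the actual response times from laboratory step tests):

```python
import numpy as np
from scipy.signal import butter, lfilter

# Assumed values: 5 m/s ascent rate, 25 s e-folding response, 10 m grid.
ascent, tau, dz = 5.0, 25.0, 10.0
dt = dz / ascent                           # seconds between altitude samples
fc = 1.0 / (2.0 * np.pi * tau)             # -3 dB point of a first-order lag
b, a = butter(1, fc / (0.5 / dt))          # single-pole Butterworth filter

z = np.arange(0.0, 3000.0, dz)             # altitude grid, m
truth = 50.0 + 40.0 * np.exp(-((z - 1500.0) / 150.0) ** 2)  # synthetic layer
seen = lfilter(b, a, truth)                # sonde-like smoothed profile
# (start-up transient near z = 0 is ignored in this toy example)
print(f"peak shifted up by {dz * (seen.argmax() - truth.argmax()):.0f} m, "
      f"amplitude ratio {(seen.max() - 50.0) / 40.0:.2f}")
```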

  4. Methods and instrumentation for quantitative microchip capillary electrophoresis

    NARCIS (Netherlands)

    Revermann, T.

    2007-01-01

    The development of novel instrumentation and analytical methodology for quantitative microchip capillary electrophoresis (MCE) is described in this thesis. Demanding only small quantities of reagents and samples, microfluidic instrumentation is highly advantageous. Fast separations at high voltages

  5. Instruments

    International Nuclear Information System (INIS)

    Buehrer, W.

    1996-01-01

    The present paper provides basic knowledge of the most commonly used experimental techniques. We discuss the principles and concepts necessary to understand what one is doing when performing an experiment on a certain instrument. (author) 29 figs., 1 tab., refs

  6. Variable threshold method for ECG R-peak detection.

    Science.gov (United States)

    Kew, Hsein-Ping; Jeong, Do-Un

    2011-10-01

    In this paper, a wearable belt-type ECG electrode, worn around the chest to measure the ECG in real time, is produced in order to minimize the inconvenience of wearing it. The ECG signal is detected using a potential-measuring instrument system. The measured ECG signal is transmitted to a personal computer via an ultra-low-power wireless data communications unit using a Zigbee-compatible wireless sensor node. ECG signals carry a great deal of clinical information for a cardiologist, especially through the R-peaks. R-peak detection generally uses a fixed threshold value, which produces errors in peak detection when the baseline changes due to motion artifacts or when the signal size changes. A preprocessing stage consisting of differentiation and a Hilbert transform is used as the signal preprocessing algorithm. Thereafter, a variable threshold method is used to detect the R-peak, which is more accurate and efficient than the fixed-threshold method. R-peak detection using the MIT-BIH databases and long-term real-time ECG recordings is performed in this research in order to evaluate the performance.
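
    To make the adaptive-threshold idea concrete, the following sketch detects R-peaks from a differentiated, Hilbert-transformed envelope using a threshold that tracks the local signal amplitude. This is a generic illustration under assumed inputs (`ecg` samples, sampling rate `fs`), not the authors' implementation; the window length, scale factor and refractory period are arbitrary choices:

```python
import numpy as np
from scipy.signal import hilbert

def detect_r_peaks(ecg, fs, win_s=2.0, k=0.6):
    """Variable-threshold R-peak detection on a Hilbert envelope."""
    diff = np.diff(ecg, prepend=ecg[0])        # emphasize steep QRS slopes
    env = np.abs(hilbert(diff))                # smooth positive envelope
    win = int(win_s * fs)
    refractory = int(0.25 * fs)                # ignore peaks within 250 ms
    peaks, last = [], -refractory
    for i in range(1, len(env) - 1):
        # threshold follows the recent local maximum instead of being fixed
        thr = k * env[max(0, i - win):i + 1].max()
        if env[i] > thr and env[i - 1] <= env[i] >= env[i + 1]:
            if i - last > refractory:
                peaks.append(i)
                last = i
    return np.array(peaks)
```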

  7. Emittance measurements by variable quadrupole method

    International Nuclear Information System (INIS)

    Toprek, D.

    2005-01-01

    The beam emittance is a measure of both the beam size and the beam divergence; we cannot measure its value directly. If the beam size is measured at different locations or under different focusing conditions, such that different parts of the phase-space ellipse are probed by the beam size monitor, the beam emittance can be determined. An emittance measurement can be performed by different methods; here we consider the varying-quadrupole-setting method.
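
    As an illustration of the varying-quadrupole-setting method, the sketch below fits the measured squared beam size at a downstream screen to the three unknown beam-matrix elements and extracts the rms emittance. It assumes a thin-lens quadrupole followed by a pure drift; the arrays `k_int` (integrated quad strengths) and `sigma_meas` (measured rms sizes) and the drift length `d` are hypothetical inputs not given in the abstract:

```python
import numpy as np

def quad_scan_emittance(k_int, sigma_meas, d):
    """Fit sigma_screen^2(k) and return the rms emittance (thin-lens model)."""
    r11 = 1.0 - d * k_int                       # screen-plane matrix elements
    r12 = np.full_like(k_int, d, dtype=float)
    # sigma_screen^2 = r11^2*s11 + 2*r11*r12*s12 + r12^2*s22, linear in s_ij
    A = np.column_stack([r11**2, 2.0 * r11 * r12, r12**2])
    s11, s12, s22 = np.linalg.lstsq(A, sigma_meas**2, rcond=None)[0]
    return np.sqrt(s11 * s22 - s12**2)          # rms emittance at the quad
```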

  8. Social interactions and college enrollment: A combined school fixed effects/instrumental variables approach.

    Science.gov (United States)

    Fletcher, Jason M

    2015-07-01

    This paper provides some of the first evidence of peer effects in college enrollment decisions. There are several empirical challenges in assessing the influences of peers in this context, including the endogeneity of high school, shared group-level unobservables, and identifying policy-relevant parameters of social interactions models. This paper addresses these issues by using an instrumental variables/fixed effects approach that compares students in the same school but different grade-levels who are thus exposed to different sets of classmates. In particular, plausibly exogenous variation in peers' parents' college expectations are used as an instrument for peers' college choices. Preferred specifications indicate that increasing a student's exposure to college-going peers by ten percentage points is predicted to raise the student's probability of enrolling in college by 4 percentage points. This effect is roughly half the magnitude of growing up in a household with married parents (vs. an unmarried household). Copyright © 2015 Elsevier Inc. All rights reserved.
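
    The estimation strategy can be sketched as generic two-stage least squares: regress the endogenous peer college-going rate on the instrument (peers' parents' expectations) plus exogenous controls, then regress the outcome on the fitted values. This is a minimal illustration of the IV logic, not the paper's code; in the actual design, school-by-grade fixed effects would enter as additional exogenous columns:

```python
import numpy as np

def two_sls(y, x_endog, z, exog=None):
    """Return the 2SLS coefficient on x_endog, instrumented by z."""
    n = len(y)
    W = np.ones((n, 1)) if exog is None else np.column_stack([np.ones(n), exog])
    Z = np.column_stack([W, z])                       # instruments + controls
    # First stage: project the endogenous regressor on the instrument set.
    x_hat = Z @ np.linalg.lstsq(Z, x_endog, rcond=None)[0]
    # Second stage: regress the outcome on fitted values plus controls.
    X2 = np.column_stack([W, x_hat])
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]
    return beta[-1]                                   # coefficient on x_endog
```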

  9. Method to deterministically study photonic nanostructures in different experimental instruments.

    Science.gov (United States)

    Husken, B H; Woldering, L A; Blum, C; Vos, W L

    2009-01-01

    We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. To this end, a detailed map of the spatial surroundings of the nanostructure is made during the fabrication of the structure. These maps are made using a series of micrographs with successively decreasing magnifications. The micrographs reveal intrinsic and characteristic geometric features that can subsequently be used in different setups to act as markers. As an illustration, we probe surface cavities with radii of 65 nm on a silica opal photonic crystal with various setups: a focused ion beam workstation; a scanning electron microscope (SEM); a wide-field optical microscope and a confocal microscope. We use cross-correlation techniques to recover a small area imaged with the SEM in a large area photographed with the optical microscope, which provides a possible avenue to automatic searching. We show how both structural and optical reflectivity data can be obtained from one and the same nanostructure. Since our approach does not use artificial grids or markers, it is of particular interest for samples whose structure is not known a priori, like samples created solely by self-assembly. In addition, our method is not restricted to conducting samples.
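
    The cross-correlation recovery step can be illustrated with normalized template matching: slide the small SEM field of view over the large optical micrograph and take the offset with the highest correlation. A minimal sketch, assuming both images are available as grayscale arrays; it uses scikit-image's `match_template`, which is one standard implementation, not necessarily the authors':

```python
import numpy as np
from skimage.feature import match_template

def locate(template, large_image):
    """Find the offset of `template` inside `large_image`."""
    # normalized cross-correlation coefficient at each candidate offset
    ncc = match_template(large_image, template)
    row, col = np.unravel_index(np.argmax(ncc), ncc.shape)
    return row, col, ncc.max()     # top-left corner of best match + score
```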

  10. Instrumental performance of an etude after three methods of practice.

    Science.gov (United States)

    Vanden Ark, S

    1997-12-01

    For 80 fifth-grade students three practice conditions (mental, mental with physical simulation, and physical with singing) produced significant mean differences in instrumental performance of an etude. No significant differences were found for traditional, physical practice.

  11. Intercomparison of two comparative reactivity method instruments in the Mediterranean basin during summer 2013

    Science.gov (United States)

    Zannoni, N.; Dusanter, S.; Gros, V.; Sarda Esteve, R.; Michoud, V.; Sinha, V.; Locoge, N.; Bonsang, B.

    2015-09-01

    The hydroxyl radical (OH) plays a key role in the atmosphere, as it initiates most of the oxidation processes of volatile organic compounds (VOCs) and can ultimately lead to the formation of ozone and secondary organic aerosols (SOAs). There are still uncertainties associated with the OH budget assessed using current models of atmospheric chemistry, and direct measurements of OH sources and sinks have proved to be valuable tools to improve our understanding of OH chemistry. The total first-order loss rate of OH, or total OH reactivity, can be directly measured using three different methods: the total OH loss rate measurement, the laser-induced pump-and-probe technique, and the comparative reactivity method. Observations of total OH reactivity are usually coupled to individual measurements of reactive compounds in the gas phase, which are used to calculate the OH reactivity. Studies using the three methods have highlighted that a significant fraction of OH reactivity is often not explained by the individually measured reactive compounds and could be associated with unmeasured or unknown chemical species. Therefore, accurate and reproducible measurements of OH reactivity are required. The comparative reactivity method (CRM) has been demonstrated to be an advantageous technique with an extensive range of applications, and for this reason it has been adopted by several research groups since its development. However, this method also requires careful corrections to derive ambient OH reactivity. Herein we present an intercomparison exercise of two CRM instruments, CRM-LSCE (Laboratoire des Sciences du Climat et de l'Environnement) and CRM-MD (Mines Douai), conducted during July 2013 at the Mediterranean site of Ersa, Cape Corsica, France. The intercomparison exercise included tests to assess the corrections needed by the two instruments to process the raw data sets, as well as OH reactivity observations. The observations were divided into three parts: 2 days of plant

  12. Fasting Glucose and the Risk of Depressive Symptoms: Instrumental-Variable Regression in the Cardiovascular Risk in Young Finns Study.

    Science.gov (United States)

    Wesołowska, Karolina; Elovainio, Marko; Hintsa, Taina; Jokela, Markus; Pulkki-Råback, Laura; Pitkänen, Niina; Lipsanen, Jari; Tukiainen, Janne; Lyytikäinen, Leo-Pekka; Lehtimäki, Terho; Juonala, Markus; Raitakari, Olli; Keltikangas-Järvinen, Liisa

    2017-12-01

    Type 2 diabetes (T2D) has been associated with depressive symptoms, but the causal direction of this association and the underlying mechanisms, such as increased glucose levels, remain unclear. We used instrumental-variable regression with a genetic instrument (Mendelian randomization) to examine a causal role of increased glucose concentrations in the development of depressive symptoms. Data were from the population-based Cardiovascular Risk in Young Finns Study (n = 1217). Depressive symptoms were assessed in 2012 using a modified Beck Depression Inventory (BDI-I). Fasting glucose was measured concurrently with depressive symptoms. A genetic risk score for fasting glucose (with 35 single nucleotide polymorphisms) was used as an instrumental variable for glucose. Glucose was not associated with depressive symptoms in the standard linear regression (B = -0.04, 95% CI [-0.12, 0.04], p = .34), but the instrumental-variable regression showed an inverse association between glucose and depressive symptoms (B = -0.43, 95% CI [-0.79, -0.07], p = .020). The difference between the estimates of the standard linear regression and the instrumental-variable regression was significant (p = .026). CONCLUSION: Our results suggest that the association between T2D and depressive symptoms is unlikely to be caused by increased glucose concentrations. It seems possible that T2D might be linked to depressive symptoms due to low glucose levels.
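
    With a single instrument such as a genetic risk score, instrumental-variable regression reduces to the ratio (Wald) estimator, which coincides with 2SLS in this case. The sketch below contrasts it with the ordinary least-squares slope, mirroring the paper's comparison of the two estimates; `y`, `x` and `g` are placeholder arrays, not the study data:

```python
import numpy as np

def wald_iv(y, x, g):
    """Single-instrument IV estimate: reduced form over first stage."""
    beta_gy = np.cov(g, y)[0, 1] / np.var(g, ddof=1)   # instrument -> outcome
    beta_gx = np.cov(g, x)[0, 1] / np.var(g, ddof=1)   # instrument -> exposure
    return beta_gy / beta_gx

def ols_slope(y, x):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# The paper's test contrasts ols_slope(y, x) with wald_iv(y, x, g); a large
# gap suggests confounding or reverse causation in the ordinary estimate.
```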

  13. Probabilistic Power Flow Method Considering Continuous and Discrete Variables

    Directory of Open Access Journals (Sweden)

    Xuexia Zhang

    2017-04-01

    Full Text Available This paper proposes a probabilistic power flow (PPF) method considering continuous and discrete variables (continuous and discrete power flow, CDPF) for power systems. The proposed method—based on the cumulant method (CM) and multiple deterministic power flow (MDPF) calculations—can deal with continuous variables such as wind power generation (WPG) and loads, and discrete variables such as fuel cell generation (FCG). In this paper, continuous variables follow a normal distribution (loads) or a non-normal distribution (WPG), and discrete variables follow a binomial distribution (FCG). Through testing on IEEE 14-bus and IEEE 118-bus power systems, the proposed method (CDPF) has better accuracy compared with the CM, and higher efficiency compared with the Monte Carlo simulation method (MCSM).
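
    The mixed continuous/discrete uncertainty model can be illustrated by brute force: draw normal loads, non-normal wind generation and binomially distributed fuel-cell units, and summarize the resulting net-injection distribution. This Monte Carlo sketch (with made-up parameters and a crude Weibull stand-in for WPG) shows only the modelling assumptions; the paper's CDPF instead propagates cumulants through deterministic power flow calculations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
load = rng.normal(50.0, 5.0, n)          # MW, normally distributed load
wind = 20.0 * rng.weibull(2.0, n)        # MW, crude non-normal WPG stand-in
fcg = 2.0 * rng.binomial(10, 0.7, n)     # MW, ten discrete 2-MW fuel cells
net = wind + fcg - load                  # net injection at a bus
print(f"mean={net.mean():.1f} MW  std={net.std():.1f} MW  "
      f"5%/95%={np.percentile(net, [5, 95])}")
```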

  14. Spectrometric methods used in the calibration of radiodiagnostic measuring instruments

    Energy Technology Data Exchange (ETDEWEB)

    De Vries, W [Rijksuniversiteit Utrecht (Netherlands)

    1995-12-01

    Recently a set of parameters for checking the quality of radiation for use in diagnostic radiology was established at the calibration facility of the Nederlands Meetinstituut (NMi). The establishment of the radiation qualities required re-evaluation of the correction factors for the primary air-kerma standards. Free-air ionisation chambers require several correction factors to measure air kerma according to its definition. These correction factors were calculated for the NMi free-air chamber by Monte Carlo simulations for monoenergetic photons in the energy range from 10 keV to 320 keV. The actual correction factors follow from weighting these monoenergetic correction factors with the air-kerma spectrum of the photon beam. This paper describes the determination of the photon spectra of the X-ray qualities used for the calibration of dosimetric instruments used in radiodiagnostics. The detector used for these measurements is a planar HPGe detector, placed in the direct beam of the X-ray machine. To convert the measured pulse-height spectrum to the actual photon spectrum, corrections must be made for fluorescent photon escape, single and multiple Compton scattering inside the detector, and detector efficiency. From the reconstructed photon spectra a number of parameters of the X-ray beam can be calculated. The calculated first and second half-value layers in aluminum and copper are compared with the measured values of these parameters to validate the method of spectrum reconstruction. Moreover, the spectrum measurements offer the possibility to calibrate the X-ray generator in terms of maximum high voltage: the maximum photon energy in the spectrum is used as a standard for the calibration of kVp meters.
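
    Computing half-value layers from a reconstructed spectrum amounts to attenuating each energy bin and solving for the thickness that halves the air kerma. A hedged sketch, assuming the spectrum and attenuation data (`spec`, `mu_al`, `muen_air`) are available on a common energy grid and that 50 mm of aluminum brackets the root:

```python
import numpy as np
from scipy.optimize import brentq

def first_hvl(energy_keV, spec, mu_al, muen_air):
    """Thickness of Al (mm) that halves the air kerma of a given spectrum."""
    def kerma(t_mm):
        # air kerma ~ sum over bins of fluence * E * (mu_en/rho)_air
        return np.sum(spec * np.exp(-mu_al * t_mm) * energy_keV * muen_air)
    k0 = kerma(0.0)
    # assumes 50 mm Al attenuates below half; widen the bracket if not
    return brentq(lambda t: kerma(t) - 0.5 * k0, 0.0, 50.0)
```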

  15. A Streamlined Artificial Variable Free Version of Simplex Method

    OpenAIRE

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new ...

  16. RECOVERY OF LARGE ANGULAR SCALE CMB POLARIZATION FOR INSTRUMENTS EMPLOYING VARIABLE-DELAY POLARIZATION MODULATORS

    Energy Technology Data Exchange (ETDEWEB)

    Miller, N. J.; Marriage, T. A.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Harrington, K.; Rostem, K.; Watts, D. J. [Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218 (United States); Chuss, D. T. [Department of Physics, Villanova University, 800 E Lancaster, Villanova, PA 19085 (United States); Wollack, E. J.; Fixsen, D. J.; Moseley, S. H.; Switzer, E. R., E-mail: Nathan.J.Miller@nasa.gov [Observational Cosmology Laboratory, Code 665, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2016-02-20

    Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.

  17. Authentication method for safeguards instruments securing data transmission

    International Nuclear Information System (INIS)

    Richter, B.; Stein, G.; Neumann, G.; Gartner, K.J.

    1986-01-01

    Because of the worldwide increase in nuclear fuel cycle activities, the need arises to reduce inspection effort by increasing the inspection efficiency per facility. Therefore, more and more advanced safeguards instruments will be designed for automatic operation. In addition, sensing and recording devices may be well separated from each other within the facility, with a cable as the data transmission medium. The basic problem is the authenticity of the transmitted information: it has to be ensured that no potential adversary is able to falsify the transmitted safeguards data, i.e. that the data transmission is secured. At present, predominantly C/S devices are designed for automatic and remote interrogation. Authentication will also become a major issue in other areas of safeguards instrumentation, in particular where the facility operator may offer his process instrumentation to be used for safeguards purposes as well. In this paper, possibilities to solve the problem of authentication are analysed
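
    A modern, generic way to meet the authentication requirement described here is a keyed message authentication code appended to each transmitted record. The sketch below uses Python's standard `hmac` module; it illustrates the requirement, not the scheme analysed in the paper, and the key handling is purely hypothetical:

```python
import hmac
import hashlib

SECRET_KEY = b"key-provisioned-offline"   # hypothetical shared secret

def tag(message: bytes) -> bytes:
    """Compute an authentication tag to transmit along with the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Recompute the tag; constant-time compare resists timing attacks."""
    return hmac.compare_digest(tag(message), received_tag)
```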

  18. College quality and hourly wages: evidence from the self-revelation model, sibling models and instrumental variables.

    Science.gov (United States)

    Borgen, Nicolai T

    2014-11-01

    This paper addresses the recent discussion on confounding in the returns to college quality literature using the Norwegian case. The main advantage of studying Norway is the quality of the data. Norwegian administrative data provide information on college applications, family relations and a rich set of control variables for all Norwegian citizens applying to college between 1997 and 2004 (N = 141,319) and their succeeding wages between 2003 and 2010 (676,079 person-year observations). With these data, this paper uses a subset of the models that have rendered mixed findings in the literature in order to investigate to what extent confounding biases the returns to college quality. I compare estimates obtained using standard regression models to estimates obtained using the self-revelation model of Dale and Krueger (2002), a sibling fixed effects model and the instrumental variable model used by Long (2008). Using these methods, I consistently find increasing returns to college quality over the course of students' work careers, with positive returns only later in students' work careers. I conclude that the standard regression estimate provides a reasonable estimate of the returns to college quality. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Two methods for studying the X-ray variability

    NARCIS (Netherlands)

    Yan, Shu-Ping; Ji, Li; Méndez, Mariano; Wang, Na; Liu, Siming; Li, Xiang-Dong

    2016-01-01

    X-ray aperiodic variability and quasi-periodic oscillations (QPOs) are important tools for studying the structure of the accretion flow of X-ray binaries. However, the origin of the complex X-ray variability of X-ray binaries remains unsolved. We propose two methods for studying the X-ray

  20. The functional variable method for solving the fractional Korteweg ...

    Indian Academy of Sciences (India)

    The physical and engineering processes have been modelled by means of fractional ... very important role in various fields such as economics, chemistry, notably control the- .... In §3, the functional variable method is applied for finding exact.

  1. Extensions of von Neumann's method for generating random variables

    International Nuclear Information System (INIS)

    Monahan, J.F.

    1979-01-01

    Von Neumann's method of generating random variables with the exponential distribution and Forsythe's method for obtaining distributions with densities of the form e^{-G(x)} are generalized to apply to certain power series representations. The flexibility of the power series methods is illustrated by algorithms for the Cauchy and geometric distributions
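
    For reference, von Neumann's original exponential generator, the starting point for these generalizations, accepts the first uniform of a round when the descending run it starts has odd length; the number of rejected rounds supplies the integer part. A minimal sketch:

```python
import random

def vn_exponential(u=random.random):
    """Von Neumann's rejection scheme for an Exp(1) random variable."""
    n = 0                          # completed (rejected) rounds
    while True:
        u1 = prev = u()
        run = 1                    # length of the descending run, incl. u1
        while True:
            nxt = u()
            if nxt > prev:         # run of descending uniforms has ended
                break
            prev = nxt
            run += 1
        if run % 2 == 1:           # odd run length: accept this round
            return n + u1          # integer part counts rejected rounds
        n += 1
```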

  2. Variable identification in group method of data handling methodology

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Iraci Martinez, E-mail: martinez@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil)

    2011-07-01

    The Group Method of Data Handling (GMDH) is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on an underlying assumption that the data can be modeled using an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network (ANN) methodologies, and applied to the IPEN research reactor IEA-R1. The GMDH was used to study the best set of variables with which to train an ANN, resulting in the best estimate of the monitored variables. The system performs the monitoring by comparing these estimated values with measured ones. The IPEN reactor data acquisition system is composed of 58 variables (process and nuclear variables). As GMDH is a self-organizing methodology, the choice of input variables is made automatically, and the actual input variables used in the Monitoring and Diagnosis System are not shown in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, resulting in an identification of the variables that compose the best Monitoring and Diagnosis Model. (author)

  3. Variable identification in group method of data handling methodology

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Bueno, Elaine Inacio

    2011-01-01

    The Group Method of Data Handling (GMDH) is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on an underlying assumption that the data can be modeled using an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network (ANN) methodologies, and applied to the IPEN research reactor IEA-R1. The GMDH was used to study the best set of variables with which to train an ANN, resulting in the best estimate of the monitored variables. The system performs the monitoring by comparing these estimated values with measured ones. The IPEN reactor data acquisition system is composed of 58 variables (process and nuclear variables). As GMDH is a self-organizing methodology, the choice of input variables is made automatically, and the actual input variables used in the Monitoring and Diagnosis System are not shown in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, resulting in an identification of the variables that compose the best Monitoring and Diagnosis Model. (author)

  4. New developments in radiation protection instrumentation via active electronic methods

    International Nuclear Information System (INIS)

    Umbarger, C.J.

    1981-01-01

    New developments in electronics and radiation detectors are improving real-time data acquisition of radiation exposure and contamination conditions. Recent developments in low-power circuit designs, hybrid and integrated circuits, and microcomputers have all contributed to smaller and lighter radiation detection instruments that are, at the same time, more sensitive and provide more information (e.g., radioisotope identification) than previous devices. New developments in radiation detectors, such as cadmium telluride, gas scintillation proportional counters, and imaging counters (both charged-particle and photon), promise higher sensitivities and expanded uses over present instruments. These developments are being applied in such areas as health physics, waste management, environmental monitoring, in vivo measurements, and nuclear safeguards

  5. Optical Methods and Instrumentation in Brain Imaging and Therapy

    CERN Document Server

    2013-01-01

    This book provides a comprehensive, up-to-date review of optical approaches used in brain imaging and therapy. It covers a variety of imaging techniques including diffuse optical imaging, laser speckle imaging, photoacoustic imaging and optical coherence tomography. A number of laser-based therapeutic approaches are reviewed, including photodynamic therapy, fluorescence-guided resection and photothermal therapy. Fundamental principles and instrumentation are discussed for each imaging and therapeutic technique. The book represents the first publication dedicated solely to optical diagnostics and therapeutics in the brain; provides a comprehensive review of the principles of each imaging/therapeutic modality; reviews the latest advances in instrumentation for optical diagnostics in the brain; and discusses new optical-based therapeutic approaches for brain diseases.

  6. Job demands and job strain as risk factors for employee wellbeing in elderly care: an instrumental-variables analysis.

    Science.gov (United States)

    Elovainio, Marko; Heponiemi, Tarja; Kuusio, Hannamaria; Jokela, Markus; Aalto, Anna-Mari; Pekkarinen, Laura; Noro, Anja; Finne-Soveri, Harriet; Kivimäki, Mika; Sinervo, Timo

    2015-02-01

    The association between psychosocial work environment and employee wellbeing has repeatedly been shown. However, as environmental evaluations have typically been self-reported, the observed associations may be attributable to reporting bias. Applying instrumental-variable regression, we used staffing level (the ratio of staff to residents) as an unconfounded instrument for self-reported job demands and job strain to predict various indicators of wellbeing (perceived stress, psychological distress and sleeping problems) among 1525 registered nurses, practical nurses and nursing assistants working in elderly care wards. In ordinary regression, higher self-reported job demands and job strain were associated with increased risk of perceived stress, psychological distress and sleeping problems. The effect estimates for the associations of these psychosocial factors with perceived stress and psychological distress were greater, but less precisely estimated, in an instrumental-variables analysis which took into account only the variation in self-reported job demands and job strain that was explained by staffing level. No association between psychosocial factors and sleeping problems was observed with the instrumental-variable analysis. These results support a causal interpretation of high self-reported job demands and job strain being risk factors for employee wellbeing. © The Author 2014. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  7. Guide on Economic Instruments & Non-market Valuation Methods

    DEFF Research Database (Denmark)

    Zandersen, Marianne; Bartczak, Anna; Czajkowski, Mikołaj

    The aim of this guidance document is to provide forest practitioners, decision makers and forest owners insights into the various economic instruments available to enhance the non-market ecosystem provision of forests such as a high quality biodiversity; enhanced carbon sequestration; improved...... with ecosystem degradation and iii) by recognising the substantial economic and welfare benefits of better management of ecosystems in forests. Ecosystem services contribute to economic welfare in two ways: • by contributing to the generation of income and wellbeing; and • by preventing damages that inflict...... initiatives it is therefore essential to consider trade offs and synergies between the complex interplay between ecosystem goods and services within an ecosystem,...

  8. The functional variable method for finding exact solutions of some ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, we implemented the functional variable method and the modified Riemann–Liouville derivative for the exact solitary wave solutions and periodic wave solutions of the time-fractional Klein–Gordon equation and the time-fractional Hirota–Satsuma coupled KdV system. This method is extremely simple ...

  9. Health insurance and the demand for medical care: Instrumental variable estimates using health insurer claims data.

    Science.gov (United States)

    Dunn, Abe

    2016-07-01

    This paper takes a different approach to estimating demand for medical care that uses the negotiated prices between insurers and providers as an instrument. The instrument is viewed as a textbook "cost shifting" instrument that impacts plan offerings, but is unobserved by consumers. The paper finds a price elasticity of demand of around -0.20, matching the elasticity found in the RAND Health Insurance Experiment. The paper also studies within-market variation in demand for prescription drugs and other medical care services and obtains comparable price elasticity estimates. Published by Elsevier B.V.

  10. A Geometrical Method for Sound-Hole Size and Location Enhancement in Lute Family Musical Instruments: The Golden Method

    Directory of Open Access Journals (Sweden)

    Soheil Jafari

    2017-11-01

    Full Text Available This paper presents a new analytical approach, the Golden Method, to enhance sound-hole size and location in musical instruments of the lute family in order to obtain better sound damping characteristics, based on the concept of the golden ratio and the instrument geometry. The main objective of the paper is to increase the capability of lute-family musical instruments to keep a note at a certain level for a certain time, enhancing the instruments' orchestral characteristics. For this purpose, the geometry-based analytical method, the Golden Method, is first described in detail in an itemized fashion. A new musical instrument is then developed and tested to confirm the ability of the Golden Method to optimize the acoustical characteristics of musical instruments from a damping point of view by designing the modified sound-hole. Finally, the newly developed instrument is tested, and the obtained results are compared with those of two well-known instruments to confirm the effectiveness of the proposed method. The experimental results show that the suggested method is able to increase the sound damping time by at least 2.4% without affecting the frequency response function and other acoustic characteristics of the instrument. This methodology could be used as the first step in future studies on the design, optimization and evaluation of musical instruments of the lute family (e.g., lute, oud, barbat, mandolin, setar, etc.).

  11. Instrumental and statistical methods for the comparison of class evidence

    Science.gov (United States)

    Liszewski, Elisa Anne

    Trace evidence is a major field within forensic science. Association of trace evidence samples can be problematic due to sample heterogeneity and a lack of quantitative criteria for comparing spectra or chromatograms. The aim of this study is to evaluate different types of instrumentation for their ability to discriminate among samples of various types of trace evidence. Chemometric analysis, including techniques such as Agglomerative Hierarchical Clustering, Principal Components Analysis, and Discriminant Analysis, was employed to evaluate instrumental data. First, automotive clear coats were analyzed by using microspectrophotometry to collect UV absorption data. In total, 71 samples were analyzed with classification accuracy of 91.61%. An external validation was performed, resulting in a prediction accuracy of 81.11%. Next, fiber dyes were analyzed using UV-Visible microspectrophotometry. While several physical characteristics of cotton fiber can be identified and compared, fiber color is considered to be an excellent source of variation, and thus was examined in this study. Twelve dyes were employed, some being visually indistinguishable. Several different analyses and comparisons were done, including an inter-laboratory comparison and external validations. Lastly, common plastic samples and other polymers were analyzed using pyrolysis-gas chromatography/mass spectrometry, and their pyrolysis products were then analyzed using multivariate statistics. The classification accuracy varied dependent upon the number of classes chosen, but the plastics were grouped based on composition. The polymers were used as an external validation and misclassifications occurred with chlorinated samples all being placed into the category containing PVC.
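
    A typical chemometric pipeline of the kind described, dimension reduction followed by discriminant analysis with cross-validation, can be sketched with scikit-learn. The spectra and class labels below are random placeholders; with real microspectrophotometry data, `X` would hold one spectrum per row:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 300))   # placeholder spectra: 60 samples x 300 bins
y = np.repeat([0, 1, 2], 20)     # three hypothetical dye classes

model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)  # stand-in for external validation
print(f"classification accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```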

  12. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    International Nuclear Information System (INIS)

    Yu, Dequan; Cong, Shu-Lin; Sun, Zhigang

    2015-01-01

    Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems in combination with the finite element method (FE-DVR), owing to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which makes it not only competitive with popular DVR methods, such as the Sinc-DVR, but also preserves its advantages in treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential and the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method

  13. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Dequan [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Cong, Shu-Lin, E-mail: shlcong@dlut.edu.cn [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); Sun, Zhigang, E-mail: zsun@dicp.ac.cn [State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Center for Advanced Chemical Physics and 2011 Frontier Center for Quantum Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei 230026 (China)

    2015-09-08

    Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems in combination with the finite element method (FE-DVR), owing to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which makes it not only competitive with popular DVR methods, such as the Sinc-DVR, but also preserves its advantages in treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential and the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method.

  14. Decoupling Solar Variability and Instrument Trends Using the Multiple Same-Irradiance-Level (MuSIL) Analysis Technique

    Science.gov (United States)

    Woods, Thomas N.; Eparvier, Francis G.; Harder, Jerald; Snow, Martin

    2018-05-01

    The solar spectral irradiance (SSI) dataset is a key record for studying and understanding the energetics and radiation balance in Earth's environment. Understanding the long-term variations of the SSI over timescales of the 11-year solar activity cycle and longer is critical for many Sun-Earth research topics. Satellite measurements of the SSI have been made since the 1970s, most of them in the ultraviolet, but recently also in the visible and near-infrared. A limiting factor for the accuracy of previous solar variability results is the uncertainty of the instrument degradation corrections, which can be fairly large relative to the amount of solar cycle variability at some wavelengths. The primary objective of this investigation has been to separate out solar cycle variability and any residual uncorrected instrumental trends in the SSI measurements from the Solar Radiation and Climate Experiment (SORCE) mission and the Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) mission. A new technique called the Multiple Same-Irradiance-Level (MuSIL) analysis has been developed, which examines an SSI time series at different levels of solar activity to provide long-term trends in an SSI record, and the most common result is a downward trend that most likely stems from uncorrected instrument degradation. This technique has been applied to each wavelength in the SSI records from SORCE (2003 - present) and TIMED (2002 - present) to provide new solar cycle variability results between 27 nm and 1600 nm with a resolution of about 1 nm at most wavelengths. This technique, which was validated with the highly accurate total solar irradiance (TSI) record, has an estimated relative uncertainty of about 5% of the measured solar cycle variability. The MuSIL results are further validated by comparing the new solar cycle variability results from different solar cycles.

  15. Measuring the surgical 'learning curve': methods, variables and competency.

    Science.gov (United States)

    Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2014-03-01

    To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.

  16. New complex variable meshless method for advection-diffusion problems

    International Nuclear Information System (INIS)

    Wang Jian-Fei; Cheng Yu-Min

    2013-01-01

    In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection-diffusion problems is developed based on the improved complex variable moving least-squares (ICVMLS) approximation. The equivalent functional of two-dimensional advection-diffusion problems is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. Then the corresponding formulas of the ICVMM for advection-diffusion problems are presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection-diffusion problems, and has good convergence, accuracy, and computational efficiency

  17. Error response test system and method using test mask variable

    Science.gov (United States)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
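
    The test-mask idea can be sketched in a few lines: a mask variable compiled into the application selects which fault, if any, is injected at run time, and is left at zero during normal operation. The mask bits and fault sites below are illustrative, not the patented design:

```python
NO_ERROR     = 0x0
SENSOR_STUCK = 0x1                 # illustrative fault bits
CRC_CORRUPT  = 0x2

test_mask = NO_ERROR               # the test harness sets this during testing

def read_sensor(raw_value):
    if test_mask & SENSOR_STUCK:
        return 0.0                 # injected fault: frozen sensor output
    return raw_value

def frame_crc(crc):
    if test_mask & CRC_CORRUPT:
        return crc ^ 0xFF          # injected fault: corrupted checksum
    return crc
```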

  18. Does the Early Bird Catch the Worm? Instrumental Variable Estimates of Educational Effects of Age of School Entry in Germany

    OpenAIRE

    Puhani, Patrick A.; Weber, Andrea M.

    2006-01-01

    We estimate the effect of age of school entry on educational outcomes using two different data sets for Germany, sampling pupils at the end of primary school and in the middle of secondary school. Results are obtained based on instrumental variable estimation exploiting the exogenous variation in month of birth. We find robust and significant positive effects on educational outcomes for pupils who enter school at seven instead of six years of age: Test scores at the end of primary school incr...

  19. Improvement of the variable storage coefficient method with water surface gradient as a variable

    Science.gov (United States)

    The variable storage coefficient (VSC) method has been used for streamflow routing in continuous hydrological simulation models such as the Agricultural Policy/Environmental eXtender (APEX) and the Soil and Water Assessment Tool (SWAT) for more than 30 years. APEX operates on a daily time step and ...

  20. 8 years of Solar Spectral Irradiance Variability Observed from the ISS with the SOLAR/SOLSPEC Instrument

    Science.gov (United States)

    Damé, Luc; Bolsée, David; Meftah, Mustapha; Irbah, Abdenour; Hauchecorne, Alain; Bekki, Slimane; Pereira, Nuno; Cessateur, Gaël; Marchand, Marion; et al.

    2016-10-01

    Accurate measurements of Solar Spectral Irradiance (SSI) are of primary importance for a better understanding of solar physics and of the impact of solar variability on climate (via Earth's atmospheric photochemistry). The acquisition of a top-of-atmosphere reference solar spectrum and of its temporal and spectral variability during the unusual solar cycle 24 is of prime interest for these studies. These measurements have been performed since April 2008 with the SOLSPEC spectro-radiometer from the far ultraviolet to the infrared (166 nm to 3088 nm). This instrument, developed under a fruitful LATMOS/BIRA-IASB collaboration, is part of the Solar Monitoring Observatory (SOLAR) payload, externally mounted on the Columbus module of the International Space Station (ISS). The SOLAR mission, with its present 8-year duration, will cover almost the entire solar cycle 24. We present here the in-flight operations and performance of the SOLSPEC instrument, including the engineering corrections, calibrations and improved procedures for aging corrections. Accordingly, an SSI reference spectrum from the UV to the NIR will be presented, together with its variability in the UV, as measured by SOLAR/SOLSPEC over 8 years. Uncertainties on these measurements and comparisons with other instruments will be briefly discussed.

  1. Instrumental methods for analysis of some elements in flour

    International Nuclear Information System (INIS)

    Zagrodzki, P.; Dutkiewicz, E.M.; Malec, P.; Krosniak, M.; Knap, W.

    1993-10-01

    For ten different brands of flour, the contents of selected (heavy) elements were determined by means of the ICP, GF-AAS, PIXE and ASV/CSV methods. The general performance of the participating laboratories, as well as the pros and cons of the different analytical methods, are compared and discussed. (author). 6 refs, 6 figs, 7 tabs

  2. Validation of method in instrumental NAA for food products sample

    International Nuclear Information System (INIS)

    Alfian; Siti Suprapti; Setyo Purwanto

    2010-01-01

    NAA is a testing method that has not been standardized. To affirm and confirm that this method is valid, it must be validated with various standard reference materials. In this work, the validation was carried out for food product samples using NIST SRM 1567a (wheat flour) and NIST SRM 1568a (rice flour). The results show that the method for testing nine elements (Al, K, Mg, Mn, Na, Ca, Fe, Se and Zn) in SRM 1567a and eight elements (Al, K, Mg, Mn, Na, Ca, Se and Zn) in SRM 1568a passes the tests of accuracy and precision. It can be concluded that this method has the power to give valid results in the determination of elements in food product samples. (author)
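
    The accuracy part of such a validation is commonly summarized with z-scores against the certified values, flagging |z| > 2. A minimal sketch with made-up numbers (the real NIST certificate values are not reproduced here):

```python
def z_score(measured, certified, u_combined):
    return (measured - certified) / u_combined

# element: (measured, certified, combined uncertainty) in mg/kg -- made up
results = {"Zn": (11.2, 11.6, 0.4), "Fe": (13.8, 14.1, 0.5)}
for element, (meas, cert, u) in results.items():
    z = z_score(meas, cert, u)
    print(f"{element}: z = {z:+.2f} -> {'pass' if abs(z) <= 2 else 'fail'}")
```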

  3. Recursive form of general limited memory variable metric methods

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Vlček, Jan

    2013-01-01

    Roč. 49, č. 2 (2013), s. 224-235 ISSN 0023-5954 Institutional support: RVO:67985807 Keywords : unconstrained optimization * large scale optimization * limited memory methods * variable metric updates * recursive matrix formulation * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://dml.cz/handle/10338.dmlcz/143365

  4. Assessment of the quality and variability of health information on chronic pain websites using the DISCERN instrument

    Directory of Open Access Journals (Sweden)

    Buckley Norman

    2010-10-01

    Full Text Available Abstract Background The Internet is used increasingly by providers as a tool for disseminating pain-related health information and by patients as a resource about health conditions and treatment options. However, health information on the Internet remains unregulated and varies in quality, accuracy and readability. The objective of this study was to determine the quality of pain websites, and explain variability in quality and readability between pain websites. Methods Five key terms (pain, chronic pain, back pain, arthritis, and fibromyalgia) were entered into the Google, Yahoo and MSN search engines. Websites were assessed using the DISCERN instrument as a quality index. Grade-level readability ratings were assessed using the Flesch-Kincaid Readability Algorithm. Univariate (using alpha = 0.20) and multivariable regression (using alpha = 0.05) analyses were used to explain the variability in DISCERN scores and grade-level readability using potential for commercial gain, health-related seals of approval, language(s) and multimedia features as independent variables. Results A total of 300 websites were assessed, 21 excluded in accordance with the exclusion criteria and 110 duplicate websites, leaving 161 unique sites. About 6.8% (11/161) of the websites offered patients commercial products for their pain condition, 36.0% (58/161) had a health-related seal of approval, 75.8% (122/161) presented information in English only and 40.4% (65/161) offered an interactive multimedia experience. In assessing the quality of the unique websites, of a maximum score of 80, the overall average DISCERN score was 55.9 (13.6) and the average readability (grade level) was 10.9 (3.9). The multivariable regressions demonstrated that website seals of approval (P = 0.015) and potential for commercial gain (P = 0.189) were contributing factors to higher DISCERN scores, while seals of approval (P = 0.168) and interactive multimedia (P = 0.244) contributed to

  5. Variable Lifting Index (VLI): A New Method for Evaluating Variable Lifting Tasks.

    Science.gov (United States)

    Waters, Thomas; Occhipinti, Enrico; Colombini, Daniela; Alvarez-Casado, Enrique; Fox, Robert

    2016-08-01

    We seek to develop a new approach for analyzing the physical demands of highly variable lifting tasks through an adaptation of the Revised NIOSH (National Institute for Occupational Safety and Health) Lifting Equation (RNLE) into a Variable Lifting Index (VLI). There are many jobs that contain individual lifts that vary from lift to lift due to the task requirements. The NIOSH Lifting Equation is not suitable in its present form to analyze variable lifting tasks. In extending the prior work on the VLI, two procedures are presented to allow users to analyze variable lifting tasks. One approach involves the sampling of lifting tasks performed by a worker over a shift and the calculation of the Frequency Independent Lift Index (FILI) for each sampled lift and the aggregation of the FILI values into six categories. The Composite Lift Index (CLI) equation is used with lifting index (LI) category frequency data to calculate the VLI. The second approach employs a detailed systematic collection of lifting task data from production and/or organizational sources. The data are organized into simplified task parameter categories and further aggregated into six FILI categories, which also use the CLI equation to calculate the VLI. The two procedures will allow practitioners to systematically employ the VLI method to a variety of work situations where highly variable lifting tasks are performed. The scientific basis for the VLI procedure is similar to that for the CLI originally presented by NIOSH; however, the VLI method remains to be validated. The VLI method allows an analyst to assess highly variable manual lifting jobs in which the task characteristics vary from lift to lift during a shift. © 2015, Human Factors and Ergonomics Society.
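
    The FILI values underlying the VLI come from the Revised NIOSH Lifting Equation with the frequency multiplier set to one. The sketch below implements the published metric multipliers for a single lift; the coupling multiplier is taken as given, the task parameters are illustrative, and the FM/CM lookup tables needed for a full CLI/VLI computation are omitted:

```python
def fili(weight_kg, H_cm, V_cm, D_cm, A_deg, CM=1.0):
    """Frequency-independent lift index from the metric RNLE multipliers."""
    LC = 23.0                                # load constant, kg
    HM = min(1.0, 25.0 / max(H_cm, 25.0))    # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V_cm - 75.0)      # vertical multiplier
    DM = 0.82 + 4.5 / max(D_cm, 25.0)        # travel-distance multiplier
    AM = 1.0 - 0.0032 * A_deg                # asymmetry multiplier
    firwl = LC * HM * VM * DM * AM * CM      # frequency-independent RWL
    return weight_kg / firwl

# Illustrative single lift: 12 kg, 40 cm reach, 60 cm start height, etc.
print(f"FILI = {fili(12.0, H_cm=40.0, V_cm=60.0, D_cm=50.0, A_deg=30.0):.2f}")
```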

  6. A Workshop on Methods for Neutron Scattering Instrument Design. Introduction and Summary

    International Nuclear Information System (INIS)

    Hjelm, Rex P.

    1996-09-01

    The future of neutron and x-ray scattering instrument development and international cooperation was the focus of the workshop on ''Methods for Neutron Scattering Instrument Design'' September 23-25 at the E.O. Lawrence Berkeley National Laboratory. These proceedings are a collection of a portion of the invited and contributed presentations

  7. Instrumentation and quantitative methods of evaluation. Progress report, January 15-September 14, 1986

    International Nuclear Information System (INIS)

    Beck, R.N.

    1986-09-01

    This document reports progress under grant entitled ''Instrumentation and Quantitative Methods of Evaluation.'' Individual reports are presented on projects entitled the physical aspects of radionuclide imaging, image reconstruction and quantitative evaluation, PET-related instrumentation for improved quantitation, improvements in the FMI cyclotron for increased utilization, and methodology for quantitative evaluation of diagnostic performance

  8. THE REGULATION OF MONEY CIRCULATION ON THE BASIS OF USING METHODS AND INSTRUMENTS OF MONETARY POLICY

    OpenAIRE

    S. Mishchenko; S. Naumenkova

    2013-01-01

    In the article, the instruments and mechanisms for safeguarding the stability of the money market through the implementation of an optimal monetary policy regime were researched. The main directions for applying monetary policy methods and instruments to maintain money market stability were determined, and the influence of the transmission mechanism on ensuring the soundness of money circulation was also investigated.

  9. Ultrasonic partial discharge monitoring method on instrument transformers

    Directory of Open Access Journals (Sweden)

    Kartalović Nenad

    2012-01-01

    Full Text Available Sonic and ultrasonic partial discharge monitoring have been applied since the early days of monitoring these phenomena. Modern measurement and acoustic (ultrasonic and sonic) partial discharge monitoring methods have been evolving rapidly as a result of new electronic component designs, information technology and updated software solutions, as well as the development of knowledge in partial discharge diagnosis. Electrical discharges in the insulation system generate voltage-current pulses in the network and ultrasonic waves that propagate through the insulation system and structure. Amplitude-phase-frequency analysis of these signals reveals information about the intensity, type and location of partial discharges. The paper discusses possibilities for improving the selectivity of the ultrasonic method and increasing the reliability of diagnosis in the field. Measurements were performed in the laboratory and in the field, while a number of transformers were analysed for dissolved gases in the oil. A comparative review of methods for partial discharge detection is also presented.

  10. Association of Body Mass Index with Depression, Anxiety and Suicide-An Instrumental Variable Analysis of the HUNT Study.

    Directory of Open Access Journals (Sweden)

    Johan Håkon Bjørngaard

    While high body mass index is associated with an increased risk of depression and anxiety, cumulative evidence indicates that it is a protective factor for suicide. The associations from conventional observational studies of body mass index with mental health outcomes are likely to be influenced by reverse causality or confounding by ill-health. In the present study, we investigated the associations between offspring body mass index and parental anxiety, depression and suicide in order to avoid problems with reverse causality and confounding by ill-health. We used data from 32,457 mother-offspring and 27,753 father-offspring pairs from the Norwegian HUNT study. Anxiety and depression were assessed using the Hospital Anxiety and Depression Scale, and suicide death from national registers. Associations between offspring and own body mass index and symptoms of anxiety and depression and suicide mortality were estimated using logistic and Cox regression. Causal effect estimates were obtained with a two-sample instrumental variable approach using offspring body mass index as an instrument for parental body mass index. Both own and offspring body mass index were positively associated with depression, while the results did not indicate any substantial association between body mass index and anxiety. Although precision was low, suicide mortality was inversely associated with own body mass index, and the results from the analysis using offspring body mass index supported these results. Adjusted odds ratios per standard deviation of body mass index from the instrumental variable analysis were 1.22 (95% CI: 1.05, 1.43) for depression and 1.10 (95% CI: 0.95, 1.27) for anxiety, and the instrumental variable estimated hazard ratio for suicide was 0.69 (95% CI: 0.30, 1.63). The present study's results indicate that suicide mortality is inversely associated with body mass index. We also found support for a positive association between body mass index and depression, but not anxiety.
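
    The two-sample design described here reduces, in its simplest linear form, to a Wald ratio: the instrument-outcome association from one sample divided by the instrument-exposure association from the other. A minimal sketch with simulated data follows; all variable names and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Sample 1: offspring BMI (instrument) vs parental BMI (exposure).
offspring_bmi = rng.normal(0, 1, n)
parent_bmi = 0.4 * offspring_bmi + rng.normal(0, 1, n)    # first stage

# Sample 2: offspring BMI vs parental depression score (reduced form).
offspring_bmi2 = rng.normal(0, 1, n)
parent_bmi2 = 0.4 * offspring_bmi2 + rng.normal(0, 1, n)
depression = 0.2 * parent_bmi2 + rng.normal(0, 1, n)      # true effect 0.2

def slope(x, y):
    # OLS slope of y on x (with intercept).
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

first_stage = slope(offspring_bmi, parent_bmi)    # instrument -> exposure
reduced_form = slope(offspring_bmi2, depression)  # instrument -> outcome
wald_iv = reduced_form / first_stage              # recovers ~0.2
print(first_stage, reduced_form, wald_iv)
```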

  11. Radon/radon-daughter measurement methods and instrumentation

    International Nuclear Information System (INIS)

    Rock, R.L.

    1977-01-01

    Radon-daughter measurement equipment and techniques have been continuously improved over the last 25 years, in the areas of accuracy, time and convenience. Miniaturized scalers and detectors are now available for measuring the alpha particle count rates from aerosol samples collected on filter papers. Small, lightweight, efficient pumps allow samples to be collected conveniently, and various counting methods allow a choice between very precise measurements and nominal measurements. Radon-daughter measurement methods used in uranium mines and mills are discussed, including a personal radon-daughter-exposure integrating device which can be worn by miners.

  12. Assessment of hip dysplasia and osteoarthritis: Variability of different methods

    International Nuclear Information System (INIS)

    Troelsen, Anders; Elmengaard, Brian; Soeballe, Kjeld; Roemer, Lone; Kring, Soeren

    2010-01-01

    Background: Reliable assessment of hip dysplasia and osteoarthritis is crucial in young adults who may benefit from joint-preserving surgery. Purpose: To investigate the variability of different methods for diagnostic assessment of hip dysplasia and osteoarthritis. Material and Methods: Each of four observers performed two assessments by vision and two by angle construction. For both methods, the intra- and interobserver variability of center-edge and acetabular index angle assessment was analyzed. The observers' ability to diagnose hip dysplasia and osteoarthritis was assessed. All measures were compared to those made on computed tomography scans. Results: Intra- and interobserver variability of angle assessment was lower when angles were drawn than when assessed by vision, and the observers' ability to diagnose hip dysplasia improved when angles were drawn. Assessment of osteoarthritis in general showed poor agreement with findings on computed tomography scans. Conclusion: We recommend that angles should always be drawn for assessment of hip dysplasia on pelvic radiographs. Given the inherent variability of diagnostic assessment of hip dysplasia, a computed tomography scan could be considered in patients with relevant hip symptoms and a center-edge angle between 20 deg and 30 deg. Osteoarthritis should be assessed by measuring the joint space width or by classifying the Toennis grade as either 0-1 or 2-3

  13. Assessment of hip dysplasia and osteoarthritis: Variability of different methods

    Energy Technology Data Exchange (ETDEWEB)

    Troelsen, Anders; Elmengaard, Brian; Soeballe, Kjeld (Orthopedic Research Unit, Univ. Hospital of Aarhus, Aarhus (Denmark)), e-mail: a_troelsen@hotmail.com; Roemer, Lone (Dept. of Radiology, Univ. Hospital of Aarhus, Aarhus (Denmark)); Kring, Soeren (Dept. of Orthopedic Surgery, Aabenraa Hospital, Aabenraa (Denmark))

    2010-03-15

    Background: Reliable assessment of hip dysplasia and osteoarthritis is crucial in young adults who may benefit from joint-preserving surgery. Purpose: To investigate the variability of different methods for diagnostic assessment of hip dysplasia and osteoarthritis. Material and Methods: Each of four observers performed two assessments by vision and two by angle construction. For both methods, the intra- and interobserver variability of center-edge and acetabular index angle assessment was analyzed. The observers' ability to diagnose hip dysplasia and osteoarthritis was assessed. All measures were compared to those made on computed tomography scans. Results: Intra- and interobserver variability of angle assessment was lower when angles were drawn than when assessed by vision, and the observers' ability to diagnose hip dysplasia improved when angles were drawn. Assessment of osteoarthritis in general showed poor agreement with findings on computed tomography scans. Conclusion: We recommend that angles should always be drawn for assessment of hip dysplasia on pelvic radiographs. Given the inherent variability of diagnostic assessment of hip dysplasia, a computed tomography scan could be considered in patients with relevant hip symptoms and a center-edge angle between 20 deg and 30 deg. Osteoarthritis should be assessed by measuring the joint space width or by classifying the Toennis grade as either 0-1 or 2-3

  14. A method based on a separation of variables in magnetohydrodynamics (MHD); Une methode de separation des variables en magnetohydrodynamique

    Energy Technology Data Exchange (ETDEWEB)

    Cessenat, M.; Genta, P.

    1996-12-31

    We use a method based on a separation of variables for solving a system of first order partial differential equations, in a very simple modelling of MHD. The method consists in introducing three unknown variables {phi}1, {phi}2, {phi}3 in addition to the time variable {tau}, and then searching for a solution which is separated with respect to {phi}1 and {tau} only. This is allowed by a very simple relation, called a 'metric separation equation', which governs the type of solutions with respect to time. The families of solutions for the system of equations thus obtained correspond to a radial evolution of the fluid. Solving the MHD equations is then reduced to finding the transverse component H{sub {Sigma}} of the magnetic field on the unit sphere {Sigma} by solving a nonlinear partial differential equation on {Sigma}. Thus we generalize ideas due to Courant-Friedrichs and to Sedov on dimensional analysis and self-similar solutions. (authors).

  15. Magnetic characterisation of recording materials: design, instrumentation and experimental methods

    NARCIS (Netherlands)

    Samwel, E.O.

    1995-01-01

    The progress being made in the field of magnetic recording is extremely fast. The need to keep this progress going, leads to new types of recording materials which require advanced measurement systems and measurement procedures. Furthermore, the existing measurement methods need to be reviewed as

  16. Chaos synchronization using single variable feedback based on backstepping method

    International Nuclear Information System (INIS)

    Zhang Jian; Li Chunguang; Zhang Hongbin; Yu Juebang

    2004-01-01

    In recent years, the backstepping method has been developed in the field of nonlinear control, for applications such as controller and observer design and output regulation. In this paper, an effective backstepping design is applied to chaos synchronization. This method has several advantages for synchronizing chaotic systems: (a) the synchronization error is exponentially convergent; (b) only one variable of the master system is needed; (c) it presents a systematic procedure for selecting a proper controller. Numerical simulations for Chua's circuit and the Roessler system demonstrate that this method is very effective

  17. Method of charging instruments into liquid metal coolant

    International Nuclear Information System (INIS)

    Yamazaki, Hiroshi

    1980-01-01

    Purpose: To alleviate the thermal shock to a reactor charging machine when it is charged into liquid metal coolant, by preheating the machine in cover gas. Method: When the lowermost portion of the reactor fueling machine reaches the position immediately above the liquid metal coolant surface level, its downward motion is stopped. The lowermost portion of the machine is then heated by thermal radiation from the surface of the liquid metal coolant. After the machine is thus preheated in cover gas, it is again steadily moved down by a winch and charged into the liquid metal coolant. The thermal shock on charging into the liquid metal coolant is therefore reduced, eliminating damage and deformation of the machine. (Yoshihara, H.)

  18. Fatigue resistance of engine-driven rotary nickel-titanium instruments produced by new manufacturing methods.

    Science.gov (United States)

    Gambarini, Gianluca; Grande, Nicola Maria; Plotino, Gianluca; Somma, Francesco; Garala, Manish; De Luca, Massimo; Testarelli, Luca

    2008-08-01

    The aim of the present study was to investigate whether cyclic fatigue resistance is increased for nickel-titanium instruments manufactured by using new processes. This was evaluated by comparing instruments produced by using the twisted method (TF; SybronEndo, Orange, CA) and those using the M-wire alloy (GTX; Dentsply Tulsa-Dental Specialties, Tulsa, OK) with instruments produced by a traditional NiTi grinding process (K3, SybronEndo). Tests were performed with a specific cyclic fatigue device that evaluated cycles to failure of rotary instruments inside curved artificial canals. Results indicated that size 06-25 TF instruments showed a significant increase (p < 0.05) in the mean number of cycles to failure when compared with size 06-20 GT series X instruments. The new manufacturing process produced nickel-titanium rotary files (TF) significantly more resistant to fatigue than instruments produced with the traditional NiTi grinding process. Instruments produced with M-wire (GTX) were not found to be more resistant to fatigue than instruments produced with the traditional NiTi grinding process.

  19. A streamlined artificial variable free version of simplex method.

    Directory of Open Access Journals (Sweden)

    Syed Inayatullah

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal artificial-free version or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, the method provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  20. A streamlined artificial variable free version of simplex method.

    Science.gov (United States)

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal artificial-free version or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, the method provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  1. Variable scaling method and Stark effect in hydrogen atom

    International Nuclear Information System (INIS)

    Choudhury, R.K.R.; Ghosh, B.

    1983-09-01

    By relating the Stark effect problem in hydrogen-like atoms to that of the spherical anharmonic oscillator, we have found simple formulas for the energy eigenvalues of the Stark effect. Matrix elements have been calculated using the O(2,1) algebra technique after Armstrong, and the variable scaling method has then been used to find optimal solutions. Our numerical results are compared with those of Hioe and Yoo and also with the results obtained by Lanczos. (author)

  2. Evaluation of surface characteristics of rotary nickel-titanium instruments produced by different manufacturing methods.

    Science.gov (United States)

    Inan, U; Gurel, M

    2017-02-01

    Instrument fracture is a serious concern in endodontic practice. The aim of this study was to investigate the surface quality of new and used rotary nickel-titanium (NiTi) instruments manufactured by the traditional grinding process and by the twisting method. A total of 16 instruments from two rotary NiTi systems were used in this study: 8 Twisted Files (TF) (SybronEndo, Orange, CA, USA) and 8 Mtwo (VDW, Munich, Germany) instruments. New and used instruments in the 4 experimental groups were evaluated using atomic force microscopy (AFM). New and used instruments were analyzed at 3 points along a 3 mm section at the tip of the instrument. Quantitative measurements of the topographical deviations were recorded. The data were statistically analyzed with the paired samples t-test and the independent samples t-test. Mean root mean square (RMS) values for new and used TF 25.06 files were 10.70 ± 2.80 nm and 21.58 ± 6.42 nm, respectively, and the difference between them was statistically significant (P < 0.05). Instruments produced by the twisting method (TF 25.06) had better surface quality than the instruments produced by the traditional grinding process (Mtwo 25.06 files).

  3. Variable importance and prediction methods for longitudinal problems with missing variables.

    Directory of Open Access Journals (Sweden)

    Iván Díaz

    We present prediction and variable importance (VIM) methods for longitudinal data sets containing continuous and binary exposures subject to missingness. We demonstrate the use of these methods for prognosis of medical outcomes of severe trauma patients, a field in which current medical practice involves rules of thumb and scoring methods that only use a few variables and ignore the dynamic and high-dimensional nature of trauma recovery. Well-principled prediction and VIM methods can provide a tool to make care decisions informed by the high-dimensional patient's physiological and clinical history. Our VIM parameters are analogous to slope coefficients in adjusted regressions, but are not dependent on a specific statistical model, nor do they require a certain functional form of the prediction regression to be estimated. In addition, they can be causally interpreted under causal and statistical assumptions as the expected outcome under time-specific clinical interventions, related to changes in the mean of the outcome if each individual experiences a specified change in the variable (keeping other variables in the model fixed). Better yet, the targeted MLE used is doubly robust and locally efficient. Because the proposed VIM does not constrain the prediction model fit, we use a very flexible ensemble learner (the SuperLearner), which returns a linear combination of a list of user-given algorithms. Not only is such a prediction algorithm intuitively appealing, it has theoretical justification as being asymptotically equivalent to the oracle selector. The results of the analysis show effects whose size and significance would not have been found using a parametric approach (such as stepwise regression or LASSO). In addition, the procedure is even more compelling as the predictor on which it is based showed significant improvements in cross-validated fit, for instance area under the curve (AUC) for a receiver-operator curve (ROC). Thus, given that (1) our VIM
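
    The record's pipeline (a flexible ensemble predictor scored by cross-validated AUC, plus model-agnostic variable importance) can be approximated in a few lines. The sketch below substitutes a random forest and permutation importance for the SuperLearner and targeted MLE of the paper, so it illustrates the workflow rather than the authors' estimator; all data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for a high-dimensional clinical history.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)

# Cross-validated AUC of the flexible predictor.
auc = cross_val_score(model, X_tr, y_tr, scoring="roc_auc", cv=5).mean()
print(f"cross-validated AUC: {auc:.3f}")

# Model-agnostic importance: AUC drop when each variable is permuted.
model.fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                             n_repeats=20, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("top variables:", top, imp.importances_mean[top].round(3))
```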

  4. Factor analysis methods and validity evidence: A systematic review of instrument development across the continuum of medical education

    Science.gov (United States)

    Wetzel, Angela Payne

    Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across the continuum of medical education had not been previously identified. Therefore, the purpose of this study was a critical review of instrument development articles employing exploratory factor or principal component analysis published in medical education (2006-2010) to describe and assess the reporting of methods and validity evidence based on the Standards for Educational and Psychological Testing and factor analysis best practices. Data extraction from 64 articles measuring a variety of constructs published throughout the peer-reviewed medical education literature indicates significant errors in the translation of exploratory factor analysis best practices to current practice. Further, techniques for establishing validity evidence tend to derive from a limited scope of methods, including reliability statistics to support internal structure and support for test content. Instruments reviewed for this study lacked supporting evidence based on relationships with other variables and response process, and evidence based on consequences of testing was not evident. Findings suggest a need for further professional development within the medical education researcher community related to (1) appropriate factor analysis methodology and reporting and (2) the importance of pursuing multiple sources of reliability and validity evidence to construct a well-supported argument for the inferences made from the instrument. Medical education researchers and educators should be cautious in adopting instruments from the literature and carefully review available evidence. Finally, editors and reviewers are encouraged to recognize

  5. Nuclear medicine and imaging research (instrumentation and quantitative methods of evaluation)

    International Nuclear Information System (INIS)

    Beck, R.N.; Cooper, M.; Chen, C.T.

    1992-07-01

    This document is the annual progress report for the project entitled ''Instrumentation and Quantitative Methods of Evaluation.'' Progress is reported in separate sections, individually abstracted and indexed for the database. Subject areas reported include theoretical studies of imaging systems and methods, hardware developments, quantitative methods of evaluation, and knowledge transfer: education in quantitative nuclear medicine imaging

  6. Wind resource in metropolitan France: assessment methods, variability and trends

    International Nuclear Information System (INIS)

    Jourdier, Benedicte

    2015-01-01

    France has one of the largest wind potentials in Europe, yet it is far from being fully exploited. Wind resource and energy yield assessment is a key step before building a wind farm, aiming at predicting the future electricity production. Any over-estimation in the assessment process puts the project's profitability in jeopardy. This has been the case in recent years, when wind farm managers have noticed that they produced less than expected. The under-production problem leads to questioning both the validity of the assessment methods and the inter-annual wind variability. This thesis tackles these two issues. The first part investigates the errors linked to the assessment methods, especially in two steps: the vertical extrapolation of wind measurements and the statistical modelling of wind-speed data by a Weibull distribution. The second part investigates the inter-annual to decadal variability of wind speeds, in order to understand how this variability may have contributed to the under-production and so that it is better taken into account in the future. (author)

  7. The radiation budget of stratocumulus clouds measured by tethered balloon instrumentation: Variability of flux measurements

    Science.gov (United States)

    Duda, David P.; Stephens, Graeme L.; Cox, Stephen K.

    1990-01-01

    Measurements of longwave and shortwave radiation were made using an instrument package on the NASA tethered balloon during the FIRE Marine Stratocumulus experiment. Radiation data from two pairs of pyranometers were used to obtain vertical profiles of the near-infrared and total solar fluxes through the boundary layer, while a pair of pyrgeometers supplied measurements of the longwave fluxes in the cloud layer. The radiation observations were analyzed to determine heating rates and to measure the radiative energy budget inside the stratocumulus clouds during several tethered balloon flights. The radiation fields in the cloud layer were also simulated by a two-stream radiative transfer model, which used cloud optical properties derived from microphysical measurements and Mie scattering theory.

  8. Measuring Instrument Constructs of Return Factors for Green Office Building Investments Variables Using Rasch Measurement Model

    Directory of Open Access Journals (Sweden)

    Isa Mona

    2016-01-01

    This paper is a preliminary study on rationalising green office building investments in Malaysia. The aim of this paper is to introduce the application of Rasch measurement model analysis to determine the validity and reliability of each construct in the questionnaire. In achieving this objective, a questionnaire survey consisting of 6 sections was developed, and a total of 106 responses were received from various investors who own and lease office buildings in Kuala Lumpur. The Rasch measurement analysis is used to control the quality of item constructs in the instrument by measuring the specific objectivity within the same dimension, to reduce ambiguous measures, and to give a realistic estimation of precision and implicit quality. The Rasch analysis consists of the summary statistics, item unidimensionality and item measures. Results show that item and respondent (person) reliability are 0.91 and 0.95, respectively.

  9. Combining fixed effects and instrumental variable approaches for estimating the effect of psychosocial job quality on mental health: evidence from 13 waves of a nationally representative cohort study.

    Science.gov (United States)

    Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis

    2017-06-23

    Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13,260 working-age (18-64 years) employees. The exposure variable is the self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40; P < 0.001). In the instrumental variable analysis, a 1-point increase in psychosocial job quality is related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence to confirm job stressors as risk factors for mental ill health using methods that improve causal inference. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
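
    Of the two designs combined in this record, the fixed-effects half is easy to demonstrate: demeaning each person's repeated measures removes any time-invariant confounder. The sketch below simulates panel data with such a confounder and contrasts pooled OLS with the within estimator; the instrumental-variable step (using workplace entitlements to instrument job quality) is omitted, and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_waves = 500, 13

# Person-specific confounder that affects both job quality and MHI-5.
alpha = rng.normal(0, 1, n_persons)[:, None]
job_quality = 0.5 * alpha + rng.normal(0, 1, (n_persons, n_waves))
mhi5 = 1.3 * job_quality + 2.0 * alpha + rng.normal(0, 3, (n_persons, n_waves))

def slope(x, y):
    # Simple bivariate regression slope.
    x, y = x.ravel(), y.ravel()
    return np.cov(x, y)[0, 1] / np.var(x)

# Fixed effects: demean within person, wiping out alpha.
jq_w = job_quality - job_quality.mean(axis=1, keepdims=True)
mh_w = mhi5 - mhi5.mean(axis=1, keepdims=True)

print("pooled OLS:", round(slope(job_quality, mhi5), 2))  # biased upward
print("fixed effects:", round(slope(jq_w, mh_w), 2))      # close to 1.3
```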

  10. A method for the deliberate and deliberative selection of policy instrument mixes for climate change adaptation

    Directory of Open Access Journals (Sweden)

    Heleen L. P. Mees

    2014-06-01

    Full Text Available Policy instruments can help put climate adaptation plans into action. Here, we propose a method for the systematic assessment and selection of policy instruments for stimulating adaptation action. The multi-disciplinary set of six assessment criteria is derived from economics, policy, and legal studies. These criteria are specified for the purpose of climate adaptation by taking into account four challenges to the governance of climate adaptation: uncertainty, spatial diversity, controversy, and social complexity. The six criteria and four challenges are integrated into a step-wise method that enables the selection of instruments starting from a generic assessment and ending with a specific assessment of policy instrument mixes for the stimulation of a specific adaptation measure. We then apply the method to three examples of adaptation measures. The method's merits lie in enabling deliberate choices through a holistic and comprehensive set of adaptation specific criteria, as well as deliberative choices by offering a stepwise method that structures an informed dialog on instrument selection. Although the method was created and applied by scientific experts, policy-makers can also use the method.

  11. Tundish Cover Flux Thickness Measurement Method and Instrumentation Based on Computer Vision in Continuous Casting Tundish

    Directory of Open Access Journals (Sweden)

    Meng Lu

    2013-01-01

    Thickness of tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. The traditional methods of measuring TCF thickness are the single- and double-wire methods, which have several problems, such as personal safety, susceptibility to operator influence, and poor repeatability. To solve these problems, in this paper we designed and built an instrument and present a novel method to measure TCF thickness. The instrument is composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. Our measurement method is based on computer vision algorithms, including an image denoising method, a monocular range measurement method, the scale invariant feature transform (SIFT), and an image gray gradient detection method. Using the present instrument and method, images in the CC tundish can be collected by the camera and transferred to the computer for image processing. Experiments showed that our instrument and method worked well on site at steel plants, can accurately measure the thickness of TCF, and overcome the disadvantages of the traditional measurement methods, which they may even replace.

  12. THE REGULATION OF MONEY CIRCULATION ON THE BASIS OF USING METHODS AND INSTRUMENTS OF MONETARY POLICY

    Directory of Open Access Journals (Sweden)

    S. Mishchenko

    2013-05-01

    The article investigates the instruments and mechanisms for safeguarding the stability of the money market on the basis of implementing an optimal monetary policy regime. It determines the main directions for applying monetary policy methods and instruments to maintain money market stability, and also investigates the influence of the transmission mechanism on the soundness of money circulation.

  13. A variable stiffness mechanism for steerable percutaneous instruments: integration in a needle.

    Science.gov (United States)

    De Falco, Iris; Culmone, Costanza; Menciassi, Arianna; Dankelman, Jenny; van den Dobbelsteen, John J

    2018-06-04

    Needles are advanced tools commonly used in minimally invasive medical procedures. The accurate manoeuvrability of flexible needles through soft tissues is strongly determined by variations in tissue stiffness, which affect the needle-tissue interaction and thus cause needle deflection. This work presents a variable stiffness mechanism for percutaneous needles capable of compensating for variations in tissue stiffness and undesirable trajectory changes. It is composed of compliant segments and rigid plates alternately connected in series and longitudinally crossed by four cables. Tensioning the cables allows omnidirectional steering of the tip and stiffness tuning of the needle. The mechanism was tested separately under different working conditions, demonstrating a capability to exert up to 3.6 N. Afterwards, the mechanism was integrated into a needle, and the overall device was tested in gelatine phantoms simulating the stiffness of biological tissues. The needle demonstrated the capability to vary deflection (from 11.6 to 4.4 mm) and adapt to the inhomogeneity of the phantoms (from 21 to 80 kPa) depending on the activation of the variable stiffness mechanism.

  14. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

    This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods for modeling repeated measures data, as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical example

  15. Viscoelastic Earthquake Cycle Simulation with Memory Variable Method

    Science.gov (United States)

    Hirahara, K.; Ohtani, M.

    2017-12-01

    There have so far been no EQ (earthquake) cycle simulations based on RSF (rate and state friction) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as in elastic cases. Viscoelasticity could, however, have larger effects on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and leads to huge computational costs. This is one reason why there have been almost no simulations in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, introducing memory variables satisfying 1st order differential equations, no hereditary integrals are needed in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), in EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method in EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull the block, at a constant rate, obeying the RSF law. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of a smaller viscosity reduces the recurrence time to a minimum value: a smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to a smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with thickness of 40 km overriding a Maxwell viscoelastic half-space
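
    The computational trick the record refers to can be stated compactly. For an exponential relaxation kernel (as in the standard linear solid), the hereditary stress integral obeys a first-order ODE in a memory variable, so each time step needs only current values rather than the full slip-rate history. The notation below is generic, not the authors':

```latex
\[
\sigma(t)=\int_{0}^{t} A\,e^{-(t-\tau)/\tau_r}\,\dot{s}(\tau)\,d\tau
\qquad\Longrightarrow\qquad
\dot{\sigma}(t)=-\frac{\sigma(t)}{\tau_r}+A\,\dot{s}(t).
\]
```

    Differentiating the integral confirms the equivalence, and the same reduction applies term by term whenever the relaxation function is a sum of exponentials, which is what keeps the cost comparable to the elastic case.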

  16. An application of the variable-r method to subpopulation growth rates in a 19th century agricultural population

    Directory of Open Access Journals (Sweden)

    Corey Sparks

    2009-07-01

    This paper presents an analysis of the differential growth rates of the farming and non-farming segments of a rural Scottish community during the 19th and early 20th centuries, using the variable-r method allowing for net migration. Using this method, I find that the farming population of Orkney, Scotland, showed less variability in its reproduction and growth rates than the non-farming population during a period of net population decline. I conclude by suggesting that the variable-r method can be used in general cases where the relative growth of subpopulations or subpopulation reproduction is of interest.
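
    For readers unfamiliar with the technique, the variable-r relation links the age distribution to age-specific growth rates. One common statement, with net migration folded in as an age-specific rate i(a,t) (our notation, a sketch rather than the paper's exact formulation), is:

```latex
\[
N(x,t)=N(0,t)\,p(x,t)\,
\exp\!\left(-\int_{0}^{x}\big[r(a,t)-i(a,t)\big]\,da\right),
\]
```

    where N(x,t) is the number of persons aged x at time t, p(x,t) is period survivorship to age x, and r(a,t) is the age-specific growth rate. Subpopulation reproduction can then be inferred from observed age distributions and growth rates alone.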

  17. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    Institute of Scientific and Technical Information of China (English)

    MA Min; CHEN Guang-ju

    2005-01-01

    Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual instrument automatic test system, taking as an example a VXIbus-based test software platform developed by the CAT lab of UESTC. A method to model this system based on Petri nets is then proposed. Through this method, the test task scheduling can be analyzed to prevent deadlock or resource conflicts. Finally, the paper analyzes the feasibility of this method.

  18. Density dependence and climate effects in Rocky Mountain elk: an application of regression with instrumental variables for population time series with sampling error.

    Science.gov (United States)

    Creel, Scott; Creel, Michael

    2009-11-01

    1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results
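
    As a concrete illustration of point 3, the sketch below simulates a density-dependent population observed with sampling error and uses the previous observed count as an instrument for the current one; OLS overstates density dependence, while the IV slope recovers the true coefficient. The dynamics, noise levels and variable names are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000

# True Gompertz-type dynamics: r_t = a + b * log N_t + noise (b < 0).
a, b = 1.0, -0.2
logN = np.empty(T)
logN[0] = 5.0
for t in range(T - 1):
    logN[t + 1] = logN[t] + a + b * logN[t] + rng.normal(0, 0.2)

# Observed counts carry sampling error, which biases OLS of r on log N.
obs = logN + rng.normal(0, 0.15, T)
r_obs = obs[1:] - obs[:-1]
x = obs[:-1]          # error-laden regressor (current observed count)
z = obs[:-2]          # lagged observed count as instrument

def two_sls(y, x, z):
    # Stage 1: project the noisy regressor on the instrument.
    Z = np.column_stack([np.ones_like(z), z])
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Stage 2: regress the response on the fitted values.
    X = np.column_stack([np.ones_like(xhat), xhat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

ols = np.polyfit(x, r_obs, 1)[0]
iv = two_sls(r_obs[1:], x[1:], z)
print("OLS slope:", round(ols, 3), " IV slope:", round(iv, 3), " true b:", b)
```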

  19. Breastfeeding and the risk of childhood asthma: A two-stage instrumental variable analysis to address endogeneity.

    Science.gov (United States)

    Sharma, Nivita D

    2017-09-01

    Several explanations for the inconsistent results on the effects of breastfeeding on childhood asthma have been suggested. The purpose of this study was to investigate one unexplored explanation: the presence of a potential endogenous relationship between breastfeeding and childhood asthma. Endogeneity exists when an explanatory variable is correlated with the error term for reasons such as selection bias, reverse causality, and unmeasured confounders. Unadjusted endogeneity will bias the estimated effect of breastfeeding on childhood asthma. To investigate potential endogeneity, a cross-sectional study of breastfeeding practices and incidence of childhood asthma in 87 pediatric patients in Georgia, USA, was conducted using generalized linear modeling and a two-stage instrumental variable analysis. First, the relationship between breastfeeding and childhood asthma was analyzed without considering endogeneity. Second, tests for the presence of endogeneity were performed; having detected endogeneity between breastfeeding and childhood asthma, a two-stage instrumental variable analysis was performed. The first stage of this analysis estimated the duration of breastfeeding and the second stage estimated the risk of childhood asthma. When endogeneity was not taken into account, duration of breastfeeding was found to significantly increase the risk of childhood asthma (relative risk ratio [RR]=2.020, 95% confidence interval [CI]: [1.143-3.570]). After adjusting for endogeneity, duration of breastfeeding significantly reduced the risk of childhood asthma (RR=0.003, 95% CI: [0.000-0.240]). The findings suggest that researchers should consider evaluating how the presence of endogeneity could affect the relationship between duration of breastfeeding and the risk of childhood asthma. © 2017 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.

  20. 30 CFR 75.1719-3 - Methods of measurement; light measuring instruments.

    Science.gov (United States)

    2010-07-01

    ... being measured and a sufficient distance from the surface to allow the light sensing element in the... 30 Mineral Resources (2010-07-01): § 75.1719-3 Methods of measurement; light measuring instruments. (a) Compliance with § 75.1719-1(d)...

  1. Method of case hardening depth testing by using multifunctional ultrasonic testing instrument

    International Nuclear Information System (INIS)

    Salchak, Y A; Sednev, D A; Ardashkin, I B; Kroening, M

    2015-01-01

    The paper describes the usability of ultrasonic case-hardening depth control using a standard ultrasonic inspection instrument. An ultrasonic method of measuring the depth of the hardened layer is proposed. Experimental series with the specified and multifunctional ultrasonic equipment were performed. The obtained results are compared with the results of a reference method of analysis. (paper)

  2. Introducing instrumental variables in the LS-SVM based identification framework

    NARCIS (Netherlands)

    Laurain, V.; Zheng, W-X.; Toth, R.

    2011-01-01

    Least-Squares Support Vector Machines (LS-SVM) represent a promising approach to identify nonlinear systems via nonparametric estimation of the nonlinearities in a computationally and stochastically attractive way. All the methods dedicated to the solution of this problem rely on the minimization of

  3. Field estimation of soil water content. A practical guide to methods, instrumentation and sensor technology

    International Nuclear Information System (INIS)

    2008-01-01

    During a period of five years, an international group of soil water instrumentation experts were contracted by the International Atomic Energy Agency to carry out a range of comparative assessments of soil water sensing methods under laboratory and field conditions. The detailed results of those studies are published elsewhere. Most of the devices examined worked well some of the time, but most also performed poorly in some circumstances. The group was also aware that the choice of a water measurement technology is often made for economic, convenience and other reasons, and that there was a need to be able to obtain the best results from any device used. The choice of a technology is sometimes not made by the ultimate user, or even if it is, the main constraint may be financial rather than technical. Thus, this guide is presented in a way that allows the user to obtain the best performance from any instrument, while also providing guidance as to which instruments perform best under given circumstances. That said, this expert group of the IAEA reached several important conclusions: (1) the field calibrated neutron moisture meter (NMM) remains the most accurate and precise method for soil profile water content determination in the field, and is the only indirect method capable of providing accurate soil water balance data for studies of crop water use, water use efficiency, irrigation efficiency and irrigation water use efficiency, with a minimum number of access tubes; (2) those electromagnetic sensors known as capacitance sensors exhibit much more variability in the field than either the NMM or direct soil water measurements, and they are not recommended for soil water balance studies for this reason (impractically large numbers of access tubes and sensors are required) and because they are rendered inaccurate by changes in soil bulk electrical conductivity (including temperature effects) that often occur in irrigated soils, particularly those containing

  4. Calibration method based on direct radioactivity measurement for radioactive gas monitoring instruments

    International Nuclear Information System (INIS)

    Yoshida, Makoto; Ohi, Yoshihiro; Chida, Tohru; Wu, Youyang.

    1993-01-01

    A calibration method for radioactive gas monitoring instruments was studied. In the method, gaseous radioactivity standards were provided on the basis of direct radioactivity measurement by the diffusion-in long proportional counter (DLPC) method. The radioactivity concentration of the gas mixture through a monitoring instrument was determined by sampling a known volume of the gas mixture into the proportional counter used for the DLPC method. Since oxygen in the gas mixture decreased the counting efficiency in a proportional counter, its influence on calibration was experimentally estimated. It was not serious and could easily be corrected. By the present method, the relation between radioactivity concentration and ionization current was determined for a gas-flow ionization chamber with 1.5 l effective volume. It showed good agreement with the results of other works. (author)

  5. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables

    Directory of Open Access Journals (Sweden)

    Sandvik Leiv

    2011-04-01

    Background: The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for the analysis and presentation of results for discrete numerical variables. Methods: Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. Results: The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. Conclusions: The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
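
    Both of the procedures the authors found adequate are available off the shelf in SciPy; the snippet below runs them on two simulated groups of a bounded discrete outcome. The data and the normal-approximation confidence interval are illustrative choices, not the paper's exact simulation design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Two groups of a discrete numerical outcome bounded between 0 and 3,
# in the spirit of the simulations described above.
g1 = rng.choice([0, 1, 2, 3], size=50, p=[0.4, 0.3, 0.2, 0.1])
g2 = rng.choice([0, 1, 2, 3], size=50, p=[0.2, 0.3, 0.3, 0.2])

# Welch U test: the t test without the equal-variance assumption.
t, p_welch = stats.ttest_ind(g1, g2, equal_var=False)

# Brunner-Munzel test, the other well-performing method.
w, p_bm = stats.brunnermunzel(g1, g2)

# Normal-approximation confidence interval for the difference in means.
diff = g2.mean() - g1.mean()
se = np.sqrt(g1.var(ddof=1) / len(g1) + g2.var(ddof=1) / len(g2))
print(f"diff={diff:.2f}  Welch p={p_welch:.3f}  Brunner-Munzel p={p_bm:.3f}")
print(f"approx 95% CI: ({diff - 1.96 * se:.2f}, {diff + 1.96 * se:.2f})")
```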

  6. Pre-validation methods for developing a patient reported outcome instrument

    Directory of Open Access Journals (Sweden)

    Castillo Mayret M

    2011-08-01

    Background: Measures that reflect patients' assessment of their health are of increasing importance as outcome measures in randomised controlled trials. The methodological approach used in the pre-validation development of new instruments (item generation, item reduction and question formatting) should be robust and transparent. The totality of the content of existing PRO instruments for a specific condition provides a valuable resource (pool of items) that can be utilised to develop new instruments. Such 'top down' approaches are common, but the explicit pre-validation methods are often poorly reported. This paper presents a systematic and generalisable 5-step pre-validation PRO instrument methodology. Methods: The method is illustrated using the example of the Aberdeen Glaucoma Questionnaire (AGQ). The five steps are: (1) generation of a pool of items; (2) item de-duplication (three phases); (3) item reduction (two phases); (4) assessment of the remaining items' content coverage against a pre-existing theoretical framework appropriate to the objectives of the instrument and the target population (e.g. ICF); and (5) qualitative exploration of the target population's views of the new instrument and the items it contains. Results: The AGQ 'item pool' contained 725 items. Three de-duplication phases resulted in reductions of 91, 225 and 48 items, respectively. The item reduction phases discarded 70 items and 208 items, respectively. The draft AGQ contained 83 items with good content coverage. The qualitative exploration ('think aloud' study) resulted in removal of a further 15 items and refinement of the wording of others. The resultant draft AGQ contained 68 items. Conclusions: This study presents a novel methodology for developing a PRO instrument, based on three sources: literature reporting what is important to patients; a theoretically coherent framework; and patients' experience of completing the instrument. By systematically accounting for all items dropped

  7. A new method for the radiation representation of musical instruments in auralizations

    DEFF Research Database (Denmark)

    Rindel, Jens Holger; Otondo, Felipe

    2005-01-01

    A new method for the representation of sound sources that vary their directivity in time in auralizations is introduced. A recording method with multi-channel anechoic recordings is proposed in connection with the use of a multiple virtual source reproduction system in auralizations. Listening ex...... to be significant. Further applications of the method are considered for ensembles within room auralizations as well as in the field of studio recording techniques for large instruments....

  8. Variable aperture-based ptychographical iterative engine method

    Science.gov (United States)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the proposed technique can therefore potentially be applied in a variety of scientific fields.
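
    The core PIE update that such schemes share is compact enough to sketch: revise the Fourier modulus to match the measurement, then correct the object where the illumination is strong. Below is a generic single-pattern update wrapped in a toy demo with known circular apertures of varying radius; it illustrates the principle only and makes no claim about the authors' exact algorithm or its convergence behaviour.

```python
import numpy as np

def pie_update(obj, probe, intensity, alpha=1.0):
    """One generic PIE-style update for a single recorded far-field
    diffraction pattern `intensity` (i.e. |FFT of exit wave|**2)."""
    psi = obj * probe                      # exit wave under this aperture
    Psi = np.fft.fft2(psi)
    # Replace the modulus with the measurement, keep the phase.
    Psi = np.sqrt(intensity) * np.exp(1j * np.angle(Psi))
    psi_new = np.fft.ifft2(Psi)
    # Update the object where the probe (aperture illumination) is strong.
    weight = np.conj(probe) / (np.abs(probe) ** 2).max()
    return obj + alpha * weight * (psi_new - psi)

# Toy demo: a known circular aperture probe and synthetic data.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
truth = np.exp(1j * 0.5 * np.exp(-(x**2 + y**2) / 200))  # phase object
obj = np.ones((n, n), dtype=complex)                     # initial guess

for radius in (8, 12, 16, 20):                           # varied aperture
    probe = ((x**2 + y**2) < radius**2).astype(complex)
    data = np.abs(np.fft.fft2(truth * probe)) ** 2
    for _ in range(20):
        obj = pie_update(obj, probe, data)

mask = (x**2 + y**2) < 20**2
print("residual inside aperture:", np.abs(obj - truth)[mask].mean())
```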

  9. Method for curing polymers using variable-frequency microwave heating

    Science.gov (United States)

    Lauf, Robert J.; Bible, Don W.; Paulauskas, Felix L.

    1998-01-01

    A method for curing polymers (11) incorporating a variable frequency microwave furnace system (10) designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity (34). By varying the frequency of the microwave signal, non-uniformities within the cavity (34) are minimized, thereby achieving a more uniform cure throughout the workpiece (36). A directional coupler (24) is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter (30) is provided for measuring the power delivered to the microwave furnace (32). A second power meter (26) detects the magnitude of reflected power. The furnace cavity (34) may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing.

  10. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method, introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iteration in conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a smaller value in order to achieve better progressive decoding. However, it then takes an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process in the first stages of decoding and then to accelerate it afterwards (e.g., from some chosen iteration onwards). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal
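
    The iteration the record describes has the shape of a damped fixed-point recursion, x_{n+1} = (1 - λ_n) x_n + λ_n W(x_n), with a schedule of parameters λ_n. A toy scalar sketch follows; the contraction W and the schedule are stand-ins for the actual fractal decoding operator.

```python
import numpy as np

def decode(W, x0, lambdas):
    """Interpolation-style iterative decoding: x_{n+1} = (1 - l_n) * x_n
    + l_n * W(x_n), with an iteration-dependent parameter l_n."""
    x = x0
    for lam in lambdas:
        x = (1 - lam) * x + lam * W(x)
    return x

# Toy contraction standing in for the fractal (affine) decoding operator.
W = lambda x: 0.5 * x + 3.0           # fixed point (attractor) at 6.0

x0 = np.zeros(1)
# Small parameters early (slow build-up of detail), larger ones later.
lambdas = np.concatenate([np.full(5, 0.2), np.full(10, 1.0)])
print(decode(W, x0, lambdas))          # approaches the attractor 6.0
```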

  11. FJ-2207 measuring instrument detection pipe surface a level of pollution method

    International Nuclear Information System (INIS)

    Wang Jiangong

    2010-01-01

    Measuring α contamination levels on pipe surfaces is a frequently encountered task in dose-detection work. Because the pipe surface is curved while the measuring probe is flat, accurate measurement is difficult. In this paper, the use of the FJ-2207-type surface contamination meter for measuring α contamination levels on pipes was studied. The instrument's readings for the same source on curved and flat surfaces were compared, yielding correction factors for direct measurement of curved surfaces with this apparatus, and correction factors are given for commonly used pipes of specifications 32-216. The method is convenient and the test results are reliable, providing a reference for accurate measurement of α contamination levels on pipe surfaces. (authors)

  12. USAGE OF PICTOGRAMS TO INTRODUCE MUSICAL INSTRUMENTS TO EDUCABLE MENTALLY RETARDED CHILDREN AS AN ALTERNATIVE METHOD

    Directory of Open Access Journals (Sweden)

    Gunsu YILMA

    2014-01-01

    The purpose of this research is to examine the ability of educable mentally retarded children to perceive musical instruments with the support of visual elements. The research was conducted with each child individually in a special education and rehabilitation centre. The research problem is the level at which mildly mentally retarded children can perceive musical instruments with visual support. In this research, the ability to recognize pictograms through music is introduced as an alternative method: the study examines how educable mentally retarded children perceive pictograms of musical instruments, the aim being to introduce musical instruments to these children through pictograms accompanied by music. The research follows a qualitative approach. Data were obtained with an audio recorder, transcribed into texts, and analyzed using the content analysis method.

  13. A new method for the assessment of the surface topography of NiTi rotary instruments.

    Science.gov (United States)

    Ferreira, F; Barbosa, I; Scelza, P; Russano, D; Neff, J; Montagnana, M; Zaccaro Scelza, M

    2017-09-01

    To describe a new method for the assessment of nanoscale alterations in the surface topography of nickel-titanium endodontic instruments using a high-resolution optical method, and to verify the accuracy of the technique. Noncontact three-dimensional optical profilometry was used to evaluate defects on a size 25, .08 taper reciprocating instrument (WaveOne®), which was subjected to a cyclic fatigue test in a simulated root canal in a clear resin block. For the investigation, an original procedure was established for the analysis of similar areas located 3 mm from the tip of the instrument before and after canal preparation, to enable the repeatability and reproducibility of the measurements with precision. All observations and analyses were made in areas measuring 210 × 210 μm provided by the software of the equipment. The three-dimensional high-resolution image analysis showed clear alterations in the surface topography of the examined cutting blade and flute of the instrument, before and after use, with the presence of surface irregularities such as deformations, debris, grooves, cracks, steps and microcavities. Optical profilometry provided accurate qualitative nanoscale evaluation of similar surfaces before and after the fatigue test. The stability and repeatability of the technique enable a more comprehensive understanding of the effects of wear on the surface of endodontic instruments. © 2016 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  14. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    Science.gov (United States)

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  15. Field astrobiology research instruments and methods in moon-mars analogue site.

    NARCIS (Netherlands)

    Foing, B.H.; Stoker, C.; Zavaleta, J.; Ehrenfreund, P.; Sarrazin, P.; Blake, D.; Page, J.; Pletser, V.; Hendrikse, J.; Oliveira Lebre Direito, M.S.; Kotler, M.; Martins, Z.; Orzechowska, G.; Thiel, C.S.; Clarke, J.; Gross, J.; Wendt, L.; Borst, A.; Peters, S.; Wilhelm, M.-B.; Davies, G.R.; EuroGeoMars 2009 Team, ILEWG

    2011-01-01

    We describe the field demonstration of astrobiology instruments and research methods conducted in and from the Mars Desert Research Station (MDRS) in Utah during the EuroGeoMars campaign 2009 coordinated by ILEWG, ESA/ESTEC and NASA Ames, with the contribution of academic partners. We discuss the

  16. The effects of competition on premiums: using United Healthcare's 2015 entry into Affordable Care Act's marketplaces as an instrumental variable.

    Science.gov (United States)

    Agirdas, Cagdas; Krebs, Robert J; Yano, Masato

    2018-01-08

    One goal of the Affordable Care Act is to increase insurance coverage by improving competition and lowering premiums. To facilitate this goal, the federal government enacted online marketplaces in the 395 rating areas spanning the 34 states that chose not to establish their own state-run marketplaces. Most multivariate regression studies analyzing the effects of competition on premiums suffer from endogeneity, due to simultaneity and omitted-variable biases. However, United Healthcare's decision to enter these marketplaces in 2015 provides the researcher with an opportunity to address this endogeneity problem. Exploiting the variation caused by United Healthcare's entry decision as an instrument for competition, we study the impact of competition on premiums during the first 2 years of these marketplaces. Combining panel data from five different sources and controlling for 12 variables, we find that one more insurer in a rating area leads to a 6.97% reduction in the second-lowest-priced silver plan premium, which is larger than the estimated effects in the existing literature. Furthermore, we run a threshold analysis and find that competition's effects on premiums become statistically insignificant if there are four or more insurers in a rating area. These findings are robust to alternative measures of premiums, inclusion of a non-linear term in the regression models, and a county-level analysis.
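
    As a hedged illustration of the identification strategy this record describes (the file and variable names below are hypothetical, not the authors' actual specification), the entry decision can serve as an instrument for the number of insurers in a two-stage least squares regression, e.g. with the linearmodels package:

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical rating-area panel with premiums, a competition measure,
# controls, and United's 2015 entry indicator as the instrument.
df = pd.read_csv("rating_areas.csv")
df["const"] = 1.0

res = IV2SLS(
    dependent=df["log_silver_premium"],                # second-lowest silver premium (log)
    exog=df[["const", "median_income", "pct_urban"]],  # assumed controls
    endog=df["n_insurers"],                            # endogenous competition measure
    instruments=df["united_entry_2015"],               # instrument: United's entry
).fit(cov_type="clustered", clusters=df["rating_area"])
print(res.summary)
```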

  17. Feasibility of wavelet expansion methods to treat the energy variable

    International Nuclear Information System (INIS)

    Van Rooijen, W. F. G.

    2012-01-01

    This paper discusses the use of the Discrete Wavelet Transform (DWT) to implement a functional expansion of the energy variable in neutron transport. The motivation of the work is to investigate the possibility of adapting the expansion level of the neutron flux in a material region to the complexity of the cross section in that region. If such an adaptive treatment were possible, 'simple' material regions (e.g., moderator regions) would require little effort, while a detailed treatment would be used for 'complex' regions (e.g., fuel regions). Our investigations show that such adaptivity cannot, in fact, be achieved. The most fundamental reason is that in a multi-region system, the energy dependence of the cross section in a material region does not imply that the neutron flux in that region has a similar energy dependence. If adaptivity is sacrificed, the DWT method can be very accurate, but the complexity of such a method is higher than that of an equivalent hyper-fine group calculation. The conclusion is thus that, unfortunately, the DWT approach is not very practical. (authors)
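
    For readers unfamiliar with the machinery referred to here, the sketch below (using the PyWavelets package on a fabricated fine-group spectrum, not data from the paper) shows the kind of expansion-and-truncation step such a feasibility study evaluates:

```python
import numpy as np
import pywt

# Fabricated fine-group flux spectrum, for illustration only
flux = np.exp(-np.linspace(0.0, 5.0, 256)) + 0.05 * np.random.default_rng(0).random(256)

coeffs = pywt.wavedec(flux, "db4", level=4)                     # multi-level DWT
cut = 0.01 * max(abs(c).max() for c in coeffs)
coeffs = [pywt.threshold(c, cut, mode="hard") for c in coeffs]  # drop small details
flux_hat = pywt.waverec(coeffs, "db4")                          # truncated reconstruction

print("max reconstruction error:", np.abs(flux_hat[: flux.size] - flux).max())
```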

  18. OCOPTR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation. DRVOCR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation

    International Nuclear Information System (INIS)

    Nazareth, J. L.

    1979-01-01

    1 - Description of problem or function: OCOPTR and DRVOCR are computer programs designed to find minima of non-linear differentiable functions f: Rⁿ → R with n-dimensional domains. OCOPTR requires that the user only provide function values (i.e., it is a derivative-free routine). DRVOCR requires the user to supply both function and gradient information. 2 - Method of solution: OCOPTR and DRVOCR use the variable metric (or quasi-Newton) method of Davidon (1975). For OCOPTR, the derivatives are estimated by finite differences along a suitable set of linearly independent directions. For DRVOCR, the derivatives are user-supplied. Some features of the codes are the storage of the approximation to the inverse Hessian matrix in lower trapezoidal factored form and the use of an optimally-conditioned updating method. Linear equality constraints are permitted subject to the initial Hessian factor being chosen correctly. 3 - Restrictions on the complexity of the problem: The functions to which the routine is applied are assumed to be differentiable. The routine also requires n²/2 + O(n) storage locations, where n is the problem dimension
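
    The same class of variable metric (quasi-Newton) iteration is available off the shelf in SciPy; the sketch below is a generic illustration of both usage modes (it is not the OCOPTR/DRVOCR code), minimizing the Rosenbrock test function with BFGS, which likewise maintains an approximation to the inverse Hessian:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])

# Derivative-free mode: the gradient is estimated by finite differences (cf. OCOPTR)
res_fd = minimize(rosen, x0, method="BFGS")

# Gradient-supplied mode (cf. DRVOCR)
res_grad = minimize(rosen, x0, jac=rosen_der, method="BFGS")

print(res_fd.x, res_grad.x)   # both converge to the minimum at (1, 1)
```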

  19. A review of modern instrumental methods of elemental analysis of petroleum related material. Part 2

    International Nuclear Information System (INIS)

    Nadkarni, R.A.

    1991-01-01

    In this paper a review is presented of the state of the art in elemental analysis of petroleum-related materials (crude oil, gasoline, additives, and lubricants) using modern instrumental analysis techniques. The major instrumental techniques used for elemental analysis of petroleum products include atomic absorption spectrometry (both with flame and with graphite furnace atomizer), inductively coupled plasma atomic emission spectrometry, ion chromatography, microelemental methods, neutron activation, spark source mass spectrometry, and x-ray fluorescence. Each of these techniques is compared for its advantages, disadvantages, and typical applications in the petroleum field

  20. New highly sensitive method of simultaneous instrumental neutron activation determination of 12 microelements in vine

    International Nuclear Information System (INIS)

    Shoniya, N.I.

    1977-01-01

    The main principles and methods of simultaneous multi-element instrumental neutron activation determination of microelements in vine seeds are presented. The method permits quantitative evaluation for every single seed. It is shown that instrumental neutron activation analysis with a high-resolution semiconductor spectrometer and a mini-computer permits serial determination of 12 microelements in individual vine seeds of different varieties. This method makes it possible to determine deficient or excess content of biologically important microelements in soils, plants, fruit and genetic material (seeds), and thus to determine the optimum conditions for growing plants by applying microelement fertilizers as extra nutrients

  1. Design and operation of dust measuring instrumentation based on the beta-radiation method

    International Nuclear Information System (INIS)

    Lilienfeld, P.

    1975-01-01

    The theory, instrument design aspects and applications of beta-radiation attenuation for the measurement of the mass concentration of airborne particulates are reviewed. Applicable methods of particle collection, beta sensing configurations, source (⁶³Ni, ¹⁴C, ¹⁴⁷Pm, ⁸⁵Kr) and detector design criteria, electronic signal processing, digital control and instrument programming techniques are treated. Advantages, limitations and error sources of beta-attenuation instrumentation are analyzed. Applications to industrial dust measurements, source testing, ambient monitoring, and particle size analysis are the major areas of practical utilization of this technique, and its inherent capability for automated and unattended operation provides compatibility with process control synchronization and alarm, telemetry, and incorporation into pollution monitoring network sensing stations. (orig.)
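
    The relation at the heart of such instruments is the near-exponential beta transmission law I = I0·exp(-μx), with x the areal mass density of collected dust and μ a mass absorption coefficient set by the source's beta end-point energy. A minimal sketch of the inversion (the numeric μ and count values are assumptions for illustration, not figures from the review):

```python
import math

def mass_loading(i0_counts, i_counts, mu_cm2_per_g):
    """Areal mass density x (g/cm^2) from beta transmission I = I0 * exp(-mu * x)."""
    return math.log(i0_counts / i_counts) / mu_cm2_per_g

mu = 250.0                                # cm^2/g, assumed coefficient for the sketch
x = mass_loading(10000, 9200, mu)         # counts through clean and loaded filter
print(f"collected mass loading: {x * 1e6:.0f} ug/cm^2")   # ~334 ug/cm^2
```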

  2. Experimental innovations in surface science a guide to practical laboratory methods and instruments

    CERN Document Server

    Yates, John T

    2015-01-01

    This book is a new edition of a classic text on experimental methods and instruments in surface science. It offers practical insight useful to chemists, physicists, and materials scientists working in experimental surface science. This enlarged second edition contains almost 300 descriptions of experimental methods. The more than 50 active areas with individual scientific and measurement concepts and activities relevant to each area are presented in this book. The key areas covered are: Vacuum System Technology, Mechanical Fabrication Techniques, Measurement Methods, Thermal Control, Delivery of Adsorbates to Surfaces, UHV Windows, Surface Preparation Methods, High Area Solids, Safety. The book is written for researchers and graduate students.

  3. Rare earths analysis of rock samples by instrumental neutron activation analysis, internal standard method

    International Nuclear Information System (INIS)

    Silachyov, I.

    2016-01-01

    The application of instrumental neutron activation analysis for the determination of long-lived rare earth elements (REE) in rock samples is considered in this work. Two different methods are statistically compared: the well established external standard method carried out using standard reference materials, and the internal standard method (ISM), using Fe, determined through X-ray fluorescence analysis, as an element-comparator. The ISM proved to be the more precise method for a wide range of REE contents and can be recommended for routine practice. (author)

  4. Treatment of thoracolumbar burst fractures with variable screw placement or Isola instrumentation and arthrodesis: case series and literature review.

    Science.gov (United States)

    Alvine, Gregory F; Swain, James M; Asher, Marc A; Burton, Douglas C

    2004-08-01

    The controversy of burst fracture surgical management is addressed in this retrospective case study and literature review. The series consisted of 40 consecutive patients, index included, with 41 fractures treated with stiff, limited segment transpedicular bone-anchored instrumentation and arthrodesis from 1987 through 1994. No major acute complications such as death, paralysis, or infection occurred. For the 30 fractures with pre- and postoperative computed tomography studies, spinal canal compromise was 61% and 32%, respectively. Neurologic function improved in 7 of 14 patients (50%) and did not worsen in any. The principal problem encountered was screw breakage, which occurred in 16 of the 41 (39%) instrumented fractures. As we have previously reported, transpedicular anterior bone graft augmentation significantly decreased variable screw placement (VSP) implant breakage. However, it did not prevent Isola implant breakage in two-motion segment constructs. Compared with VSP, Isola provided better sagittal plane realignment and constructs that have been found to be significantly stiffer. Unplanned reoperation was necessary in 9 of the 40 patients (23%). At 1- and 2-year follow-up, 95% and 79% of patients were available for study, and a satisfactory outcome was achieved in 84% and 79%, respectively. These satisfaction and reoperation rates are consistent with the literature of the time. Based on these observations and the loads to which implant constructs are exposed following posterior realignment and stabilization of burst fractures, we recommend that three- or four-motion segment constructs, rather than two motion, be used. To save valuable motion segments, planned construct shortening can be used. An alternative is sequential or staged anterior corpectomy and structural grafting.

  5. Electromagnetic variable degrees of freedom actuator systems and methods

    Science.gov (United States)

    Montesanti, Richard C [Pleasanton, CA; Trumper, David L [Plaistow, NH; Kirtley, Jr., James L.

    2009-02-17

    The present invention provides a variable reluctance actuator system and method that can be adapted for simultaneous rotation and translation of a moving element by applying a normal-direction magnetic flux on the moving element. In a beneficial example arrangement, the moving element includes a swing arm that carries a cutting tool at a set radius from an axis of rotation so as to produce a rotary fast tool servo that provides a tool motion in a direction substantially parallel to the surface-normal of a workpiece at the point of contact between the cutting tool and workpiece. An actuator rotates a swing arm such that a cutting tool moves toward and away from a mounted rotating workpiece in a controlled manner in order to machine the workpiece. Position sensors provide rotation and displacement information for a swing arm to a control system. A control system commands and coordinates motion of the fast tool servo with the motion of a spindle, rotating table, cross-feed slide, and in feed slide of a precision lathe.

  6. Evaluation of two disinfection/sterilization methods on silicon rubber-based composite finishing instruments.

    Science.gov (United States)

    Lacerda, Vánia A; Pereira, Leandro O; Hirata JUNIOR, Raphael; Perez, Cesar R

    2015-12-01

    To evaluate the effectiveness of disinfection/sterilization methods and their effects on the polishing capacity, micromorphology, and composition of two different composite finishing and polishing instruments. Two brands of finishing and polishing instruments (Jiffy and Optimize) were analyzed. For the antimicrobial test, 60 points (30 of each brand) were used for polishing composite restorations and submitted to three different disinfection/sterilization treatments: none (control), autoclaving, and immersion in peracetic acid for 60 minutes. In vitro tests were performed to evaluate the polishing performance on resin composite disks (Amelogen) using a 3D scanner (Talyscan), and to evaluate the effects on the points' surface composition (XRF) and micromorphology (SEM) after completing a polishing and sterilizing routine five times. Both sterilization/disinfection methods were efficient against cultivable oral organisms, and no deleterious modification of the point surface was observed.

  7. Variability of floods, droughts and windstorms over the past 500 years in Central Europe based on documentary and instrumental data

    Science.gov (United States)

    Brazdil, Rudolf

    2016-04-01

    Hydrological and meteorological extremes (HMEs) in Central Europe during the past 500 years can be reconstructed from instrumental and documentary data. Documentary data about weather and related phenomena represent the basic source of information for historical climatology and hydrology, which deal with the reconstruction of past climate and HMEs, their perception and their impacts on human society. The paper presents the basic division of documentary data into (i) direct descriptions of HMEs and their proxies on the one hand and (ii) individual and institutional data sources on the other. Several groups of documentary evidence, such as narrative written records (annals, chronicles, memoirs), visual daily weather records, official and personal correspondence, special prints, financial and economic records (with particular attention to taxation data), newspapers, pictorial documentation, chronograms, epigraphic data, early instrumental observations, early scientific papers and communications, are discussed with respect to the extraction of information about HMEs, usually concerning their occurrence, severity, seasonality, meteorological causes, perception and human impacts. The paper further presents an analysis of the 500-year variability of floods, droughts and windstorms on the basis of series created by combining documentary and instrumental data. Results, advantages and drawbacks of this approach are documented with examples from the Czech Lands. The analysis of floods concentrates on the River Vltava (Prague) and the River Elbe (Děčín), which show the highest frequency of floods in the 19th century (mainly of winter synoptic type) and in the second half of the 16th century (summer synoptic type). Also reported are the most disastrous floods (August 1501, March and August 1598, February 1655, June 1675, February 1784, March 1845, February 1862, September 1890, August 2002) and the European context of floods in the severe winter 1783/84. Drought

  8. An ergonomics based design research method for the arrangement of helicopter flight instrument panels.

    Science.gov (United States)

    Alppay, Cem; Bayazit, Nigan

    2015-11-01

    In this paper, we study the arrangement of displays in flight instrument panels of multi-purpose civil helicopters following a user-centered design method based on ergonomics principles. Our methodology can also be described as a user-interface arrangement methodology based on user opinions and preferences. This study can be outlined as gathering user-centered data using two different research methods and then analyzing and integrating the collected data to come up with an optimal instrument panel design. An interview with helicopter pilots formed the first step of our research. In that interview, pilots were asked to provide a quantitative evaluation of basic interface arrangement principles. In the second phase of the research, a paper prototyping study was conducted with same pilots. The final phase of the study entailed synthesizing the findings from interviews and observational studies to formulate an optimal flight instrument arrangement methodology. The primary results that we present in our paper are the methodology that we developed and three new interface arrangement concepts, namely relationship of inseparability, integrated value and locational value. An optimum instrument panel arrangement is also proposed by the researchers. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. A multi-criteria evaluation method for climate change mitigation policy instruments

    International Nuclear Information System (INIS)

    Konidari, Popi; Mavrakis, Dimitrios

    2007-01-01

    This paper presents an integrated multi-criteria analysis method for the quantitative evaluation of climate change mitigation policy instruments. The method consists of: (i) a set of criteria supported by sub-criteria, all of which describe the complex framework under which these instruments are selected by policy makers and implemented, (ii) an Analytical Hierarchy Process (AHP) for defining weight coefficients for criteria and sub-criteria according to the preferences of three stakeholder groups and (iii) a Multi-Attribute Utility Theory (MAUT)/Simple Multi-Attribute Ranking Technique (SMART) process for assigning grades to each instrument that is evaluated for its performance under a specific sub-criterion. Arguments for the selected combination of these standard methods and definitions for criteria/sub-criteria are quoted. Consistency and robustness tests are performed. The functionality of the proposed method is tested by assessing the aggregate performances of the EU emission trading scheme in Denmark, Germany, Greece, Italy, Netherlands, Portugal, Sweden and the United Kingdom. Conclusions are discussed
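
    A minimal numeric sketch of the MAUT/SMART aggregation step (the criteria names, weights, and grades below are invented for illustration; the AHP weight-derivation step is reduced here to a fixed weight vector):

```python
import numpy as np

criteria = ["environmental performance", "political acceptability", "feasibility"]
weights = np.array([0.5, 0.3, 0.2])     # assumed AHP-derived weights, summing to 1

# Grades (0-10) of two hypothetical policy instruments under each criterion
grades = {
    "emission trading": np.array([8.0, 6.5, 7.0]),
    "carbon tax":       np.array([7.5, 5.0, 8.5]),
}

for name, g in grades.items():
    print(f"{name}: aggregate performance = {weights @ g:.2f}")
```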

  10. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    Science.gov (United States)

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification, and in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. Regarding summarization of the feature activation, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are experimented with, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
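
    The pooling variants compared in the study reduce a frame-by-frame activation matrix to a single clip-level feature vector; a minimal numpy sketch (the activation matrix is random stand-in data):

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random((500, 128))   # 500 frames x 128 learned sparse features

max_pool = activations.max(axis=0)     # max-pooling
avg_pool = activations.mean(axis=0)    # average-pooling
std_pool = activations.std(axis=0)     # standard-deviation pooling

clip_feature = np.concatenate([avg_pool, std_pool])  # one possible aggregation
print(clip_feature.shape)              # (256,)
```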

  11. Validation parameters of instrumental method for determination of total bacterial count in milk

    Directory of Open Access Journals (Sweden)

    Nataša Mikulec

    2004-10-01

    Full Text Available The method of flow cytometry, as a rapid, instrumental and routine microbiological method, is used for the determination of the total bacterial count in milk. The results of flow cytometry are expressed as an individual bacterial cell count. Problems regarding the interpretation of total bacterial count results can be avoided by transforming the results of the flow cytometry method onto the scale of the reference method (HRN ISO 6610:2001). The method of flow cytometry, like any analytical method, requires validation and verification according to the HRN EN ISO/IEC 17025:2000 standard. This paper describes the validation parameters: accuracy, precision, specificity, range, robustness and measurement uncertainty for the flow cytometry method.

  12. Infectious complications in head and neck cancer patients treated with cetuximab: propensity score and instrumental variable analysis.

    Directory of Open Access Journals (Sweden)

    Ching-Chih Lee

    Full Text Available BACKGROUND: To compare the infection rates between cetuximab-treated patients with head and neck cancers (HNC) and untreated patients. METHODOLOGY: A national cohort of 1083 HNC patients identified in 2010 from the Taiwan National Health Insurance Research Database was established. After patients were followed for one year, propensity score analysis and instrumental variable analysis were performed to assess the association between cetuximab therapy and the infection rates. RESULTS: HNC patients receiving cetuximab (n = 158) were older, had lower SES, and resided more frequently in rural areas as compared to those without cetuximab therapy. In total, 125 patients presented infections: 32 (20.3%) in the group using cetuximab and 93 (10.1%) in the group not using it. The propensity score analysis revealed a 2.3-fold (adjusted odds ratio [OR] = 2.27; 95% CI, 1.46-3.54; P = 0.001) increased risk for infection in HNC patients treated with cetuximab. However, using IVA, the average treatment effect of cetuximab was not statistically associated with increased risk of infection (OR, 0.87; 95% CI, 0.61-1.14). CONCLUSIONS: Cetuximab therapy was not statistically associated with infection rate in HNC patients. However, older HNC patients using cetuximab may incur up to a 33% infection rate during one year. Particular attention should be given to older HNC patients treated with cetuximab.
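
    For the mechanics of the propensity-score step described here, a bare-bones sketch (hypothetical column names; not the authors' exact model) fits treatment assignment on observed covariates and then adjusts the outcome model with the estimated score:

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("hnc_cohort.csv")                 # hypothetical cohort file
covs = sm.add_constant(df[["age", "ses", "rural"]])

# Stage 1: propensity of receiving cetuximab given observed covariates
ps_model = sm.Logit(df["cetuximab"], covs).fit(disp=0)
df["pscore"] = ps_model.predict(covs)

# Stage 2: infection model adjusting for the estimated propensity score
X = sm.add_constant(df[["cetuximab", "pscore"]])
outcome = sm.Logit(df["infection"], X).fit(disp=0)
print(outcome.params["cetuximab"])                 # adjusted log-odds ratio
```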

  13. Is foreign direct investment good for health in low and middle income countries? An instrumental variable approach.

    Science.gov (United States)

    Burns, Darren K; Jones, Andrew P; Goryakin, Yevgeniy; Suhrcke, Marc

    2017-05-01

    There is a scarcity of quantitative research into the effect of FDI on population health in low and middle income countries (LMICs). This paper investigates the relationship using annual panel data from 85 LMICs between 1974 and 2012. When controlling for time trends, country fixed effects, correlation between repeated observations, relevant covariates, and endogeneity via a novel instrumental variable approach, we find FDI to have a beneficial effect on overall health, proxied by life expectancy. When investigating age-specific mortality rates, we find a stronger beneficial effect of FDI on adult mortality, yet no association with either infant or child mortality. Notably, FDI effects on health remain undetected in all models which do not control for endogeneity. Exploring the effect of sector-specific FDI on health in LMICs, we provide preliminary evidence of a weak inverse association between secondary (i.e. manufacturing) sector FDI and overall life expectancy. Our results thus suggest that FDI has provided an overall benefit to population health in LMICs, particularly in adults, yet investments into the secondary sector could be harmful to health. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Reference Proteome Extracts for Mass Spec Instrument Performance Validation and Method Development

    Science.gov (United States)

    Rosenblatt, Mike; Urh, Marjeta; Saveliev, Sergei

    2014-01-01

    Biological samples of high complexity are required to test protein mass spec sample preparation procedures and validate mass spec instrument performance. Total cell protein extracts provide the needed sample complexity. However, to be compatible with mass spec applications, such extracts should meet a number of design requirements: (i) compatibility with LC/MS (free of detergents, etc.); (ii) high protein integrity (minimal level of protein degradation and non-biological PTMs); (iii) compatibility with common sample preparation methods such as proteolysis, PTM enrichment and mass-tag labeling; (iv) lot-to-lot reproducibility. Here we describe total protein extracts from yeast and human cells that meet the above criteria. Two extract formats have been developed: intact protein extracts, with primary use for sample preparation method development and optimization, and pre-digested extracts (peptides), with primary use for instrument validation and performance monitoring

  15. An Instrumental Variable Probit (IVP) Analysis on Depressed Mood in Korea: The Impact of Gender Differences and Other Socio-Economic Factors

    Directory of Open Access Journals (Sweden)

    Lara Gitto

    2015-08-01

    Full Text Available Background: Depression is a mental health state whose frequency has been increasing in modern societies. It imposes a great burden because of its strong impact on people's quality of life and happiness. Depression can be reliably diagnosed and treated in primary care: if more people could get effective treatments earlier, the costs related to depression would be reduced. The aim of this study was to examine the influence of socio-economic factors and gender on depressed mood, focusing on Korea. In spite of the great number of empirical studies carried out for other countries, few epidemiological studies have examined the socio-economic determinants of depression in Korea, and those were either limited to samples of employed women or did not control for individual health status. Moreover, likely data endogeneity (i.e., correlation between a regressor and the error term as a result of autocorrelation or simultaneity; here, depressed mood may be due to health factors that are, in turn, caused by depression) might bias the results, so the present study proposes an empirical approach based on instrumental variables to deal with this problem. Methods: Data for the year 2008 from the Korea National Health and Nutrition Examination Survey (KNHANES) were employed. About seven thousand people (N = 6,751, of which 43% were males and 57% females, aged from 19 to 75 years old) were included in the sample considered in the analysis. In order to take into account the possible endogeneity of some explanatory variables, two Instrumental Variables Probit (IVP) regressions were estimated; the variables for which instrumental equations were estimated were related to the participation of women in the workforce and to good health, as reported by people in the sample. Explanatory variables were related to age, gender, family factors (such as the number of family members and marital status) and socio
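
    A two-stage control-function sketch of an IV probit of the kind described (the column names and the instrument are hypothetical, and a full IVP estimator would also correct the second-stage standard errors):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("knhanes_2008.csv")               # hypothetical survey extract

# Stage 1: regress the endogenous regressor (self-reported good health)
# on exogenous covariates plus an excluded instrument
Z = sm.add_constant(df[["age", "female", "married", "instrument"]])
stage1 = sm.OLS(df["good_health"], Z).fit()
df["v_hat"] = stage1.resid                         # control-function residual

# Stage 2: probit for depressed mood, including the first-stage residual
X = sm.add_constant(df[["good_health", "age", "female", "married", "v_hat"]])
stage2 = sm.Probit(df["depressed"], X).fit(disp=0)
print(stage2.params)
```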

  16. Precision of GNSS instruments by static method comparing in real time

    Directory of Open Access Journals (Sweden)

    Slavomír Labant

    2009-09-01

    Full Text Available This paper describes a comparison of the measuring accuracy of two instruments from the firm Leica. One of them receives signals only from GPS satellites, while the other works with both GPS and GLONASS satellites. Measurements were carried out by the RTK static method with 2-minute observations. Measurement processing is separated into X, Y (position) and h (height). Adjustment of direct observations is used as the adjustment method.

  17. Instruments and methods of scintillation spectra processing for radiation control tasks

    International Nuclear Information System (INIS)

    Antropov, S.Yu.; Ermilov, A.P.; Ermilov, S.A.; Komarov, N.A.; Krokhin, I.I.

    2005-01-01

    For a gamma-spectrometer, the response function can be calculated on the basis of sensitivity data, energy resolution, and the form of the Compton part of the spectrum. On a scintillation gamma-spectrometer with a NaI(Tl) crystal of 63×63 mm, the method allows mixtures of 5-10 components to be resolved; on a beta-spectrometer, mixtures of 2-3 components. The approach is implemented in the 'Progress' program-instrument complexes

  18. Impact of Uniform Methods on Interlaboratory Antibody Titration Variability: Antibody Titration and Uniform Methods.

    Science.gov (United States)

    Bachegowda, Lohith S; Cheng, Yan H; Long, Thomas; Shaz, Beth H

    2017-01-01

    Context: Substantial variability between different antibody titration methods prompted development and introduction of uniform methods in 2008. Objective: To determine whether uniform methods consistently decrease interlaboratory variation in proficiency testing. Design: Proficiency testing data for antibody titration between 2009 and 2013 were obtained from the College of American Pathologists. Each laboratory was supplied plasma and red cells to determine anti-A and anti-D antibody titers by their standard method: gel or tube by uniform or other methods at different testing phases (immediate spin and/or room temperature [anti-A], and/or anti-human globulin [AHG: anti-A and anti-D]) with different additives. Interlaboratory variations were compared by analyzing the distribution of titer results by method and phase. Results: A median of 574 and 1100 responses were reported for anti-A and anti-D antibody titers, respectively, during a 5-year period. The 3 most frequent (median) methods performed for anti-A antibody were uniform tube room temperature (147.5; range, 119-159), uniform tube AHG (143.5; range, 134-150), and other tube AHG (97; range, 82-116); for anti-D antibody, the methods were other tube (451; range, 431-465), uniform tube (404; range, 382-462), and uniform gel (137; range, 121-153). Of the larger reported methods, the uniform gel AHG phase for anti-A and anti-D antibodies had the most participants with the same result (mode). For anti-A antibody, 0 of 8 (uniform versus other tube room temperature) and 1 of 8 (uniform versus other tube AHG), and for anti-D antibody, 0 of 8 (uniform versus other tube) and 0 of 8 (uniform versus other gel) proficiency tests showed significant titer variability reduction. Conclusions: Uniform methods harmonize laboratory techniques but rarely reduce interlaboratory titer variance in comparison with other methods.

  19. Field calculations. Part I: Choice of variables and methods

    International Nuclear Information System (INIS)

    Turner, L.R.

    1981-01-01

    Magnetostatic calculations can involve (in order of increasing complexity) conductors only, material with constant or infinite permeability, or material with variable permeability. We consider here only the most general case, calculations involving ferritic material with variable permeability. Variables suitable for magnetostatic calculations are the magnetic field, the magnetic vector potential, and the magnetic scalar potential. For two-dimensional calculations the potentials, which each have only one component, have advantages over the field, which has two components. Because it is a single-valued variable, the vector potential is perhaps the best variable for two-dimensional calculations. In three dimensions, both the field and the vector potential have three components; the scalar potential, with only one component, provides a much smaller system of equations to be solved. However, the scalar potential is not single-valued. To circumvent this problem, a calculation with two scalar potentials can be performed. The scalar potential whose source is the conductors can be calculated directly by the Biot-Savart law, and the scalar potential whose source is the magnetized material is single-valued. However, in some situations the fields from the two potentials nearly cancel, and numerical accuracy is lost. The 3-D magnetostatic program TOSCA employs a single total scalar potential; the program GFUN uses the magnetic field as its variable
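
    The two-potential split mentioned above can be written compactly (standard magnetostatics, stated from general knowledge rather than from this record): in regions containing conductors the field is decomposed as

```latex
\mathbf{H} = \mathbf{H}_s - \nabla\phi_r, \qquad
\mathbf{H}_s(\mathbf{r}) = \frac{1}{4\pi}\int
\frac{\mathbf{J}(\mathbf{r}')\times(\mathbf{r}-\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|^{3}}\,dV',
```

    where H_s is the Biot-Savart field of the conductors and φ_r the reduced scalar potential, while a single-valued total potential is used in the current-free magnetized regions; the near-cancellation noted above arises when ∇φ_r ≈ H_s inside highly permeable material.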

  20. Developments in FT-ICR MS instrumentation, ionization techniques, and data interpretation methods for petroleomics.

    Science.gov (United States)

    Cho, Yunju; Ahmed, Arif; Islam, Annana; Kim, Sunghwan

    2015-01-01

    Because of the increasing importance of heavy and unconventional crude oil as an energy source, there is a growing need for petroleomics: the pursuit of more complete and detailed knowledge of the chemical compositions of crude oil. Crude oil has an extremely complex nature; hence, techniques with ultra-high resolving capabilities, such as Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS), are necessary. FT-ICR MS has been successfully applied to the study of heavy and unconventional crude oils such as bitumen and shale oil. However, the analysis of crude oil with FT-ICR MS is not trivial, and it has pushed analysis to the limits of instrumental and methodological capabilities. For example, high-resolution mass spectra of crude oils may contain over 100,000 peaks that require interpretation. To visualize large data sets more effectively, data processing methods such as Kendrick mass defect analysis and statistical analyses have been developed. The successful application of FT-ICR MS to the study of crude oil has been critically dependent on key developments in FT-ICR MS instrumentation and data processing methods. This review offers an introduction to the basic principles, FT-ICR MS instrumentation development, ionization techniques, and data interpretation methods for petroleomics and is intended for readers having no prior experience in this field of study. © 2014 Wiley Periodicals, Inc.
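
    One of the data-reduction tools named here, Kendrick mass defect analysis, is compact enough to show directly: masses are rescaled so that the CH2 repeat unit becomes exactly 14, after which members of a homologous series share (nearly) the same defect. The formula is standard; the peak masses below are fabricated:

```python
NOMINAL_CH2 = 14.0
EXACT_CH2 = 14.01565

def kendrick_mass_defect(iupac_mass):
    kendrick_mass = iupac_mass * (NOMINAL_CH2 / EXACT_CH2)
    return round(kendrick_mass) - kendrick_mass   # one common sign convention

for m in (310.2896, 324.3052, 338.3209):          # fabricated CH2-series peaks
    print(m, f"KMD = {kendrick_mass_defect(m):.4f}")   # all ~0.057
```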

  1. The use of a combination of instrumental methods to assess change in sensory crispness during storage of a "Honeycrisp" apple breeding family.

    Science.gov (United States)

    Chang, Hsueh-Yuan; Vickers, Zata M; Tong, Cindy B S

    2018-04-01

    Loss of crispness in apple fruit during storage reduces the fruit's fresh sensation and consumer acceptance. Apple varieties that maintain crispness thus have higher potential for longer-term consumer appeal. To efficiently phenotype crispness, several instrumental methods have been tested, but variable results were obtained when different apple varieties were assayed. To extend these studies, we assessed the extent to which instrumental measurements correlate with and predict sensory crispness, with a focus on crispness maintenance. We used an apple breeding family derived from a cross between "Honeycrisp" and "MN1764," which segregates for crispness maintenance. Three types of instrumental measurements (puncture, snapping, and mechanical-acoustic tests) and sensory evaluation were performed on fruit at harvest and after 8 weeks of cold storage. Overall, 20 genotypes from the family and the 2 parents were characterized by 19 force and acoustic measures. In general, crispness was more related to force than to acoustic measures. Force linear distance and maximum force, as measured by the mechanical-acoustic test, were best correlated with sensory crispness and change in crispness, respectively. The correlations varied by apple genotype. The best multiple linear regression model to predict change in sensory crispness between harvest and storage of fruit of this breeding family incorporated both force and acoustic measures. This work compared the abilities of instrumental tests to predict sensory crispness maintenance of apple fruit. The use of an instrumental method that is highly correlated with sensory crispness evaluation can enhance the efficiency and reduce the cost of measuring crispness for breeding purposes. This study showed that sensory crispness and change in crispness after storage of an apple breeding family were reliably predicted with a combination of instrumental measurements and multiple variable analyses. The strategy potentially can be applied to other

  2. Unexpected but Most Welcome: Mixed Methods for the Validation and Revision of the Participatory Evaluation Measurement Instrument

    Science.gov (United States)

    Daigneault, Pierre-Marc; Jacob, Steve

    2014-01-01

    Although combining methods is nothing new, more contributions about why and how to mix methods for validation purposes are needed. This article presents a case of validating the inferences drawn from the Participatory Evaluation Measurement Instrument, an instrument that purports to measure stakeholder participation in evaluation. Although the…

  3. Instrumentation to Measure the Capacitance of Biosensors by Sinusoidal Wave Method

    Directory of Open Access Journals (Sweden)

    Pavan Kumar KATHUROJU

    2009-09-01

    Full Text Available Microcontroller-based instrumentation to measure the capacitance of biosensors has been developed. It is based on a frequency-domain technique with a sinusoidal wave input. Changes in the capacitance of the biosensor caused by the analyte-specific reaction are calculated from the current flowing through the sample. A dedicated 8-bit microcontroller (AT89C52) and its associated peripherals are employed for the hardware, and application-specific software is developed in the 'C' language. The paper describes the methodology and instrumentation details, along with a specific application to glucose sensing. The measurements are conducted with a glucose oxidase-based capacitance biosensor, and the obtained results are compared with the conventional method of sugar measurement using UV-Visible spectroscopy (phenol-sulphuric acid assay method). The measurement accuracy of the instrument is found to be ±5%. Experiments are conducted on the glucose sensor with different bias voltages; for bias voltages from 0.5 to 0.7 V, the measurements are satisfactory for this application.
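
    The relation such an instrument exploits is the admittance of an ideal capacitor under sinusoidal drive, I = 2πfCV, so capacitance follows from the measured current. A generic worked example with assumed readings (not the paper's calibration data):

```python
import math

def capacitance(i_rms, v_rms, freq_hz):
    """C = I / (2 * pi * f * V) for an ideal capacitor under sinusoidal excitation."""
    return i_rms / (2 * math.pi * freq_hz * v_rms)

# Assumed readings: 1 kHz excitation, 0.6 V RMS drive, 3.77 uA RMS current
c = capacitance(3.77e-6, 0.6, 1e3)
print(f"C = {c * 1e9:.2f} nF")   # ~1.00 nF
```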

  4. Studies on the instrumental neutron activation analysis by cadmium ratio method and pair comparator method

    Energy Technology Data Exchange (ETDEWEB)

    Chao, H E; Lu, W D; Wu, S C

    1977-12-01

    The cadmium ratio method and pair comparator method provide a solution for the effects on the effective activation factors resulting from the variation of the neutron spectrum at different irradiation positions, as usually encountered in the single comparator method. The relations between the activation factors and the neutron spectrum, in terms of the cadmium ratio of the comparator Au or of the activation factor of the Co-Au pair, have been determined for the elements Sc, Cr, Mn, Co, La, Ce, Sm, and Th. The activation factors of the elements at any irradiation position can then be obtained from the cadmium ratio of the comparator and/or the activation factor of the comparator pair. The relations determined should apply to different reactors and/or different positions within a reactor. It is shown that, for the isotopes ⁴⁶Sc, ⁵¹Cr, ⁵⁶Mn, ⁶⁰Co, ¹⁴⁰La, ¹⁴¹Ce, ¹⁵³Sm and ²³³Pa, the thermal neutron activation factors determined by these two methods were generally in agreement with theoretical values. Their I₀/σth values also appeared to agree with literature values. The methods were applied to determine the contents of the elements Sc, Cr, Mn, La, Ce, Sm, and Th in U.S.G.S. Standard Rock G-2, and the results were also in agreement with literature values. The cadmium ratio method and pair comparator method improve on the single comparator method, and they are more suitable for multi-element analysis of a large number of samples.

  5. Instrumental variable analysis as a complementary analysis in studies of adverse effects : venous thromboembolism and second-generation versus third-generation oral contraceptives

    NARCIS (Netherlands)

    Boef, Anna G C; Souverein, Patrick C|info:eu-repo/dai/nl/243074948; Vandenbroucke, Jan P; van Hylckama Vlieg, Astrid; de Boer, Anthonius|info:eu-repo/dai/nl/075097346; le Cessie, Saskia; Dekkers, Olaf M

    2016-01-01

    PURPOSE: A potentially useful role for instrumental variable (IV) analysis may be as a complementary analysis to assess the presence of confounding when studying adverse drug effects. There has been discussion on whether the observed increased risk of venous thromboembolism (VTE) for

  6. A method for automating calibration and records management for instrumentation and dosimetry

    International Nuclear Information System (INIS)

    O'Brien, J.M. Jr.; Rushton, R.O.; Burns, R.E. Jr.

    1993-01-01

    Current industry requirements are becoming more stringent on quality assurance records and documentation for calibration of instruments and dosimetry. A novel method is presented here that will allow a progressive automation scheme to be used in pursuit of that goal. This concept is based on computer-controlled irradiators that can act as stand-alone devices or be interfaced to other components via a computer local area network. In this way, complete systems can be built with modules to create a records management system to meet the needs of small laboratories or large multi-building calibration groups. Different database engines or formats can be used simply by replacing a module. Modules for temperature and pressure monitoring or shipping and receiving can be added, as well as equipment modules for direct IEEE-488 interface to electrometers and other instrumentation

  7. A method for automating calibration and records management for instrumentation and dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, J.M. Jr.; Rushton, R.O.; Burns, R.E. Jr. [Atlan-Tech, Inc., Roswell, GA (United States)

    1993-12-31

    Current industry requirements are becoming more stringent on quality assurance records and documentation for calibration of instruments and dosimetry. A novel method is presented here that will allow a progressive automation scheme to be used in pursuit of that goal. This concept is based on computer-controlled irradiators that can act as stand-alone devices or be interfaced to other components via a computer local area network. In this way, complete systems can be built with modules to create a records management system to meet the needs of small laboratories or large multi-building calibration groups. Different database engines or formats can be used simply by replacing a module. Modules for temperature and pressure monitoring or shipping and receiving can be added, as well as equipment modules for direct IEEE-488 interface to electrometers and other instrumentation.

  8. Reliability of the input admittance of bowed-string instruments measured by the hammer method.

    Science.gov (United States)

    Zhang, Ailin; Woodhouse, Jim

    2014-12-01

    The input admittance at the bridge, measured by hammer testing, is often regarded as the most useful and convenient measurement of the vibrational behavior of a bowed string instrument. However, this method has been questioned, due especially to differences between human bowing and hammer impact. The goal of the research presented here is to investigate the reliability and accuracy of this classic hammer method. Experimental studies were carried out on cellos, with three different driving conditions and three different boundary conditions. Results suggest that there is nothing fundamentally different about the hammer method, compared to other kinds of excitation. The third series of experiments offers an opportunity to explore the difference between the input admittance measuring from one bridge corner to another and that of single strings. The classic measurement is found to give a reasonable approximation to that of all four strings. Some possible differences between the hammer method and normal bowing and implications of the acoustical results are also discussed.

  9. Regularized variable metric method versus the conjugate gradient method in solution of radiative boundary design problem

    International Nuclear Information System (INIS)

    Kowsary, F.; Pooladvand, K.; Pourshaghaghy, A.

    2007-01-01

    In this paper, an appropriate distribution of the heating elements' strengths in a radiation furnace is estimated using inverse methods so that a pre-specified temperature and heat flux distribution is attained on the design surface. Minimization of the sum of the squares of the error function is performed using the variable metric method (VMM), and the results are compared with those obtained by the conjugate gradient method (CGM) established previously in the literature. It is shown via test cases and a well-founded validation procedure that the VMM, when using a 'regularized' estimator, is more accurate and able to reach a higher-quality final solution than the CGM. The test cases used in this study were two-dimensional furnaces filled with an absorbing, emitting, and scattering gas

  10. A postprocessing method in the HMC framework for predicting gene function based on biological instrumental data

    Science.gov (United States)

    Feng, Shou; Fu, Ping; Zheng, Wenbin

    2018-03-01

    Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When local-approach methods are used to solve this problem, a method for processing the preliminary results is usually needed. This paper proposes a novel preliminary-results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. In its first phase, the method exploits label dependency and considers the hierarchical interaction between nodes when making decisions based on a Bayesian network. In the second phase, the method further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances HMC performance for the gene function prediction problem based on the Gene Ontology (GO), whose hierarchy is a directed acyclic graph and therefore more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated with the GO.
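
    The hierarchy constraint enforced in the second phase — a child label's score may never exceed its parent's — can be applied top-down in a few lines. The toy DAG below illustrates only this final adjustment, not the Bayesian-network interaction phase:

```python
# Toy GO-like hierarchy: child -> list of parents (a DAG)
parents = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
scores = {"A": 0.9, "B": 0.95, "C": 0.4, "D": 0.8}   # raw classifier outputs

# Visit nodes in a topological order, capping each child at its parents' scores
for node in ["A", "B", "C", "D"]:
    for p in parents.get(node, []):
        scores[node] = min(scores[node], scores[p])

print(scores)   # {'A': 0.9, 'B': 0.9, 'C': 0.4, 'D': 0.4}
```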

  11. A moving mesh method with variable relaxation time

    OpenAIRE

    Soheili, Ali Reza; Stockie, John M.

    2006-01-01

    We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time τ is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter τ. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in orde...

  12. Instrumental variable estimation of the causal effect of plasma 25-hydroxy-vitamin D on colorectal cancer risk: a mendelian randomization analysis.

    Directory of Open Access Journals (Sweden)

    Evropi Theodoratou

    Full Text Available Vitamin D deficiency has been associated with several common diseases, including cancer, and is being investigated as a possible risk factor for these conditions. We reported the striking prevalence of vitamin D deficiency in Scotland. Previous epidemiological studies have reported an association between low dietary vitamin D and colorectal cancer (CRC). Using a case-control study design, we tested the association between plasma 25-hydroxy-vitamin D (25-OHD) and CRC (2,001 cases, 2,237 controls). To determine whether plasma 25-OHD levels are causally linked to CRC risk, we applied the control function instrumental variable (IV) method of the Mendelian randomization (MR) approach, using four single nucleotide polymorphisms (rs2282679, rs12785878, rs10741657, rs6013897) previously shown to be associated with plasma 25-OHD. Low plasma 25-OHD levels were associated with CRC risk in the crude model (odds ratio (OR): 0.76, 95% confidence interval (CI): 0.71-0.81, p: 1.4×10⁻¹⁴) and after adjusting for age, sex and other confounding factors. Using an allele score that combined all four SNPs as the IV, the estimated causal effect was OR 1.16 (95% CI 0.60, 2.23), whilst it was 0.94 (95% CI 0.46, 1.91) and 0.93 (0.53, 1.63) when using an upstream (rs12785878, rs10741657) and a downstream allele score (rs2282679, rs6013897), respectively. 25-OHD levels were inversely associated with CRC risk, in agreement with recent meta-analyses. The fact that this finding was not replicated when the MR approach was employed might be due to weak instruments, giving low power to demonstrate an effect (<0.35). The prevalence and degree of vitamin D deficiency amongst individuals living in northerly latitudes is of considerable importance because of its relationship to disease. To elucidate the effect of vitamin D on CRC cancer risk, additional large studies of vitamin D and CRC risk are required and/or the application of alternative methods that are less sensitive to weak instrument
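
    The allele-score instrument described here is, mechanically, a weighted count of 25-OHD-raising alleles. A schematic construction (the SNP columns follow the abstract, but the weights and genotypes are placeholders, not the study's estimates):

```python
import numpy as np

# Genotypes coded as 0/1/2 copies of the 25-OHD-increasing allele, one row per subject;
# columns: rs2282679, rs12785878, rs10741657, rs6013897
genotypes = np.array([[0, 1, 2, 1],
                      [1, 1, 0, 2],
                      [2, 0, 1, 1]])
weights = np.array([0.30, 0.20, 0.15, 0.10])  # placeholder per-SNP effects on 25-OHD

allele_score = genotypes @ weights            # one scalar instrument per subject
print(allele_score)                           # fed into the control-function IV stage
```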

  13. Improved retrieval of cloud base heights from ceilometer using a non-standard instrument method

    Science.gov (United States)

    Wang, Yang; Zhao, Chuanfeng; Dong, Zipeng; Li, Zhanqing; Hu, Shuzhen; Chen, Tianmeng; Tao, Fa; Wang, Yuzhao

    2018-04-01

    Cloud-base height (CBH) is a basic cloud parameter but has not been measured accurately, especially under polluted conditions, due to interference from aerosol. Taking advantage of a comprehensive field experiment in northern China in which a variety of advanced cloud-probing instruments were operated, different methods of detecting CBH are assessed. The Micro-Pulse Lidar (MPL) and the Vaisala ceilometer (CL51) provided two types of backscatter profiles. The latter has been employed widely as a standard means of measuring CBH, using the manufacturer's operational algorithm to generate standard CBH products (CL51 MAN), whose quality is rigorously assessed here in comparison with a research algorithm that we developed, named the value distribution equalization (VDE) algorithm. It was applied to the lidar backscatter profiles from both instruments. The VDE algorithm is found to produce more accurate estimates of CBH for both instruments and copes well with heavy aerosol loading conditions. By contrast, CL51 MAN overestimates CBH by 400 m and misses many low-level clouds under such conditions. These findings are important given that CL51 has been adopted operationally by many meteorological stations in China.

  14. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Smirnov, Sergey V.

    2011-01-01

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic
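
    As one concrete instance of the interval-style strategies listed above (a simplified stand-in on synthetic data, not a reimplementation of any specific published algorithm), the sketch scans fixed-width wavelength windows and scores each by cross-validated PLS error:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((60, 200))                       # 60 spectra x 200 wavelengths, synthetic
y = X[:, 50:60].sum(axis=1) + 0.05 * rng.standard_normal(60)

window = 20
scores = []
for start in range(0, X.shape[1] - window + 1, window):
    cv = cross_val_score(PLSRegression(n_components=3),
                         X[:, start:start + window], y,
                         scoring="neg_root_mean_squared_error", cv=5)
    scores.append((start, cv.mean()))

best = max(scores, key=lambda s: s[1])          # window with the lowest CV error
print(f"most informative window starts at variable {best[0]}")
```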

  15. Effect of freezing method and frozen storage duration on instrumental quality of lamb throughout display.

    Science.gov (United States)

    Muela, E; Sañudo, C; Campo, M M; Medel, I; Beltrán, J A

    2010-04-01

    This study evaluated the effect of freezing method (FM) (air blast freezer, freezing tunnel, or nitrogen chamber) and frozen storage duration (FSD) (1, 3, or 6 months) on instrumental measurements of the quality of thawed lamb, aged for a total of 72 h, throughout a 10-d display period, compared to the quality of fresh meat. pH, colour, lipid oxidation, and thawing and cooking losses in the Longissimus thoracis and lumborum muscle were determined following standard methods. FM affected yellowness; FSD affected redness and thawing losses; and both affected oxidation (which increased as freezing rate decreased and/or as storage duration increased). Compared with fresh meat, the main differences appeared in oxidation (where a significant interaction between treatment (3 FM x 3 FSD + fresh meat) and display duration was detected) and in total losses (thaw + cook losses). Oxidation was lower in fresh meat, but the values were not significantly different from those of meat stored frozen for 1 month. Fresh meat had smaller total losses than thawed meat, but the losses were not significantly different from those of meat frozen in the freezing tunnel and stored frozen for 1 month. Display duration had a greater effect on instrumental quality parameters than did FM or FSD. pH, b*, and oxidation increased, and L* and a* decreased, with an increasing number of days on display. In conclusion, neither freezing method nor frozen storage for up to 6 months extensively influenced the properties of lamb when instrumental measurements of quality were made on meat displayed for 1 d after thawing. The small deterioration shown in this study should not give consumers concerns about frozen meat. 2009 Elsevier Ltd. All rights reserved.

  16. A novel single-step, multipoint calibration method for instrumented Lab-on-Chip systems

    DEFF Research Database (Denmark)

    Pfreundt, Andrea; Patou, François; Zulfiqar, Azeem

    2014-01-01

    ...for instrument-based PoC blood biomarker analysis systems. Motivated by the complexity of associating high-accuracy biosensing using silicon nanowire field effect transistors with ease of use for the PoC system user, we propose a novel one-step, multipoint calibration method for LoC-based systems. Our approach ... specifically addresses the important interfaces between a novel microfluidic unit to integrate the sensor array and a mobile-device hardware accessory. A multi-point calibration curve is obtained by generating a defined set of reference concentrations from a single input. By consecutively splitting the flow...

  17. Application of instrumental neutron activation analysis and multivariate statistical methods to archaeological Syrian ceramics

    International Nuclear Information System (INIS)

    Bakraji, E. H.; Othman, I.; Sarhil, A.; Al-Somel, N.

    2002-01-01

    Instrumental neutron activation analysis (INAA) has been utilized in the analysis of thirty-seven archaeological ceramic fragment samples collected from the Tal Al-Wardiate site, Missiaf town, Hamma city, Syria. 36 chemical elements were determined. These elemental concentrations were processed using two multivariate statistical methods, cluster analysis and factor analysis, in order to determine similarities and correlations between the various samples. Factor analysis confirms that the samples were correctly classified by cluster analysis. The results showed that the samples can be considered to have been manufactured using three different sources of raw material. (author)
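
    A minimal sketch of the two multivariate steps named above, on synthetic concentrations (rows = sherds, columns = elements): Ward-linkage cluster analysis followed by factor analysis as a cross-check. The three simulated raw-material sources are an assumption for illustration, not the paper's data.

```python
# Sketch of the two multivariate steps described above, on synthetic data:
# hierarchical cluster analysis followed by factor analysis of element
# concentrations (rows = ceramic fragments, columns = elements).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Three hypothetical raw-material sources, 36 elements, 12 sherds each.
sources = [rng.normal(loc=m, scale=0.3, size=(12, 36)) for m in (0.0, 1.5, 3.0)]
X = StandardScaler().fit_transform(np.vstack(sources))

# Ward-linkage clustering, cut into three groups.
groups = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")
print("cluster sizes:", np.bincount(groups)[1:])

# Factor analysis as a cross-check on the cluster structure.
scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(X)
for g in (1, 2, 3):
    print(f"group {g} mean factor scores:", scores[groups == g].mean(axis=0).round(2))
```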

  18. Trace elements in cigarette tobacco by a method of instrumental neutron activation analysis

    International Nuclear Information System (INIS)

    Noordin Ibrahim

    1986-01-01

    A total of ten cigarette brands were investigated to determine the trace element concentrations in tobacco, so as to assess their role in the induction of smoking-related diseases. Instrumental neutron activation analysis was employed owing to its high sensitivity, speed and ability to analyse a sample for a wide spectrum of elements simultaneously. A total of 18 elements were detected, the majority of which are toxic elements. Full results and conclusions will be reported in a forthcoming paper. (A.J.)

  19. A Method of Separation Assurance for Instrument Flight Procedures at Non-Radar Airports

    Science.gov (United States)

    Conway, Sheila R.; Consiglio, Maria

    2002-01-01

    A method to provide automated air traffic separation assurance services during approach to or departure from a non-radar, non-towered airport environment is described. The method is constrained by provision of these services without radical changes or ambitious investments in current ground-based technologies. The proposed procedures are designed to grant access to a large number of airfields that currently have no or very limited access under Instrument Flight Rules (IFR), thus increasing mobility with minimal infrastructure investment. This paper primarily addresses a low-cost option for airport and instrument approach infrastructure, but is designed to be an architecture from which a more efficient, albeit more complex, system may be developed. A functional description of the capabilities in the current NAS infrastructure is provided. Automated terminal operations and procedures are introduced. Rules of engagement and the operations are defined. Results of preliminary simulation testing are presented. Finally, application of the method to more terminal-like operations, and major research areas, including necessary piloted studies, are discussed.

  20. Instrumental neutron activation analysis as a routine method for rock analysis

    International Nuclear Information System (INIS)

    Rosenberg, R.J.

    1977-06-01

    Instrumental neutron activation methods for the analysis of geological samples have been developed. Special emphasis has been laid on the improvement of sensitivity and accuracy in order to maximize the quality of the analyses. Furthermore, the procedures have been automated as far as possible in order to minimize the cost of the analysis. A short review of the basic literature is given, followed by a description of the principles of the method. All aspects concerning the sensitivity are discussed thoroughly in view of the analyst's possibilities of influencing them. Experimentally determined detection limits for Na, Al, K, Ca, Sc, Cr, Ti, V, Mn, Fe, Ni, Co, Rb, Zr, Sb, Cs, Ba, La, Ce, Nd, Sm, Eu, Gd, Tb, Dy, Yb, Lu, Hf, Ta, Th and U are given. The errors of the method are discussed, followed by the actions taken to avoid them. The most significant error was caused by flux deviation, but this was avoided by building a rotating sample holder for rotating the samples during irradiation. A scheme for the INAA of 32 elements is proposed. The method has been automated as far as possible, and an automatic γ-spectrometer and a computer program for the automatic calculation of the results are described. Furthermore, a completely automated uranium analyzer based on delayed neutron counting is described. The methods are discussed in view of their applicability to rock analysis. It is stated that the sensitivity varies considerably from element to element: instrumental activation analysis is an excellent method for the analysis of some specific elements like lanthanides, thorium and uranium, but less so for many other elements. The accuracy is good, varying from 2% to 10% for most elements. For most elements, instrumental activation analysis is rather an expensive method, with a few exceptions, however. The most important of these is uranium. The analysis of uranium by delayed neutron counting is an inexpensive means for the analysis of large numbers of samples needed for

  1. Design Method of Active Disturbance Rejection Variable Structure Control System

    Directory of Open Access Journals (Sweden)

    Yun-jie Wu

    2015-01-01

    Based on lines cluster approaching theory, and inspired by the traditional exponential reaching law method, a new control method, lines cluster approaching mode control (LCAMC), is designed to improve the parameter simplicity and structure optimization of the control system. The design guidelines and mathematical proofs are also given. To further improve the tracking performance and the suppression of white noise, the active disturbance rejection control (ADRC) method is combined with the LCAMC method to create the extended state observer based lines cluster approaching mode control (ESO-LCAMC) method. Taking a traditional servo control system as an example, two control schemes are constructed and two kinds of comparison are carried out. Computer simulation results show that the LCAMC method, having better tracking performance than the traditional sliding mode control (SMC) system, makes the servo system track the command signal quickly and accurately in spite of persistent equivalent disturbances, and that the ESO-LCAMC method further reduces the tracking error and filters the white noise added to the system states. Simulation results verify the robustness and comprehensive performance of the control schemes.
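
    For readers unfamiliar with the baseline idea, the sketch below simulates the traditional exponential reaching law that LCAMC is inspired by; it is not the lines-cluster law itself, and the gains are arbitrary.

```python
# Sketch of the traditional exponential reaching law that LCAMC builds on:
# s_dot = -eps*sign(s) - k*s drives the sliding variable s to zero. This
# illustrates the baseline idea only, not the lines-cluster law itself.
import numpy as np

eps, k, dt = 0.05, 2.0, 1e-3
s = 1.0
trace = [s]
for _ in range(3000):
    s = s + dt * (-eps * np.sign(s) - k * s)   # forward-Euler integration
    trace.append(s)

print(f"|s| after 1 s: {abs(trace[1000]):.4f};  after 3 s: {abs(trace[-1]):.4f}")
```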

  2. Variable-mesh method of solving differential equations

    Science.gov (United States)

    Van Wyk, R.

    1969-01-01

    Multistep predictor-corrector method for numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in step-by-step solution of differential equations.
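
    A minimal sketch of such a scheme, assuming a two-step Adams-Bashforth predictor with a trapezoidal corrector and a step-size rule driven by the predictor-corrector gap (the abstract does not specify the formulas, so these choices are illustrative):

```python
# Variable-step multistep predictor-corrector (sketch): AB2 predictor,
# trapezoidal (AM2) corrector, step size adjusted from the gap between them.
import math

def solve(f, t0, y0, t_stop, h=1e-2, tol=1e-6):
    t, y = t0, y0
    f_prev = f(t, y)
    t, y = t + h, y + h * f_prev          # bootstrap with one Euler step
    out = [(t0, y0), (t, y)]
    while t < t_stop:
        h = min(h, t_stop - t)
        f_now = f(t, y)
        # Constant-step AB2 coefficients are kept for simplicity; a full
        # variable-mesh method would adjust them to the step-size ratio.
        y_pred = y + h * (1.5 * f_now - 0.5 * f_prev)
        y_corr = y + 0.5 * h * (f_now + f(t + h, y_pred))   # AM2 corrector
        err = abs(y_corr - y_pred)        # predictor-corrector gap as error proxy
        if err > tol:                     # reject: halve the step and retry
            h *= 0.5
            continue
        t, y, f_prev = t + h, y_corr, f_now
        out.append((t, y))
        if err < tol / 10:                # comfortable margin: grow the step
            h *= 2.0
    return out

path = solve(lambda t, y: -y, 0.0, 1.0, 2.0)   # y' = -y, y(0) = 1
t_last, y_last = path[-1]
print(f"y({t_last:.1f}) = {y_last:.6f}   exact = {math.exp(-t_last):.6f}")
```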

  3. A method for retrieving endodontic or atypical nonendodontic separated instruments from the root canal: a report of two cases.

    Science.gov (United States)

    Monteiro, Jardel Camilo do Carmo; Kuga, Milton Carlos; Dantas, Andrea Abi Rached; Jordão-Basso, Keren Cristina Fagundes; Keine, Katia Cristina; Ruchaya, Prashant Jay; Faria, Gisele; Leonardo, Renato de Toledo

    2014-11-01

    This clinical report presents a new method for retrieving separated instruments from the root canal with minimally invasive procedures. The presence of a separated instrument in the root canal may interfere with the prognosis of endodontic treatment. Several methods for retrieving separated instruments have been recommended, but some are difficult in clinical practice. This study describes two cases of separated instrument removal from the root canal using a prepared stainless-steel needle in association with a K-file. Case 1 presented a gutta-percha condenser fractured within the mandibular second premolar, which separated during incorrect placement of calcium hydroxide intracanal medication. Case 2 had a sewing needle fractured within the upper central incisor, which the patient had used to remove food debris from the root canal. After cervical preparation, the fractured instruments were fitted inside a prepared needle and an endodontic instrument (#25 K-file) was adapted with a clockwise turning motion between the inner wall of the needle and the fragment. The endodontic or atypical non-endodontic separated instrument may thus be easily pulled out of the root canal using a simple, low-cost device. Most methods for retrieving separated instruments from the root canal are difficult and destructive procedures; the present report describes a simple method to solve this problem.

  4. Variable discrete ordinates method for radiation transfer in plane-parallel semi-transparent media with variable refractive index

    Science.gov (United States)

    Sarvari, S. M. Hosseini

    2017-09-01

    The traditional form of the discrete ordinates method is applied to solve the radiative transfer equation in plane-parallel semi-transparent media with variable refractive index, using variable discrete ordinate directions and the concept of refracted radiative intensity. The refractive index is taken as constant in each control volume, such that the direction cosines of the radiative rays remain invariant within each control volume; the directions of the discrete ordinates are then changed locally on passing through each control volume, according to Snell's law of refraction. The results are compared with previous studies in this field. Despite its simplicity, the results show that the variable discrete ordinates method has good accuracy in solving the radiative transfer equation in semi-transparent media with an arbitrary distribution of refractive index.

  5. A method of estimating GPS instrumental biases with a convolution algorithm

    Science.gov (United States)

    Li, Qi; Ma, Guanyi; Lu, Weijun; Wan, Qingtao; Fan, Jiangtao; Wang, Xiaolan; Li, Jinghua; Li, Changhua

    2018-03-01

    This paper presents a method of deriving the instrumental differential code biases (DCBs) of GPS satellites and dual-frequency receivers. Considering that the total electron content (TEC) varies smoothly over a small area, one ionospheric pierce point (IPP) and four more nearby IPPs were selected to build an equation with a convolution algorithm. In addition, the unknown DCB parameters were arranged into a set of equations with GPS observations in one-day units, by assuming that DCBs do not vary within a day. Then, the DCBs of satellites and receivers were determined by solving the equation set with the least-squares fitting technique. The performance of this method was examined by applying it to 361 days in 2014, using observation data from 1311 GPS Earth Observation Network (GEONET) receivers. The result was cross-compared with the DCBs estimated by the mesh method and with the IONEX products from the Center for Orbit Determination in Europe (CODE). The DCB values derived by this method agree with those of the mesh method and the CODE products, with biases of 0.091 ns and 0.321 ns, respectively. The convolution method's accuracy and stability were quite good and showed improvements over the mesh method.
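
    The least-squares step can be illustrated with a toy dense system in which each observation mixes a smooth ionospheric term with one satellite and one receiver bias; the convolution/IPP geometry of the paper is omitted, and all sizes and values are made up.

```python
# Toy version of the DCB estimation step: each observation combines a smooth
# ionospheric term with one satellite and one receiver bias, and the stacked
# equations are solved by least squares with a zero-mean datum constraint
# on the satellite biases.
import numpy as np

rng = np.random.default_rng(2)
n_sat, n_rcv, n_obs = 8, 20, 2000
true_sat = rng.normal(0, 1, n_sat); true_sat -= true_sat.mean()  # zero-mean set
true_rcv = rng.normal(0, 3, n_rcv)

sat = rng.integers(0, n_sat, n_obs)
rcv = rng.integers(0, n_rcv, n_obs)
stec = rng.uniform(5, 50, n_obs)                   # "known" smooth TEC term
obs = stec + true_sat[sat] + true_rcv[rcv] + rng.normal(0, 0.1, n_obs)

# Design matrix over [sat DCBs | rcv DCBs]; one extra row enforces
# sum(sat DCBs) = 0 to remove the rank deficiency.
A = np.zeros((n_obs + 1, n_sat + n_rcv))
A[np.arange(n_obs), sat] = 1.0
A[np.arange(n_obs), n_sat + rcv] = 1.0
A[n_obs, :n_sat] = 1.0
b = np.append(obs - stec, 0.0)

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"max satellite-DCB error: {np.abs(x[:n_sat] - true_sat).max():.4f}")
print(f"max receiver-DCB error : {np.abs(x[n_sat:] - true_rcv).max():.4f}")
```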

  6. Apparatus and method for variable angle slant hole collimator

    Science.gov (United States)

    Lee, Seung Joon; Kross, Brian J.; McKisson, John E.

    2017-07-18

    A variable angle slant hole (VASH) collimator for providing collimation of high energy photons such as gamma rays during radiological imaging of humans. The VASH collimator includes a stack of multiple collimator leaves and a means of quickly aligning each leaf to provide various projection angles. Rather than rotate the detector around the subject, the VASH collimator enables the detector to remain stationary while the projection angle of the collimator is varied for tomographic acquisition. High collimator efficiency is achieved by maintaining the leaves in accurate alignment through the various projection angles. Individual leaves include unique angled cuts to maintain a precise target collimation angle. Matching wedge blocks driven by two actuators with twin-lead screws accurately position each leaf in the stack resulting in the precise target collimation angle. A computer interface with the actuators enables precise control of the projection angle of the collimator.

  7. Combustion engine variable compression ratio apparatus and method

    Science.gov (United States)

    Lawrence,; Keith, E [Peoria, IL; Strawbridge, Bryan E [Dunlap, IL; Dutart, Charles H [Washington, IL

    2006-06-06

    An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.

  8. Automatic Recognition Method for Optical Measuring Instruments Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    SONG Le; LIN Yuchi; HAO Liguo

    2008-01-01

    Based on a comprehensive study of various algorithms, the automatic recognition of traditional ocular optical measuring instruments is realized. Taking a universal tool microscope (UTM) lens-view image as an example, a 2-layer automatic recognition model for data reading is established after adopting a series of pre-processing algorithms. This model is an optimal combination of the correlation-based template matching method and a concurrent back-propagation (BP) neural network. Multiple complementary feature extraction is used in generating the eigenvectors of the concurrent network. In order to improve fault-tolerance capacity, rotation-invariant features based on Zernike moments are extracted from digit characters and a 4-dimensional group of outline features is also obtained. Moreover, the operating time and reading accuracy can be adjusted dynamically by setting the threshold value. The experimental results indicate that the newly developed algorithm has high recognition precision and working speed; the average reading accuracy reaches 97.23%. The recognition method can automatically obtain the results of optical measuring instruments rapidly and stably without modifying their original structure, which meets the application requirements.

  9. The potential of soft computing methods in NPP instrumentation and control

    International Nuclear Information System (INIS)

    Hampel, R.; Chaker, N.; Kaestner, W.; Traichel, A.; Wagenknecht, M.; Gocht, U.

    2002-01-01

    Methods of signal processing by soft computing include the application of fuzzy logic, artificial neural networks, and evolutionary algorithms. The article contains an outline of the objectives and results of the application of fuzzy logic and methods of artificial neural networks in nuclear measurement and control. The special requirements to be met by the software in safety-related areas with respect to reliability, evaluation, and validation are described. Possible uses may be in off-line applications in modeling, simulation, and reliability analysis as well as in on-line applications (real-time systems) for instrumentation and control. Safety-related aspects of signal processing are described and analyzed for the fuzzy logic and artificial neural network concepts. Applications are covered in selected examples. (orig.)

  10. Comparison of methods and instruments for 222Rn/220Rn progeny measurement

    International Nuclear Information System (INIS)

    Liu Yanyang; Shang Bing; Wu Yunyun; Zhou Qingzhi

    2012-01-01

    In this paper, comparisons were made among three methods of measurement (grab measurement, continuous measurement and integrating measurement), and also among different instruments, in a radon/thoron mixed chamber. Taking the optimized five-segment method as the comparison criterion, for the equilibrium-equivalent concentration of 222Rn, the measured results of the BWLM and 24 h integrating detectors are 31% and 29% higher than the criterion, while the results of the WLx are 20% lower; for 220Rn progeny, the results of the Fiji-142, Kf-602D, BWLM and 24 h integrating detector are 86%, 18%, 28% and 36% higher than the criterion, respectively, except that of the WLx, which is 5% lower. For the differences shown, further research is needed. (authors)

  11. Comparison of neutron activation analysis with other instrumental methods for elemental analysis of airborne particulate matter

    International Nuclear Information System (INIS)

    Regge, P. de; Lievens, F.; Delespaul, I.; Monsecour, M.

    1976-01-01

    A comparison of instrumental methods, including neutron activation analysis, X-ray fluorescence spectrometry, atomic absorption spectrometry and emission spectrometry, for the analysis of heavy metals in airborne particulate matter is described. The merits and drawbacks of each method for the routine analysis of a large number of samples are discussed. The sample preparation technique, calibration and statistical data relevant to each method are given. Concordant results are obtained by the different methods for Co, Cu, Ni, Pb and Zn. Less good agreement is obtained for Fe, Mn and V. The results are not in agreement for the elements Cd and Cr. Using data obtained on the dust sample distributed by Euratom-ISPRA within the framework of an interlaboratory comparison, the accuracy of each method for the various elements is estimated. Neutron activation analysis was found to be the most sensitive and accurate of the non-destructive analysis methods. Only atomic absorption spectrometry has a comparable sensitivity, but requires considerable preparation work. X-ray fluorescence spectrometry is less sensitive and shows biases for Cr and V. Automatic emission spectrometry with simultaneous measurement of the beam intensities by photomultipliers is the fastest and most economical technique, though at the expense of some precision and sensitivity. (author)

  12. Fast analytical method for the addition of random variables

    International Nuclear Information System (INIS)

    Senna, V.; Milidiu, R.L.; Fleming, P.V.; Salles, M.R.; Oliveira, L.F.S.

    1983-01-01

    Using the minimal cut sets representation of a fault tree, a new approach to the method of moments is proposed in order to estimate confidence bounds on the top event probability. The method utilizes two or three moments, either to fit a distribution (the normal and lognormal families) or to evaluate bounds from standard inequalities (e.g. Markov, Tchebycheff, etc.). Examples indicate that the results obtained with the log-normal family are in good agreement with those obtained by Monte Carlo simulation.
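
    A sketch of the two-moment lognormal fit on a toy top event (taken as the sum of independent minimal-cut-set probabilities with lognormal epistemic uncertainty); the cut-set medians and error factors are invented for illustration.

```python
# Two-moment lognormal fit for a toy top event: propagate the first two
# moments of the cut-set sum, fit a lognormal to (mean, variance), and
# compare its 5%/95% bounds with Monte Carlo.
import numpy as np
from scipy import stats

medians = np.array([1e-4, 5e-5, 2e-5])     # cut-set medians (hypothetical)
efs = np.array([3.0, 5.0, 10.0])           # error factors: 95th/50th percentile
sigmas = np.log(efs) / 1.645               # lognormal sigma from error factor
mus = np.log(medians)

# Moments of each cut set, then of their sum (independence assumed).
means = np.exp(mus + sigmas**2 / 2)
variances = (np.exp(sigmas**2) - 1) * means**2
m, v = means.sum(), variances.sum()

# Fit a lognormal to (m, v) and read off 5%/95% bounds.
s2 = np.log(1 + v / m**2)
mu_fit = np.log(m) - s2 / 2
lo, hi = np.exp(mu_fit + np.sqrt(s2) * stats.norm.ppf([0.05, 0.95]))

rng = np.random.default_rng(3)
mc = rng.lognormal(mus, sigmas, size=(200_000, 3)).sum(axis=1)
print(f"moment fit : 5% {lo:.3e}  95% {hi:.3e}")
print(f"Monte Carlo: 5% {np.quantile(mc, 0.05):.3e}  95% {np.quantile(mc, 0.95):.3e}")
```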

  13. Quantifying temporal glucose variability in diabetes via continuous glucose monitoring: mathematical methods and clinical application.

    Science.gov (United States)

    Kovatchev, Boris P; Clarke, William L; Breton, Marc; Brayman, Kenneth; McCall, Anthony

    2005-12-01

    Continuous glucose monitors (CGMs) collect detailed blood glucose (BG) time series, which carry significant information about the dynamics of BG fluctuations. In contrast, the methods for analysis of CGM data remain those developed for infrequent BG self-monitoring. As a result, important information about the temporal structure of the data is lost during the translation of raw sensor readings into clinically interpretable statistics and images. The following mathematical methods are introduced into the field of CGM data interpretation: (1) analysis of BG rate of change; (2) risk analysis using previously reported Low/High BG Indices and Poincare (lag) plot of risk associated with temporal BG variability; and (3) spatial aggregation of the process of BG fluctuations and its Markov chain visualization. The clinical application of these methods is illustrated by analysis of data of a patient with Type 1 diabetes mellitus who underwent islet transplantation and with data from clinical trials. Normative data [12,025 reference (YSI device, Yellow Springs Instruments, Yellow Springs, OH) BG determinations] in patients with Type 1 diabetes mellitus who underwent insulin and glucose challenges suggest that the 90%, 95%, and 99% confidence intervals of BG rate of change that could be maximally sustained over 15-30 min are [-2,2], [-3,3], and [-4,4] mg/dL/min, respectively. BG dynamics and risk parameters clearly differentiated the stages of transplantation and the effects of medication. Aspects of treatment were clearly visualized by graphs of BG rate of change and Low/High BG Indices, by a Poincare plot of risk for rapid BG fluctuations, and by a plot of the aggregated Markov process. Advanced analysis and visualization of CGM data allow for evaluation of dynamical characteristics of diabetes and reveal clinical information that is inaccessible via standard statistics, which do not take into account the temporal structure of the data. The use of such methods improves the
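
    Two of these quantities are easy to sketch. Assuming the commonly published constants for the symmetrized BG scale of Kovatchev and colleagues (an assumption; check the primary references before use), the snippet below computes the BG rate of change and the Low/High BG Indices from a synthetic CGM trace.

```python
# BG rate of change (mg/dL/min) and Low/High BG Indices from a CGM trace.
# The symmetrization constants follow the commonly published form of the
# Kovatchev risk scale; the CGM data here are synthetic.
import numpy as np

dt_min = 5.0                                    # CGM sampling interval (min)
rng = np.random.default_rng(4)
bg = np.clip(140 + np.cumsum(rng.normal(0, 4, 288)), 40, 400)  # one day, mg/dL

rate = np.diff(bg) / dt_min                     # BG rate of change
f = 1.509 * (np.log(bg) ** 1.084 - 5.381)       # symmetrized BG scale
risk = 10.0 * f**2
lbgi = np.mean(np.where(f < 0, risk, 0.0))      # low-BG (hypoglycemia) index
hbgi = np.mean(np.where(f > 0, risk, 0.0))      # high-BG (hyperglycemia) index

print(f"rate of change 95% range: [{np.quantile(rate, 0.025):.2f}, "
      f"{np.quantile(rate, 0.975):.2f}] mg/dL/min")
print(f"LBGI={lbgi:.2f}  HBGI={hbgi:.2f}")
```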

  14. A Latent Variable Clustering Method for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Vasilev, Vladislav; Iliev, Georgi; Poulkov, Vladimir

    2016-01-01

    In this paper we derive a clustering method based on the Hidden Conditional Random Field (HCRF) model in order to maximize the performance of a wireless sensor. Our novel approach to clustering in this paper is in the application of an index invariant graph that we defined in a previous work and...

  15. Effect of corruption on healthcare satisfaction in post-Soviet nations: A cross-country instrumental variable analysis of twelve countries.

    Science.gov (United States)

    Habibov, Nazim

    2016-03-01

    There is a lack of consensus about the effect of corruption on healthcare satisfaction in transitional countries. Interpreting the burgeoning literature on this topic has proven difficult due to reverse causality and omitted variable bias. In this study, the effect of corruption on healthcare satisfaction is investigated in a set of 12 post-socialist countries using instrumental variable regression on a sample from the 2010 Life in Transition survey (N = 8655). The results indicate that experiencing corruption significantly reduces healthcare satisfaction. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
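
    The estimator behind such a study can be sketched as manual two-stage least squares on simulated data; the data-generating process below is hypothetical and chosen only to exhibit the reverse-causality/omitted-variable problem an instrument solves.

```python
# Minimal two-stage least squares sketch on simulated data, mirroring the
# design above: corruption is endogenous (shares an unobserved confounder
# with satisfaction), and an instrument shifts corruption but not
# satisfaction directly. All variables are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n = 8655
u = rng.normal(size=n)                        # unobserved confounder
z = rng.normal(size=n)                        # instrument
corruption = 0.8 * z + u + rng.normal(size=n)
satisfaction = -0.5 * corruption - u + rng.normal(size=n)  # true effect: -0.5

X = np.column_stack([np.ones(n), corruption])
Z = np.column_stack([np.ones(n), z])

# Naive OLS (biased by the confounder).
ols = np.linalg.lstsq(X, satisfaction, rcond=None)[0]

# Stage 1: project the endogenous regressor on the instrument.
stage1 = np.linalg.lstsq(Z, corruption, rcond=None)[0]
corr_hat = Z @ stage1
# Stage 2: regress the outcome on the fitted values.
X2 = np.column_stack([np.ones(n), corr_hat])
iv = np.linalg.lstsq(X2, satisfaction, rcond=None)[0]

print(f"OLS slope : {ols[1]:+.3f}  (biased)")
print(f"2SLS slope: {iv[1]:+.3f}  (close to the true -0.5)")
```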

  16. Variational method for objective analysis of scalar variable and its ...

    Indian Academy of Sciences (India)

    In this study, real-time data have been used to compare the standard and triangle methods for the objective analysis of a scalar variable.

  17. Detailed characterizations of a Comparative Reactivity Method (CRM) instrument: experiments vs. modelling

    Science.gov (United States)

    Michoud, V.; Hansen, R. F.; Locoge, N.; Stevens, P. S.; Dusanter, S.

    2015-04-01

    The hydroxyl radical (OH) is an important oxidant in the daytime troposphere that controls the lifetime of most trace gases, whose oxidation leads to the formation of harmful secondary pollutants such as ozone (O3) and Secondary Organic Aerosols (SOA). In spite of the importance of OH, uncertainties remain concerning its atmospheric budget, and integrated measurements of the total sink of OH can help reduce these uncertainties. In this context, several methods have been developed to measure the first-order loss rate of ambient OH, called total OH reactivity. Among these techniques, the Comparative Reactivity Method (CRM) is promising and has already been widely used in the field and in atmospheric simulation chambers. This technique relies on monitoring competitive OH reactions between a reference molecule (pyrrole) and compounds present in ambient air inside a sampling reactor. However, artefacts and interferences exist for this method and a thorough characterization of the CRM technique is needed. In this study, we present a detailed characterization of a CRM instrument, assessing the corrections that need to be applied to ambient measurements. The main corrections are, in the order of their integration in the data processing: (1) a correction for a change in relative humidity between zero air and ambient air, (2) a correction for the formation of spurious OH when artificially produced HO2 reacts with NO in the sampling reactor, and (3) a correction for a deviation from pseudo first-order kinetics. The dependences of these artefacts on various measurable parameters, such as the pyrrole-to-OH ratio or the bimolecular reaction rate constants of ambient trace gases with OH, are also studied. From these dependences, parameterizations are proposed to correct the OH reactivity measurements for the abovementioned artefacts. A comparison of experimental and simulation results is then discussed. The simulations were performed using a 0-D box model including either (1) a
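
    For orientation, the basic CRM retrieval that these corrections refine can be sketched as follows, using the pseudo-first-order CRM expression commonly reported in the CRM literature (an assumption here, since the abstract does not restate it); all concentrations below are illustrative.

```python
# Basic CRM retrieval (sketch). C1 is the pyrrole concentration without OH,
# C2 with OH in zero air, C3 with OH in ambient air (molecules cm^-3), and
# k_pyr the pyrrole + OH rate constant. The numbers are illustrative only.
def crm_reactivity(c1, c2, c3, k_pyr=1.2e-10):
    """Ambient OH reactivity (s^-1) from the three CRM measurement stages."""
    return (c3 - c2) / (c1 - c3) * k_pyr * c1

ppb = 2.46e10                 # molecules cm^-3 per ppbv at ~1 atm, 298 K
print(f"R_OH = {crm_reactivity(60 * ppb, 25 * ppb, 35 * ppb):.1f} s^-1")
```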

  18. Using method triangulation to validate a new instrument (CPWQ-com) assessing cancer patients' satisfaction with communication

    DEFF Research Database (Denmark)

    Ross, Lone; Lundstrøm, Louise Hyldborg; Petersen, Morten Aagaard

    2012-01-01

    Patients' perceptions of care including the communication with health care staff is recognized as an important aspect of the quality of cancer care. Using mixed methods, we developed and validated a short instrument assessing this communication.

  19. Methods and instrumentation for investigating Hall sensors during their irradiation in nuclear research reactors

    International Nuclear Information System (INIS)

    Bolshakova, I.; Holyaka, R.; Makido, E.; Marusenkov, A.; Shurygin, F.; Yerashok, V.; Moreau, P. J.; Vayakis, G.; Duran, I.; Stockel, J.; Chekanov, V.; Konopleva, R.; Nazarkin, I.; Kulikov, S.; Leroy, C.

    2009-01-01

    The present work discusses the issues of creating instrumentation for testing semiconductor magnetic field sensors during their irradiation with neutrons in nuclear reactors, up to fluences similar to the neutron fluences at steady-state sensor locations in ITER. The novelty of the work consists in the Hall sensor parameters being investigated, first, directly during the irradiation (in real time) and, second, at high irradiation levels (fast neutron fluence > 10¹⁸ n/cm²). The developed instrumentation has been successfully tested and applied in research experiments on the radiation stability of magnetic sensors in the IBR-2 (JINR, Dubna) and VVR-M (PNPI, Saint-Petersburg) reactors. The 'Remote-Rad' bench consists of two heads (head 1 and head 2) bearing the investigated sensors in a ceramic setting, an electronic unit, a personal computer and signal lines. Each head contains 6 Hall sensors and a coil for generating a test magnetic field. Moreover, head 1 contains thermocouples for temperature measurement, while the temperature of head 2 is measured by a thermo-resistive method. The heads are placed in the reactor channel

  20. Optimal management strategies in variable environments: Stochastic optimal control methods

    Science.gov (United States)

    Williams, B.K.

    1985-01-01

    Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both
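
    The solution machinery referenced above (a finite-state, finite-action, infinite-horizon discounted Markov decision process) reduces to value iteration; the sketch below uses random stand-ins for the shrub transition and yield model, so the numbers are illustrative only.

```python
# Value iteration for a small finite-state, finite-action, discounted MDP.
# States, actions, transition probabilities and yields are illustrative
# stand-ins for the shrub vigor/defoliation model described above.
import numpy as np

n_states, n_actions, gamma = 4, 3, 0.95       # vigor classes, defoliation levels
rng = np.random.default_rng(6)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
R = rng.uniform(0, 1, size=(n_actions, n_states))                 # yield R[a, s]

V = np.zeros(n_states)
while True:
    Q = R + gamma * P @ V                     # Q[a, s] = R[a, s] + g * sum_s' P V
    V_new = Q.max(axis=0)                     # greedy backup over actions
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

policy = Q.argmax(axis=0)
print("optimal defoliation action per state:", policy)
print("state values:", V.round(3))
```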

  1. Methods for Analyzing Electric Load Shape and its Variability

    Energy Technology Data Exchange (ETDEWEB)

    Price, Philip

    2010-05-12

    Current methods of summarizing and analyzing electric load shape are discussed briefly and compared. Simple rules of thumb for graphical display of load shapes are suggested. We propose a set of parameters that quantitatively describe the load shape in many buildings. Using the example of a linear regression model to predict load shape from time and temperature, we show how quantities such as the load's sensitivity to outdoor temperature, and the effectiveness of demand response (DR), can be quantified. Examples are presented using real building data.
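
    A minimal version of the regression example mentioned above: hour-of-week dummies plus a cooling-degree term, whose coefficient quantifies the load's temperature sensitivity. The interval data below are synthetic, and the 18 C balance point is an assumption.

```python
# Load-shape regression sketch: predict hourly load from hour-of-week
# indicators plus a cooling-degree term, so the temperature coefficient
# measures weather sensitivity. Data are synthetic interval-meter stand-ins.
import numpy as np

rng = np.random.default_rng(7)
hours = np.arange(24 * 7 * 8)                 # eight weeks of hourly data
how = hours % (24 * 7)                        # hour-of-week index
temp = 15 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
load = (50 + 20 * ((how % 24 > 8) & (how % 24 < 18))   # occupied-hours base
        + 1.5 * np.maximum(temp - 18, 0)               # true cooling response
        + rng.normal(0, 3, hours.size))

# Design: one dummy per hour-of-week plus a cooling-degree term.
Xd = np.zeros((hours.size, 24 * 7))
Xd[np.arange(hours.size), how] = 1.0
X = np.column_stack([Xd, np.maximum(temp - 18, 0)])

beta = np.linalg.lstsq(X, load, rcond=None)[0]
print(f"estimated cooling sensitivity: {beta[-1]:.2f} kW per degree above 18 C "
      f"(true 1.50)")
```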

  2. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    International Nuclear Information System (INIS)

    Xu, Jin; Yu, Yaming; Van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete; Connors, Alanna; Meng, Xiao-Li

    2014-01-01

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
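
    The principal-component representation of the effective area can be sketched as follows, with a synthetic ensemble of calibration curves standing in for a real telescope calibration product.

```python
# PCA representation of calibration uncertainty (sketch): compress an
# ensemble of effective-area curves into a few components, then draw new
# plausible curves from the low-dimensional representation. The ensemble
# here is synthetic, not a real calibration product.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
energy = np.linspace(0.3, 7.0, 300)                         # keV grid
nominal = 400 * np.exp(-0.5 * ((energy - 1.5) / 1.2) ** 2)  # cm^2, made up

# 1000 simulated calibration realizations: smooth multiplicative wiggles.
phases = rng.uniform(0, 2 * np.pi, size=(1000, 1))
amps = rng.normal(0, 0.03, size=(1000, 1))
ensemble = nominal * (1 + amps * np.sin(energy / 0.7 + phases))

pca = PCA(n_components=5).fit(ensemble)
print("variance explained by 5 components:",
      round(pca.explained_variance_ratio_.sum(), 4))

# Draw a new plausible curve from the component space.
coeffs = rng.normal(0, 1, 5) * np.sqrt(pca.explained_variance_)
new_curve = pca.mean_ + coeffs @ pca.components_
print("example draw, peak effective area:", round(new_curve.max(), 1), "cm^2")
```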

  3. MODERN INSTRUMENTAL METHODS TO CONTROL THE SEED QUALITY IN ROOT VEGETABLES

    Directory of Open Access Journals (Sweden)

    F. B. Musaev

    2017-01-01

    Standard methods of analysis do not meet all modern requirements for determining seed quality: they cannot reveal internal defects that are very important for assessing seed viability. This article considers the capabilities of a new instrumental method for analyzing the seed quality of root vegetables. The method of micro-focus radiography is distinguished from other existing methods by its greater sensitivity, rapidity and ease of use. Of practical importance, the visualization of the inner seed structure allows determination, well before germination, of the degree of endosperm and embryo development, the presence of internal damage and infection, and infestation and damage caused by pests. The use of micro-focus radiography makes it possible to detect differences in seed quality for traits such as monogermity and self-fertilization that are economically valuable for breeding programmes in red beet. With the aid of the method, the level of seed development, damage and internal defects in carrot and parsnip can be revealed. In X-ray projection, seeds of inbred radish lines differed significantly from those of variety populations in the underdevelopment of their inner structure. An advantage of the method is that seeds remain undamaged after quality analysis and can be used for further examination with other methods or be sown, which is quite important for breeders handling small quantities of material or collectable plant-breeding material. The results of radiographic analyses can be saved and archived, enabling seed quality to be followed over time; these data can also be used in possible arbitration cases.

  4. A Contribution To The Development And Analysis Of A Combined Current-Voltage Instrument Transformer By Using Modern CAD Methods

    International Nuclear Information System (INIS)

    Chundeva-Blajer, Marija M.

    2004-01-01

    The principal aim and task of the thesis is the analysis and development of a 20 kV combined current-voltage instrument transformer (CCVIT) using modern CAD techniques. The CCVIT is a complex electromagnetic system comprising four windings and two magnetic cores in one insulation housing, for simultaneous transformation of high voltages and currents to values measurable by standard instruments. Analytical design methods can be applied to simple electromagnetic configurations, which is not the case with the CCVIT: there is mutual electromagnetic influence between the voltage measurement core (VMC) and the current measurement core (CMC). After the analytical CCVIT design had been done, exact determination of its metrological characteristics was accomplished using the numerical finite element method implemented in the FEM-3D program package. The FEM-3D calculation is made in 19 cross-sectional layers along the z-axis of the three-dimensional CCVIT domain. By applying FEM-3D, the three-dimensional CCVIT magnetic field distribution is derived. This is the basis for calculating the initial metrological characteristics of the CCVIT (the VMC is accuracy class 3 and the CMC is accuracy class 1). By using a stochastic optimization technique based on a genetic algorithm, the optimal CCVIT design is achieved. The objective function is the minimum of the metrological parameters (VMC voltage error and CMC current error). There are 11 independent input variables in the optimization process, from which the optimal project is derived. The optimal project is adapted for realization of a prototype, and the optimized project is derived. A full comparative analysis of the metrological and electromagnetic characteristics of the three projects is accomplished. By application of the program package MATLAB/SIMULINK, the CCVIT transient phenomena are analyzed for different regimes in the three design projects. In the Instrument Transformer Factory of EMO A. D.-Ohrid a CCVIT

  5. The Variability and Evaluation Method of Recycled Concrete Aggregate Properties

    Directory of Open Access Journals (Sweden)

    Zhiqing Zhang

    2017-01-01

    Even with the same sources and regeneration techniques, the properties of recycled aggregate (RA) may display large variations, and sets with the same value of a single property index may differ greatly in overall property. How, then, can the overall property of RA be evaluated accurately? Eight groups of RAs from pavement and building sources were used to investigate a method for evaluating the holistic characteristics of RA. After testing and investigation, the parameters of the aggregates were analyzed. The data on physical and mechanical properties show distinct dispersion and instability; thus, it is difficult to express the whole characteristics by any single property parameter. The Euclidean distance can express the similarity of samples: the closer the distance, the more similar the property. The standard deviation of the whole-property Euclidean distances for the two types of RA is Sk = 7.341 and Sk = 2.208, respectively, which shows that the property of building RA fluctuates greatly, while pavement RA is more stable. There are certain correlations among the apparent density, water absorption, and crushed value of RAs, and the Mahalanobis distance method can directly evaluate the whole property by using its parameters (mean, variance, and covariance) and can provide a grade evaluation model for RAs.
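
    A sketch of the Mahalanobis-distance evaluation on the three correlated indices named above (apparent density, water absorption, crushed value); the sample values are invented, not the paper's measurements.

```python
# Mahalanobis distances of RA samples from the group mean, using the
# covariance of three correlated indices. All values are illustrative.
import numpy as np

rng = np.random.default_rng(9)
# Columns: apparent density (kg/m^3), water absorption (%), crushed value (%).
mean_true = np.array([2450.0, 5.0, 18.0])
cov_true = np.array([[900.0, -6.0, -30.0],
                     [-6.0, 1.0, 1.2],
                     [-30.0, 1.2, 9.0]])
X = rng.multivariate_normal(mean_true, cov_true, size=40)

mu = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
# d_i = sqrt((x_i - mu)^T S^-1 (x_i - mu)) for each sample i.
d = np.sqrt(np.einsum("ij,jk,ik->i", X - mu, S_inv, X - mu))

print("Mahalanobis distances, first five samples:", d[:5].round(2))
print("most atypical sample index:", int(d.argmax()))
```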

  6. Quality evaluation of fish and other seafood by traditional and nondestructive instrumental methods: Advantages and limitations.

    Science.gov (United States)

    Hassoun, Abdo; Karoui, Romdhane

    2017-06-13

    Although they are among the most vulnerable and perishable of products, fish and other seafoods provide a wide range of health-promoting compounds. Recently, the growing interest of consumers in food quality and safety issues has contributed to an increasing demand for sensitive and rapid analytical technologies. Several traditional physicochemical, textural, sensory, and electrical methods have been used to evaluate the freshness and authentication of fish and other seafood products. Despite the importance of these standard methods, they are expensive and time-consuming, and often susceptible to large sources of variation. Recently, spectroscopic methods and other emerging techniques have shown great potential due to their speed of analysis, minimal sample preparation, high repeatability, low cost and, most of all, the fact that these techniques are noninvasive and nondestructive and, therefore, could be applied in any online monitoring system. This review first briefly describes the basic principles of multivariate data analysis, followed by the most common traditional methods used for the determination of the freshness and authenticity of fish and other seafood products. A special focus is put on the use of rapid and nondestructive techniques (spectroscopic techniques and instrumental sensors) to address several issues related to the quality of these products. Moreover, the advantages and limitations of each technique are reviewed and some perspectives are given.

  7. Comparing daily temperature averaging methods: the role of surface and atmosphere variables in determining spatial and seasonal variability

    Science.gov (United States)

    Bernhardt, Jase; Carleton, Andrew M.

    2018-05-01

    The two main methods for determining the average daily near-surface air temperature, twice-daily averaging (i.e., [Tmax+Tmin]/2) and hourly averaging (i.e., the average of 24 hourly temperature measurements), typically show differences associated with the asymmetry of the daily temperature curve. To quantify the relative influence of several land surface and atmosphere variables on the two temperature averaging methods, we correlate data for 215 weather stations across the Contiguous United States (CONUS) for the period 1981-2010 with the differences between the two temperature-averaging methods. The variables are land use-land cover (LULC) type, soil moisture, snow cover, cloud cover, atmospheric moisture (i.e., specific humidity, dew point temperature), and precipitation. Multiple linear regression models explain the spatial and monthly variations in the difference between the two temperature-averaging methods. We find statistically significant correlations between both the land surface and atmosphere variables studied with the difference between temperature-averaging methods, especially for the extreme (i.e., summer, winter) seasons (adjusted R2 > 0.50). Models considering stations with certain LULC types, particularly forest and developed land, have adjusted R2 values > 0.70, indicating that both surface and atmosphere variables control the daily temperature curve and its asymmetry. This study improves our understanding of the role of surface and near-surface conditions in modifying thermal climates of the CONUS for a wide range of environments, and their likely importance as anthropogenic forcings—notably LULC changes and greenhouse gas emissions—continues.
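
    The two averaging methods are simple to state in code; on any asymmetric diurnal curve they differ, as the synthetic day below shows.

```python
# The two daily temperature averaging methods compared above, computed on
# one synthetic day of hourly temperatures with an asymmetric diurnal curve
# (the asymmetry is what drives the difference between the methods).
import numpy as np

hours = np.arange(24)
# Asymmetric diurnal curve: fast morning warm-up, slow evening cool-down.
temp = 15 + 8 * np.sin(np.pi * ((hours - 6) % 24) / 18) ** 2

twice_daily = (temp.max() + temp.min()) / 2   # (Tmax + Tmin) / 2
hourly = temp.mean()                          # mean of 24 hourly values

print(f"twice-daily average: {twice_daily:.2f} C")
print(f"hourly average     : {hourly:.2f} C")
print(f"difference         : {twice_daily - hourly:+.2f} C")
```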

  8. Partial differential equations with variable exponents variational methods and qualitative analysis

    CERN Document Server

    Radulescu, Vicentiu D

    2015-01-01

    Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis provides researchers and graduate students with a thorough introduction to the theory of nonlinear partial differential equations (PDEs) with a variable exponent, particularly those of elliptic type. The book presents the most important variational methods for elliptic PDEs described by nonhomogeneous differential operators and containing one or more power-type nonlinearities with a variable exponent. The authors give a systematic treatment of the basic mathematical theory and constructive meth

  9. Microwave measurement of electrical fields in different media – principles, methods and instrumentation

    International Nuclear Information System (INIS)

    Dankov, Plamen I (Sofia University St. Kliment Ohridski, Faculty of Physics, James Bourchier blvd., Sofia 1164, Bulgaria)

    2014-01-01

    This paper, presented in the frame of the 4th International Workshop and Summer School on Plasma Physics (IWSSPP'2010, Kiten, Bulgaria), is a brief review of the principles, methods and instrumentation of microwave measurements of electrical fields in different media. The main part of the paper describes the basic features of many field sensors and antennas – narrowband, broadband and ultra-wideband, miniaturized, reconfigurable and active sensors, etc. The main features and the applicability of these sensors for the determination of electric fields in different media are discussed. The last part of the paper presents the basic principles of utilizing electromagnetic 3-D simulators for E-field measurement purposes. Two illustrative examples are given – the determination of the dielectric anisotropy of multi-layer materials and a discussion of the selectivity of the hairpin probe for determining the electron density in dense gaseous plasmas.

  10. [Research on fractal tones generating method for tinnitus rehabilitation based on musical instrument digital interface technology].

    Science.gov (United States)

    Wang, Lu; He, Peiyu; Pan, Fan

    2014-08-01

    Tinnitus is a subjective sensation of sound without external stimulation. It has become ubiquitous and has therefore aroused much attention in recent years. According to surveys, ameliorating tinnitus with special music and reducing stress have good therapeutic effects, helping to break the vicious cycle between tinnitus and negative emotions. However, tinnitus therapy has been restricted by the use of looping music. Therefore, a method of generating fractal tones based on musical instrument digital interface (MIDI) technology and pink noise is proposed in this paper. The experimental results showed that the fractal fragments were self-similar but not exact repetitions, with no sudden changes in pitch, and should have reference value for tinnitus therapy.
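
    The generation idea can be sketched as a 1/f ("pink") sequence, here produced with the Voss-McCartney algorithm and quantized onto a scale as MIDI note numbers; this illustrates the fractal-tone principle only and is not the paper's MIDI implementation. The scale choice is arbitrary.

```python
# Fractal (1/f) tone sequence sketch: Voss-McCartney pink noise mapped to
# MIDI note numbers on a pentatonic scale. Illustrative only.
import numpy as np

def voss_pink(n_steps, n_sources=8, seed=10):
    """Pink-ish noise: source k is re-drawn every 2**k steps, then summed."""
    rng = np.random.default_rng(seed)
    sources = rng.normal(size=n_sources)
    out = np.empty(n_steps)
    for t in range(n_steps):
        for k in range(n_sources):
            if t % (2 ** k) == 0:
                sources[k] = rng.normal()
        out[t] = sources.sum()
    return out

scale = np.array([60, 62, 65, 67, 70])        # C pentatonic-ish MIDI notes
pink = voss_pink(32)
# Map the continuous sequence onto scale degrees across +/- 1 octave.
idx = np.clip(((pink - pink.min()) / np.ptp(pink) * 14).astype(int), 0, 14)
notes = scale[idx % 5] + 12 * (idx // 5 - 1)
print("MIDI note sequence:", notes.tolist())
```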

  11. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  12. Comparison of fine particle measurements from a direct-reading instrument and a gravimetric sampling method.

    Science.gov (United States)

    Kim, Jee Young; Magari, Shannon R; Herrick, Robert F; Smith, Thomas J; Christiani, David C

    2004-11-01

    Particulate air pollution, specifically the fine particle fraction (PM2.5), has been associated with increased cardiopulmonary morbidity and mortality in general population studies. Occupational exposure to fine particulate matter can exceed ambient levels by a large factor. Due to increased interest in the health effects of particulate matter, many particle sampling methods have been developed. In this study, two such measurement methods were used simultaneously and compared. PM2.5 was sampled using a filter-based gravimetric sampling method and a direct-reading instrument, the TSI Inc. model 8520 DUSTTRAK aerosol monitor. Both sampling methods were used to determine the PM2.5 exposure in a group of boilermakers exposed to welding fumes and residual fuel oil ash. The geometric mean PM2.5 concentration was 0.30 mg/m3 (GSD 3.25) and 0.31 mg/m3 (GSD 2.90) from the DUSTTRAK and gravimetric method, respectively. The Spearman rank correlation coefficient for the gravimetric and DUSTTRAK PM2.5 concentrations was 0.68. Linear regression models indicated that loge DUSTTRAK PM2.5 concentrations significantly predicted loge gravimetric PM2.5 concentrations. The relationship between the DUSTTRAK and gravimetric PM2.5 concentrations was found to be modified by surrogate measures for seasonal variation and type of aerosol. PM2.5 measurements from the DUSTTRAK are well correlated and highly predictive of measurements from the gravimetric sampling method for the aerosols in these work environments. However, results from this study suggest that aerosol particle characteristics may affect the relationship between the gravimetric and DUSTTRAK PM2.5 measurements. Recalibration of the DUSTTRAK for the specific aerosol, as recommended by the manufacturer, may be necessary to produce valid measures of airborne particulate matter.

  13. Eddy Covariance Method for CO2 Emission Measurements: CCS Applications, Principles, Instrumentation and Software

    Science.gov (United States)

    Burba, George; Madsen, Rod; Feese, Kristin

    2013-04-01

    The Eddy Covariance method is a micrometeorological technique for direct high-speed measurements of the transport of gases, heat, and momentum between the earth's surface and the atmosphere. Gas fluxes, emission and exchange rates are carefully characterized from single-point in-situ measurements using permanent or mobile towers, or moving platforms such as automobiles, helicopters, airplanes, etc. Since the early 1990s, this technique has been widely used by micrometeorologists across the globe for quantifying CO2 emission rates from various natural, urban and agricultural ecosystems [1,2], including areas of agricultural carbon sequestration. Presently, over 600 eddy covariance stations are in operation in over 120 countries. In the last 3-5 years, advancements in instrumentation and software have reached the point when they can be effectively used outside the area of micrometeorology, and can prove valuable for geological carbon capture and sequestration, landfill emission measurements, high-precision agriculture and other non-micrometeorological industrial and regulatory applications. In the field of geological carbon capture and sequestration, the magnitude of CO2 seepage fluxes depends on a variety of factors. Emerging projects utilize eddy covariance measurement to monitor large areas where CO2 may escape from the subsurface, to detect and quantify CO2 leakage, and to assure the efficiency of CO2 geological storage [3,4,5,6,7,8]. Although Eddy Covariance is one of the most direct and defensible ways to measure and calculate turbulent fluxes, the method is mathematically complex, and requires careful setup, execution and data processing tailor-fit to a specific site and a project. With this in mind, step-by-step instructions were created to introduce a novice to the conventional Eddy Covariance technique [9], and to assist in further understanding the method through more advanced references such as graduate-level textbooks, flux networks guidelines, journals
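
    At its core, the technique computes the flux as the time-averaged covariance of fluctuations in vertical wind speed (w') and gas concentration (c') over an averaging block; the sketch below does this on synthetic stand-ins for 10 Hz sonic anemometer and gas analyzer records, leaving out the many corrections a real workflow applies.

```python
# Core eddy covariance calculation: Reynolds decomposition, then the
# time-averaged product of fluctuations w'c' over a 30-minute block.
# Data are synthetic stand-ins for 10 Hz field records.
import numpy as np

rng = np.random.default_rng(11)
fs, minutes = 10, 30                          # 10 Hz, 30-minute block
n = fs * 60 * minutes
w = 0.3 * rng.normal(size=n)                  # vertical wind (m/s)
c = 400 + 2.0 * rng.normal(size=n) + 0.5 * w  # CO2 (umol/m^3), correlated with w

# Subtract block means, then average the product of the fluctuations.
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)             # umol m^-2 s^-1

print(f"CO2 flux over the block: {flux:.3f} umol m^-2 s^-1")
```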

  14. Instrumentation and methods evaluations for shallow land burial of waste materials: water erosion

    International Nuclear Information System (INIS)

    Hostetler, D.D.; Murphy, E.M.; Childs, S.W.

    1981-08-01

    The erosion of geologic materials by water at shallow-land hazardous waste disposal sites can compromise waste containment. Erosion of protective soil from these sites may enhance waste transport to the biosphere through water, air, and biologic pathways. The purpose of this study was to review current methods of evaluating soil erosion and to recommend methods for use at shallow-land, hazardous waste burial sites. The basic principles of erosion control are: minimize raindrop impact on the soil surface; minimize runoff quantity; minimize runoff velocity; and maximize the soil's resistance to erosion. Generally soil erosion can be controlled when these principles are successfully applied at waste disposal sites. However, these erosion control practices may jeopardize waste containment. Typical erosion control practices may enhance waste transport by increasing subsurface moisture movement and biologic uptake of hazardous wastes. A two part monitoring program is recommended for US Department of Energy (DOE) hazardous waste disposal sites. The monitoring programs and associated measurement methods are designed to provide baseline data permitting analysis and prediction of long term erosion hazards at disposal sites. These two monitoring programs are: (1) site reconnaissance and tracking; and (2) site instrumentation. Some potential waste transport problems arising from erosion control practices are identified. This report summarizes current literature regarding water erosion prediction and control

  15. Development of a localized probabilistic sensitivity method to determine random variable regional importance

    International Nuclear Information System (INIS)

    Millwater, Harry; Singh, Gulshan; Cortina, Miguel

    2012-01-01

    There are many methods to identify the important variable out of a set of random variables, i.e., “inter-variable” importance; however, to date there are no comparable methods to identify the “region” of importance within a random variable, i.e., “intra-variable” importance. Knowledge of the critical region of an input random variable (tail, near-tail, and central region) can provide valuable information towards characterizing, understanding, and improving a model through additional modeling or testing. As a result, an intra-variable probabilistic sensitivity method was developed and demonstrated for independent random variables that computes the partial derivative of a probabilistic response with respect to a localized perturbation in the CDF values of each random variable. These sensitivities are then normalized in absolute value with respect to the largest sensitivity within a distribution to indicate the region of importance. The methodology is implemented using the Score Function kernel-based method such that existing samples can be used to compute sensitivities for negligible cost. Numerical examples demonstrate the accuracy of the method through comparisons with finite difference and numerical integration quadrature estimates. - Highlights: ► Probabilistic sensitivity methodology. ► Determines the “region” of importance within random variables such as left tail, near tail, center, right tail, etc. ► Uses the Score Function approach to reuse the samples, hence, negligible cost. ► No restrictions on the random variable types or limit states.

  16. Emission quantification using the tracer gas dispersion method: The influence of instrument, tracer gas species and source simulation

    DEFF Research Database (Denmark)

    Delre, Antonio; Mønster, Jacob; Samuelsson, Jerker

    2018-01-01

    The tracer gas dispersion method (TDM) is a remote sensing method used for quantifying fugitive emissions by relying on the controlled release of a tracer gas at the source, combined with concentration measurements of the tracer and target gas plumes. The TDM was tested at a wastewater treatment plant for plant-integrated methane emission quantification, using four analytical instruments simultaneously and four different tracer gases. Measurements performed using a combination of an analytical instrument and a tracer gas, with a high ratio between the tracer gas release rate and instrument precision (a high release-precision ratio), resulted in well-defined plumes with a high signal-to-noise ratio and a high methane-to-tracer gas correlation factor. Measured methane emission rates differed by up to 18% from the mean value when measurements were performed using seven different instrument...

  17. A method and instruments to identify the torque, the power and the efficiency of an internal combustion engine of a wheeled vehicle

    Science.gov (United States)

    Egorov, A. V.; Kozlov, K. E.; Belogusev, V. N.

    2018-01-01

    In this paper, we propose a new method and instruments to identify the torque, the power, and the efficiency of the internal combustion engine of a wheeled vehicle in transient conditions. In contrast to the commonly used non-demounting methods based on inertia and strain-gauge dynamometers, this method allows the main performance parameters of internal combustion engines to be monitored in transient conditions without the inaccuracy caused by torque losses in the transfer to the driving wheels, where existing methods measure the torque. In addition, the proposed method is easy to implement and does not use strain-measurement instruments, which cannot capture rapidly varying values of the measured parameters and therefore cannot take the actual parameters into account when engineering wheeled vehicles. The use of this method can thus greatly improve measurement accuracy and reduce the cost and labour of testing internal combustion engines. The results of experiments showed the applicability of the proposed method for identifying the performance parameters of internal combustion engines. The most suitable transmission ratio for use with the proposed method was also determined.

  18. Statistical methods and regression analysis of stratospheric ozone and meteorological variables in Isfahan

    Science.gov (United States)

    Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.

    2008-04-01

    Data on seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 were analyzed jointly using both methods. For all periods, the temperature-dependent variables were highly correlated with each other, but all were negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values using the meteorological variables as predictors. A variable selection method based on high loadings on varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model. In 1999, 2001 and 2002, the ozone concentrations were weakly but predominantly related to one of the meteorological variables; for the year 2000, however, the model indicated no predominant relationship, which points to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.

  19. Proceedings of a workshop on methods for neutron scattering instrumentation design

    International Nuclear Information System (INIS)

    Hjelm, R.P.

    1997-09-01

    The future of neutron and x-ray scattering instrument development and international cooperation was the focus of the workshop. The international gathering of about 50 participants representing 15 national facilities, universities and corporations featured oral presentations, posters, discussions and demonstrations. Participants looked at a number of issues concerning neutron scattering instruments and the tools used in instrument design. Objectives included: (1) determining the needs of the neutron scattering community in instrument design computer code and information sharing to aid future instrument development, (2) providing for a means of training scientists in neutron scattering and neutron instrument techniques, (3) facilitating the involvement of other scientists in determining the characteristics of new instruments that meet future scientific objectives, and (4) fostering international cooperation in meeting these needs. The scope of the meeting included: (1) a review of x-ray scattering instrument design tools, (2) a look at the present status of neutron scattering instrument design tools and models of neutron optical elements, and (3) discussions of the present and future needs of the neutron scattering community. Selected papers were abstracted separately for inclusion in the Energy Science and Technology Database.

  20. Proceedings of a workshop on methods for neutron scattering instrumentation design

    Energy Technology Data Exchange (ETDEWEB)

    Hjelm, R.P. [ed.] [Los Alamos National Lab., NM (United States)

    1997-09-01

    The future of neutron and x-ray scattering instrument development and international cooperation was the focus of the workshop. The international gathering of about 50 participants representing 15 national facilities, universities and corporations featured oral presentations, posters, discussions and demonstrations. Participants looked at a number of issues concerning neutron scattering instruments and the tools used in instrument design. Objectives included: (1) determining the needs of the neutron scattering community in instrument design computer code and information sharing to aid future instrument development, (2) providing for a means of training scientists in neutron scattering and neutron instrument techniques, (3) facilitating the involvement of other scientists in determining the characteristics of new instruments that meet future scientific objectives, and (4) fostering international cooperation in meeting these needs. The scope of the meeting included: (1) a review of x-ray scattering instrument design tools, (2) a look at the present status of neutron scattering instrument design tools and models of neutron optical elements, and (3) discussions of the present and future needs of the neutron scattering community. Selected papers were abstracted separately for inclusion in the Energy Science and Technology Database.

  1. The control variable method: a fully implicit numerical method for solving conservation equations for unsteady multidimensional fluid flow

    International Nuclear Information System (INIS)

    Le Coq, G.; Boudsocq, G.; Raymond, P.

    1983-03-01

    The Control Variable Method is extended to multidimensional fluid flow transient computations. In this paper the basic principles of the method are given. The method uses a fully implicit space discretization and is based on the decomposition of the momentum flux tensor into scalar, vectorial, and tensorial terms. Finally, some computations of viscous-driven and buoyancy-driven flow in a cavity are presented

  2. Variable selection methods in PLS regression - a comparison study on metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomics approach. Due to the high number of variables in the data sets (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different sets of metabolomics data need to be related. Variable selection (or removal of irrelevant variables) ... different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with jack-knifing [2] was applied to the data in order to achieve variable selection prior ... References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when...

  3. System and method of modulating electrical signals using photoconductive wide bandgap semiconductors as variable resistors

    Science.gov (United States)

    Harris, John Richardson; Caporaso, George J; Sampayan, Stephen E

    2013-10-22

    A system and method for producing modulated electrical signals. The system uses a variable resistor having a photoconductive wide bandgap semiconductor material construction whose conduction response to changes in amplitude of incident radiation is substantially linear throughout a non-saturation region to enable operation in non-avalanche mode. The system also includes a modulated radiation source, such as a modulated laser, for producing amplitude-modulated radiation with which to direct upon the variable resistor and modulate its conduction response. A voltage source and an output port are both operably connected to the variable resistor so that an electrical signal may be produced at the output port by way of the variable resistor, either generated by activation of the variable resistor or propagating through the variable resistor. In this manner, the electrical signal is modulated by the variable resistor so as to have a waveform substantially similar to the amplitude-modulated radiation.

  4. Cluster cosmological analysis with X-ray instrumental observables: introduction and testing of the AsPIX method

    International Nuclear Information System (INIS)

    Valotti, Andrea

    2016-01-01

    Cosmology is one of the fundamental pillars of astrophysics; as such, it contains many unsolved puzzles. To investigate some of those puzzles, we analyze X-ray surveys of galaxy clusters. These surveys are possible thanks to the bremsstrahlung emission of the intra-cluster medium. The simultaneous fit of cluster counts as a function of mass and distance provides an independent measure of cosmological parameters such as Ωm, σ8, and the dark energy equation of state w0. A novel approach to cosmological analysis using galaxy cluster data, called top-down, was developed in N. Clerc et al. (2012). This top-down approach is based purely on instrumental observables that are considered in a two-dimensional X-ray color-magnitude diagram. The method self-consistently includes selection effects and scaling relationships. It also provides a means of bypassing the computation of individual cluster masses. My work presents an extension of the top-down method by introducing the apparent size of the cluster, creating a three-dimensional X-ray cluster diagram. The size of a cluster is sensitive to both the cluster mass and its angular diameter distance, so it must also be included in the assessment of selection effects. The performance of this new method is investigated using a Fisher analysis. In parallel, I have studied the effects of the intrinsic scatter in the cluster size scaling relation on the sample selection as well as on the obtained cosmological parameters. To validate the method, I estimate the uncertainties of the cosmological parameters with an MCMC method and an Amoeba minimization routine, using two simulated XMM surveys of increasing complexity. The first simulated survey is a set of toy catalogues of 100 and 10000 deg², whereas the second is a 1000 deg² catalogue that was generated using an Aardvark semi-analytical N-body simulation. This comparison corroborates the conclusions of the Fisher analysis. In conclusion, I find that a cluster diagram that accounts

  5. Efficient Estimation of Spectral Moments and the Polarimetric Variables on Weather Radars, Sonars, Sodars, Acoustic Flow Meters, Lidars, and Similar Active Remote Sensing Instruments

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A method for estimation of Doppler spectrum, its moments, and polarimetric variables on pulsed weather radars which uses over sampled echo components at a rate...

  6. The relationship between glass ceiling and power distance as a cultural variable by a new method

    OpenAIRE

    Naide Jahangirov; Guler Saglam Ari; Seymur Jahangirov; Nuray Guneri Tosunoglu

    2015-01-01

    The glass ceiling symbolizes a variety of barriers and obstacles that arise from gender inequality in business life. With this in mind, culture influences gender dynamics. The purpose of this research was to examine the relationship between the glass ceiling and power distance as a cultural variable within organizations. Gender is taken as a moderator variable in the relationship between the two concepts. In addition to conventional correlation analysis, we employed a new method to investigate ...

  7. An Extended TOPSIS Method for Multiple Attribute Decision Making based on Interval Neutrosophic Uncertain Linguistic Variables

    Directory of Open Access Journals (Sweden)

    Said Broumi

    2015-03-01

    Full Text Available The interval neutrosophic uncertain linguistic variables can easily express indeterminate and inconsistent information in the real world, and TOPSIS is a very effective decision making method with more and more extensive applications. In this paper, we extend the TOPSIS method to deal with interval neutrosophic uncertain linguistic information, and propose an extended TOPSIS method to solve multiple attribute decision making problems in which the attribute values take the form of interval neutrosophic uncertain linguistic variables and the attribute weights are unknown. Firstly, the operational rules and properties for interval neutrosophic variables are introduced. Then the distance between two interval neutrosophic uncertain linguistic variables is proposed, the attribute weights are calculated by the maximizing deviation method, and the closeness coefficients to the ideal solution are computed for each alternative. Finally, an illustrative example is given to illustrate the decision making steps and the effectiveness of the proposed method.
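
    For orientation, a minimal sketch of the classical crisp TOPSIS procedure follows; the paper's extension replaces the crisp scores with interval neutrosophic uncertain linguistic values and derives the weights by maximizing deviation, neither of which is reproduced here. The scores and weights below are illustrative:

```python
# Minimal classical (crisp) TOPSIS sketch; data and weights are illustrative.
import numpy as np

def topsis(scores, weights, benefit):
    """scores: m alternatives x n attributes; benefit[j] True if attribute j
    is larger-is-better. Returns the closeness coefficient per alternative."""
    v = scores / np.linalg.norm(scores, axis=0) * weights    # weighted, normalized
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # positive ideal
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # negative ideal
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

scores = np.array([[7, 9, 9], [8, 7, 8], [9, 6, 8], [6, 7, 8]], float)
weights = np.array([0.5, 0.3, 0.2])
cc = topsis(scores, weights, benefit=np.array([True, True, True]))
print(cc.round(3), "ranking:", cc.argsort()[::-1])
```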

  8. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    Science.gov (United States)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    Controllers that use PID parameters require a good tuning method in order to improve control system performance. PID tuning methods are divided into two groups: classical methods and artificial intelligence methods. The particle swarm optimization algorithm (PSO) is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms in the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved the PSO-PID parameters by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method in the hydraulic positioning system.
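
    A bare-bones sketch of the PSO-PID idea follows, assuming a toy first-order plant and an integral-of-squared-error cost; the Variable Weight Grey-Taguchi DOE tuning of the PSO's own velocity limit and weight factor is omitted, and all constants are illustrative:

```python
# Bare-bones PSO tuning of PID gains on a toy first-order plant (dy/dt = -y + u),
# minimizing the integral of squared error for a unit step. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.01, 5.0
lo, hi = np.zeros(3), np.array([10.0, 10.0, 0.5])   # bounds on Kp, Ki, Kd

def ise(gains):
    kp, ki, kd = gains
    y = acc = e_prev = 0.0
    cost = 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y
        acc += e * dt                       # integral term
        u = kp * e + ki * acc + kd * (e - e_prev) / dt
        y += (-y + u) * dt                  # explicit Euler plant step
        cost += e * e * dt
        e_prev = e
    return cost

n, iters, vmax, w, c1, c2 = 20, 60, 2.0, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, (n, 3))
vel = np.zeros((n, 3))
pbest, pcost = pos.copy(), np.array([ise(p) for p in pos])
gbest = pbest[pcost.argmin()]
for _ in range(iters):
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    vel = np.clip(w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos),
                  -vmax, vmax)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([ise(p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    gbest = pbest[pcost.argmin()]
print("tuned (Kp, Ki, Kd):", gbest.round(2), " ISE:", round(pcost.min(), 4))
```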

  9. Influence of different manufacturing methods on the cyclic fatigue of rotary nickel-titanium endodontic instruments.

    Science.gov (United States)

    Rodrigues, Renata C V; Lopes, Hélio P; Elias, Carlos N; Amaral, Georgiana; Vieira, Victor T L; De Martin, Alexandre S

    2011-11-01

    The aim of this study was to evaluate, by static and dynamic cyclic fatigue tests, the number of cycles to fracture (NCF) of 2 types of rotary NiTi instruments: Twisted File (SybronEndo, Orange, CA), which is manufactured by a proprietary twisting process, and RaCe files (FKG Dentaire, La Chaux-de-Fonds, Switzerland), which are manufactured by grinding. Twenty Twisted Files (TFs) and 20 RaCe files, #25/.006 taper instruments, were allowed to rotate freely in an artificial curved canal at 310 rpm in a static or a dynamic model until fracture occurred. Measurements of the fractured fragments showed that fracture occurred at the point of maximum flexure in the midpoint of the curved segment. The NCF was significantly lower for RaCe instruments compared with TFs. The NCF was also lower for instruments subjected to the static test compared with the dynamic model in both groups. Scanning electron microscopic analysis revealed ductile morphologic characteristics on the fractured surfaces of all instruments and no plastic deformation in their helical shafts. Rotary NiTi endodontic instruments manufactured by twisting present greater resistance to cyclic fatigue compared with instruments manufactured by grinding. The fracture mode observed in all instruments was of the ductile type. Copyright © 2011 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
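
    As a point of reference, the NCF in such tests is simply the rotation speed multiplied by the measured time to fracture; a sketch with hypothetical numbers:

```python
# Hypothetical numbers: cycles to fracture from rotation speed and time.
rpm = 310
time_to_fracture_s = 95                 # illustrative measured value
ncf = rpm * time_to_fracture_s / 60     # ~491 cycles
print(round(ncf))
```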

  10. A Comparison of Methods to Test Mediation and Other Intervening Variable Effects

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.; Hoffman, Jeanne M.; West, Stephen G.; Sheets, Virgil

    2010-01-01

    A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect. PMID:11928892
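
    The recommended joint significance test can be sketched in a few lines: fit the two regressions for path a (X to M) and path b (M to Y, controlling for X) and require both coefficients to be individually significant. A minimal sketch on synthetic data, assuming statsmodels:

```python
# Joint significance test for an intervening variable (mediation) effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)              # path a
y = 0.4 * m + 0.1 * x + rng.normal(size=n)    # path b plus direct effect

fit_a = sm.OLS(m, sm.add_constant(x)).fit()
fit_b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()

p_a = fit_a.pvalues[1]    # coefficient of x in M ~ X
p_b = fit_b.pvalues[2]    # coefficient of m in Y ~ X + M
print(f"p(a) = {p_a:.3g}, p(b) = {p_b:.3g}, "
      f"mediation by joint significance: {p_a < 0.05 and p_b < 0.05}")
```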

  11. Propulsion and launching analysis of variable-mass rockets by analytical methods

    OpenAIRE

    D.D. Ganji; M. Gorji; M. Hatami; A. Hasanpour; N. Khademzadeh

    2013-01-01

    In this study, applications of some analytical methods on nonlinear equation of the launching of a rocket with variable mass are investigated. Differential transformation method (DTM), homotopy perturbation method (HPM) and least square method (LSM) were applied and their results are compared with numerical solution. An excellent agreement with analytical methods and numerical ones is observed in the results and this reveals that analytical methods are effective and convenient. Also a paramet...

  12. Instrumental neutron activation analysis of river habitants by the k(0)-standardization method

    International Nuclear Information System (INIS)

    Momoshima, N.; Toyoshima, T.; Matsushita, R.; Fukuda, A.; Hibino, K.

    2005-01-01

    Analysis of metal concentrations in samples usually relies on reference materials for determination, which means that elements not covered by the references cannot be determined. Instrumental neutron activation analysis (INAA) with the k(0)-standardization method makes it possible to determine metals without the use of reference materials, which is very attractive for environmental sample analysis. River inhabitants can serve as bio-indicators from which river water quality or metal contamination levels can be evaluated. We analyzed river fishes and river insects by INAA with k(0)-standardization to examine the suitability of these inhabitants as bio-indicators of the water system. Small fishes, Oryzias latipes and Gambusia affinis, were collected at 3 different rivers every month, and river insects of the families Heptageniidae, Baetidae, Perlidae, Hydropsychidae and Psephenidae were collected at a fixed point of the river. The dried samples were irradiated at the research reactor JRR-4 (3.5 MW), JAERI, for 10 min and 3 h. 17 elements (Na, K, Ca, Sc, Cr, Mn, Fe, Co, Zn, As, Se, Br, Rb, Sr, Ba, Ce and Sm) were determined by the NAA-k(0) method, showing the effectiveness of the present method for environmental sample analysis. Among the metals observed in the fishes, Ca was the highest and Sc the lowest, ranging from 10^5 mg/kg dry weight for Ca to 10^-2 mg/kg dry weight for Sc. Differences in metal concentrations were examined by statistical analysis with t-tests. Ca, Na and Br concentrations differed between the species Oryzias latipes and Gambusia affinis, and Fe, Sc, Co, Zn and Se concentrations differed among rivers. No difference was observed in K, Rb and Sr concentrations.

  13. Injection Methods and Instrumentation for Serial X-ray Free Electron Laser Experiments

    Science.gov (United States)

    James, Daniel

    Scientists have used X-rays to study biological molecules for nearly a century. Now with the X-ray free electron laser (XFEL), new methods have been developed to advance structural biology. These new methods include serial femtosecond crystallography, single particle imaging, solution scattering, and time resolved techniques. The XFEL is characterized by high intensity pulses, which are only about 50 femtoseconds in duration. The intensity allows for scattering from microscopic particles, while the short pulses offer a way to outrun radiation damage. XFELs are powerful enough to obliterate most samples in a single pulse. While this allows for a "diffract and destroy" methodology, it also requires instrumentation that can position microscopic particles into the X-ray beam (which may also be microscopic), continuously renew the sample after each pulse, and maintain sample viability during data collection. Typically these experiments have used liquid microjets to continuously renew sample. The high flow rate associated with liquid microjets requires large amounts of sample, most of which runs to waste between pulses. An injector designed to stream a viscous gel-like material called lipidic cubic phase (LCP) was developed to address this problem. LCP, commonly used as a growth medium for membrane protein crystals, lends itself to low flow rate jetting and so reduces the amount of sample wasted significantly. This work discusses sample delivery and injection for XFEL experiments. It reviews the liquid microjet method extensively, and presents the LCP injector as a novel device for serial crystallography, including detailed protocols for the LCP injector and anti-settler operation.

  14. Clinical, laboratory and instrumental methods of pre-surgical diagnosis of the parathyroid glands cancer

    Directory of Open Access Journals (Sweden)

    Natalia G. Mokrysheva

    2017-12-01

    Full Text Available Background. When assessing symptomatic primary hyperparathyroidism (PHPT), differential diagnosis between a benign and a malignant neoplasm of the parathyroid glands (PG) may be challenging. The diagnosis of carcinoma or a benign tumor determines the extent of the surgical intervention and the subsequent observation tactics. Aims. The purpose of the study is to determine the clinical, laboratory and instrumental predictors of PG cancer. Materials and methods. A retrospective study included 385 patients with PHPT (273 with adenomas of the PG, 66 with hyperplasia, and 19 patients with cancer of the PG) who had been examined and operated on from 2000 to 2014. The primary goal of the study was to define the levels of ionized calcium (Ca++) and parathyroid hormone (PTH) and the tumor volume specific for cancer of the PG. The level of PTH was determined by an electrochemiluminescence method on the Roche Cobas 6000 analyzer, and ionized calcium (Ca++) by an ion-selective method. The size of the PG was determined by the ellipse formula V (cm³) = A × B × C × 0.49 from ultrasound investigation using a Voluson E8 device from General Electric. Results. The group of patients with PG carcinoma showed increased levels of Ca++ of more than 1.60 mmol/l (p = 0.004) and increased levels of PTH of more than 600 pg/ml (p = 0.03). A tumor size of more than 6 cm³ is more typical of a malignant neoplasm than of an adenoma of the PG (p = 0.01). Conclusions. Patients with PHPT at risk of PG carcinoma include individuals with a combination of the following indicators: PTH levels of more than 600 pg/ml, ionized calcium of more than 1.60 mmol/l, and a tumor size of more than 6 cm³.
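
    The reported cut-offs combine into a simple screening rule; a sketch using the abstract's thresholds and the ellipse volume formula (the tumor dimensions below are hypothetical, and clinical use obviously requires far more than this check):

```python
# Screening-rule sketch; thresholds from the abstract, example numbers invented.
def pg_volume_cm3(a_cm, b_cm, c_cm):
    """Ellipse-formula volume used in the study: V = A x B x C x 0.49."""
    return a_cm * b_cm * c_cm * 0.49

def carcinoma_risk(pth_pg_ml, ca_ion_mmol_l, volume_cm3):
    """Combination of indicators the authors associate with PG carcinoma."""
    return pth_pg_ml > 600 and ca_ion_mmol_l > 1.60 and volume_cm3 > 6

v = pg_volume_cm3(3.1, 2.4, 1.9)
print(round(v, 1), carcinoma_risk(740, 1.68, v))   # 6.9 True
```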

  15. A damage detection method for instrumented civil structures using prerecorded Green’s functions and cross-correlation

    OpenAIRE

    Heckman, Vanessa; Kohler, Monica; Heaton, Thomas

    2011-01-01

    Automated damage detection methods have application to instrumented structures that are susceptible to types of damage that are difficult or costly to detect. The presented method has application to the detection of brittle fracture of welded beam-column connections in steel moment-resisting frames (MRFs), where locations of potential structural damage are known a priori. The method makes use of a prerecorded catalog of Green’s function templates and a cross-correlation method ...
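
    The template-matching core of such a method can be sketched as sliding a prerecorded Green's function template over the continuous record and flagging peaks of the normalized cross-correlation; everything below (window length, threshold, synthetic signals) is illustrative:

```python
# Sliding normalized cross-correlation of a prerecorded template against a
# continuous record; all parameters and signals here are synthetic.
import numpy as np

def detect(record, template, threshold=0.8):
    """Return (offset, correlation) pairs where the template matches."""
    m = len(template)
    t = (template - template.mean()) / template.std()
    hits = []
    for k in range(len(record) - m + 1):
        w = record[k:k + m]
        s = w.std()
        if s == 0.0:
            continue
        cc = np.dot((w - w.mean()) / s, t) / m   # Pearson correlation
        if cc > threshold:
            hits.append((k, round(cc, 3)))
    return hits

rng = np.random.default_rng(3)
tmpl = np.sin(np.linspace(0, 6 * np.pi, 120)) * np.hanning(120)
rec = rng.normal(0, 0.2, 2000)
rec[700:820] += tmpl                  # synthetic "event" buried in noise
print(detect(rec, tmpl)[:3])          # offsets near 700
```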

  16. Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Bertram, Anna

    2018-01-01

    Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...

  17. Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection

    DEFF Research Database (Denmark)

    Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald

    2013-01-01

    The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PL...

  18. P-Link: A method for generating multicomponent cytochrome P450 fusions with variable linker length

    DEFF Research Database (Denmark)

    Belsare, Ketaki D.; Ruff, Anna Joelle; Martinez, Ronny

    2014-01-01

    Fusion protein construction is a widely employed biochemical technique, especially when it comes to multi-component enzymes such as cytochrome P450s. Here we describe a novel method for generating fusion proteins with variable linker lengths, protein fusion with variable linker insertion (P...

  19. Assessing learning outcomes in middle-division classical mechanics: The Colorado Classical Mechanics and Math Methods Instrument

    Science.gov (United States)

    Caballero, Marcos D.; Doughty, Leanne; Turnbull, Anna M.; Pepper, Rachel E.; Pollock, Steven J.

    2017-06-01

    Reliable and validated assessments of introductory physics have been instrumental in driving curricular and pedagogical reforms that lead to improved student learning. As part of an effort to systematically improve our sophomore-level classical mechanics and math methods course (CM 1) at CU Boulder, we have developed a tool to assess student learning of CM 1 concepts in the upper division. The Colorado Classical Mechanics and Math Methods Instrument (CCMI) builds on faculty consensus learning goals and systematic observations of student difficulties. The result is a 9-question open-ended post test that probes student learning in the first half of a two-semester classical mechanics and math methods sequence. In this paper, we describe the design and development of this instrument, its validation, and measurements made in classes at CU Boulder and elsewhere.

  20. Assessing learning outcomes in middle-division classical mechanics: The Colorado Classical Mechanics and Math Methods Instrument

    Directory of Open Access Journals (Sweden)

    Marcos D. Caballero

    2017-04-01

    Full Text Available Reliable and validated assessments of introductory physics have been instrumental in driving curricular and pedagogical reforms that lead to improved student learning. As part of an effort to systematically improve our sophomore-level classical mechanics and math methods course (CM 1) at CU Boulder, we have developed a tool to assess student learning of CM 1 concepts in the upper division. The Colorado Classical Mechanics and Math Methods Instrument (CCMI) builds on faculty consensus learning goals and systematic observations of student difficulties. The result is a 9-question open-ended post test that probes student learning in the first half of a two-semester classical mechanics and math methods sequence. In this paper, we describe the design and development of this instrument, its validation, and measurements made in classes at CU Boulder and elsewhere.

  1. Trace elements determination in silicon and ferrosilicon reference materials by instrumental neutron activation analysis method

    International Nuclear Information System (INIS)

    Moreira, Edson Goncalves; Vasconcellos, Marina Beatriz Agostini; Saiki, Mitiko; Iamashita, Celia Omine

    2002-01-01

    The use of certified reference materials (CRMs) is of utmost importance in establishing the traceability of the measurement process. At times, CRM use is restricted by the non-existence of a suitable CRM that is similar to the sample with respect to matrix composition or whose element levels are of the same order of magnitude. The IPT Chemical Division launched a project to prepare a metallic silicon CRM in response to the requirements of the industries in this field. To characterize this new CRM, the IPEN Nuclear Reactor Center performed instrumental neutron activation analysis (INAA), a very suitable method for silicon matrix samples because, under a thermal neutron flux, they basically produce the short-lived radionuclide 31Si, which after radioactive decay does not interfere in the determination of other elements. In this paper we present the determination of As, Br, Co, Cr, K, Eu, Fe, La, Mn, Na, Nb, Sb, Sm, Sc, Th, Tb, U, V, W and Yb in silicon CRM NBS SRM 57; ferrosilicon CRMs IPT 56, IPT 70, NBS SRM 58a and NBS SRM 59a; and the silicon RM under preparation, IPT 132. From the results, the accuracy and precision of the process were assessed. (author)

  2. Waste minimization methods for treating analytical instrumentation effluents at the source

    International Nuclear Information System (INIS)

    Ritter, J.A.; Barnhart, C.

    1995-01-01

    The primary goal of this project was to reduce the amount of hazardous waste being generated by the Savannah River Site Defense Waste Processing Technology-Analytical Laboratory (DWPT-AL). A detailed characterization study was performed on 12 of the liquid effluent streams generated within the DWPT-AL. Two of the streams were not hazardous, and are now being collected separately from the 10 hazardous streams. A secondary goal of the project was to develop in-line methods, using primarily adsorption/ion exchange columns, to treat liquid effluent as it emerges from the analytical instrument as a slow, dripping flow. Samples from the 10 hazardous streams were treated by adsorption in an experimental apparatus that resembled an in-line or at-source column apparatus. The layered adsorbent bed contained activated carbon and ion exchange resin. The column technique did not work on the first three samples of the spectroscopy waste stream, but worked well on the next three samples, which were treated in a different column. It was determined that an unusual form of mercury was present in the first three samples. Similarly, two samples of a combined waste stream were rendered nonhazardous, but the last two samples contained acetonitrile that prevented analysis. The characteristics of these streams changed from the initial characterization study; therefore, continual, in-depth stream characterization is the key to making this project successful.

  3. The alignment of the LHC low beta triplets. Review of instrumentation and methods

    International Nuclear Information System (INIS)

    Coosemans, W.; Mainaud Durand, H.; Marin, A.; Quesnel, J-P.

    2003-01-01

    Alignment tolerances for the LHC insertions are particularly stringent for the low beta quadrupoles, which face strict positioning requirements in a severe environment (high radiation fluxes and magnetic fields): positioning of one inner triplet with respect to the other (left/right side): ±0.5 mm (3σ); stability of the positioning of one quadrupole inside its triplet: a few microns. We propose to continuously monitor the relative position of the quadrupoles of one inner triplet with respect to a reference frame materialized by a wire and a water surface, and to use common references to link the triplet on one side to the triplet on the other side of the experiment. When the offset between the real and reference positions becomes too great, the quadrupole will be moved using remote motorized jacks. Instrumentation (HLS, WPS, radial measuring system, etc.) and methods will be detailed, as well as the first results obtained on a cryo-magnet prototype named TAP used as a test facility. The TAP is equipped with HLS linked by two types of hydraulic networks (two pipes with air and water separated; one pipe half filled), WPS and one inclinometer. It is installed on three polyurethane motorized jacks in order to study and compare servo positioning using the different sensors. (author)

  4. An effective method for finding special solutions of nonlinear differential equations with variable coefficients

    International Nuclear Information System (INIS)

    Qin Maochang; Fan Guihong

    2008-01-01

    There are many interesting methods that can be utilized to construct special solutions of nonlinear differential equations with constant coefficients. However, most of these methods are not applicable to nonlinear differential equations with variable coefficients. A new method is presented in this Letter, which can be used to find special solutions of nonlinear differential equations with variable coefficients. This method is based on seeking an appropriate Bernoulli equation corresponding to the equation studied. Many well-known equations are chosen to illustrate the application of this method
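
    For orientation, the classical Bernoulli reduction that such an approach builds on runs as follows (a standard textbook illustration, not the Letter's specific construction):

```latex
% Standard Bernoulli reduction (illustration only, not the Letter's construction).
\[
  y' + p(x)\,y = q(x)\,y^{n}, \qquad n \neq 0,1 .
\]
% The substitution u = y^{1-n} linearizes the equation,
\[
  u = y^{1-n} \quad\Longrightarrow\quad u' + (1-n)\,p(x)\,u = (1-n)\,q(x),
\]
% which an integrating factor solves even for variable p(x), q(x).
% Example: y' + y/x = x y^2 gives, with u = 1/y,
\[
  u' - \frac{u}{x} = -x \quad\Longrightarrow\quad u = Cx - x^{2},
  \qquad y = \frac{1}{Cx - x^{2}} .
\]
```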

  5. Comparison of different calibration methods suited for calibration problems with many variables

    DEFF Research Database (Denmark)

    Holst, Helle

    1992-01-01

    This paper describes and compares different kinds of statistical methods proposed in the literature as suited for solving calibration problems with many variables. These are: principal component regression, partial least-squares, and ridge regression. The statistical techniques themselves do...

  6. Using traditional methods and indigenous technologies for coping with climate variability

    NARCIS (Netherlands)

    Stigter, C.J.; Zheng Dawei,; Onyewotu, L.O.Z.; Mei Xurong,

    2005-01-01

    In agrometeorology and management of meteorology related natural resources, many traditional methods and indigenous technologies are still in use or being revived for managing low external inputs sustainable agriculture (LEISA) under conditions of climate variability. This paper starts with the

  7. Improved method for solving the neutron transport problem by discretization of space and energy variables

    International Nuclear Information System (INIS)

    Bosevski, T.

    1971-01-01

    Polynomial interpolation of the neutron flux between chosen space and energy variables enables transformation of the integral transport equation into a system of linear equations with constant coefficients. The solutions of this system are the needed flux values at the chosen values of the space and energy variables. The proposed improved method for solving the neutron transport problem, including its mathematical formalism, is simple and efficient, since the number of needed input data is decreased in the treatment of both the spatial and energy variables. A mathematical method based on this approach gives more stable solutions with a significantly decreased probability of numerical errors. A computer code based on the proposed method was used for calculations of one heavy water and one light water reactor cell, and the results were compared to the results of other very precise calculations. The proposed method performed better in terms of convergence rate, computing time and required computer memory. Discretization of the variables enabled direct comparison of theoretical and experimental results.

  8. Use of the mathematical modelling method for the investigation of dynamic characteristics of acoustical measuring instruments

    Science.gov (United States)

    Vasilyev, Y. M.; Lagunov, L. F.

    1973-01-01

    A schematic diagram of a noise-measuring device is presented that uses pulse-expansion modeling of the peak or any other measured values to obtain instrument readings with very low noise error.

  9. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three main foci. First, the study uses five imputation methods to handle missing values. Second, the key variables are identified via factor analysis, and the unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied with variable selection over the full set of variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
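
    The last two stages of the pipeline can be sketched with standard tools; a minimal sketch on synthetic data, assuming scikit-learn, where mean imputation stands in for whichever of the five imputation methods is chosen:

```python
# Sketch of the final pipeline stages on synthetic data (scikit-learn assumed).
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 6))                          # stand-ins for predictors
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 500)  # synthetic water level
X[rng.random(X.shape) < 0.05] = np.nan                 # 5% missing, as in raw data

X_imp = SimpleImputer(strategy="mean").fit_transform(X)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_imp[:-50], y[:-50])                        # train on all but last 50
print("holdout R^2:", round(model.score(X_imp[-50:], y[-50:]), 3))
```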

  10. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir’s water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three main foci. First, the study uses five imputation methods to handle missing values. Second, the key variables are identified via factor analysis, and the unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir’s water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied with variable selection over the full set of variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.

  11. Model reduction method using variable-separation for stochastic saddle point problems

    Science.gov (United States)

    Jiang, Lijian; Li, Qiuqi

    2018-02-01

    In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to get a low-rank separated representation of the solution for the SSP problem in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variant, variable-separation by penalty. This avoids further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For applications to SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.

  12. Latent variable method for automatic adaptation to background states in motor imagery BCI

    Science.gov (United States)

    Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei

    2018-02-01

    Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variability in the background states of a user. Usually, no detailed information on these states is available even during the training stage. Thus there is a need for a method that is capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method that is based on a probabilistic model with a discrete latent variable. In order to estimate the model’s parameters, we suggest using the expectation maximization algorithm. The proposed method is aimed at assessing the characteristics of background states without any corresponding data labeling. In the context of an asynchronous motor imagery paradigm, we applied this method to real data from twelve able-bodied subjects, with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, we found that our method was also capable of background state recognition (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then by making decisions on target states weighted by the posterior probabilities of the background states at the prediction stage.
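
    The essence of the approach, a discrete latent background state estimated by expectation maximization on unlabeled data, can be sketched with a toy two-component Gaussian mixture on 1-D features; real EEG features are multivariate and the paper's model details differ:

```python
# Toy EM sketch: fit a two-state Gaussian mixture to unlabeled 1-D features;
# the per-sample posteriors over the latent state could then weight
# target-state decisions at prediction time.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 300)])

mu, sd, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities (posterior state probabilities per sample).
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate state parameters from the weighted samples.
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

print("state means:", mu.round(2), " mixing:", pi.round(2))
```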

  13. A comprehensive review of sensors and instrumentation methods in devices for musical expression.

    Science.gov (United States)

    Medeiros, Carolina Brum; Wanderley, Marcelo M

    2014-07-25

    Digital Musical Instruments (DMIs) are musical instruments typically composed of a control surface where user interaction is measured by sensors whose values are mapped to sound synthesis algorithms. These instruments have gained interest among skilled musicians and performers in the last decades, leading to artistic practices including musical performance, interactive installations and dance. The creation of DMIs typically involves several areas, among them arts, design and engineering. The balance between these areas is an essential task in DMI design, so that the resulting instruments are aesthetically appealing, robust, and allow responsive, accurate and repeatable sensing. In this paper, we review the use of sensors in the DMI community as manifested in the proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2009-2013), focusing on the sensor technologies and signal conditioning techniques used by the NIME community. Although it has been claimed that specifications for artistic tools are harder than those for military applications, this study raises a paradox, showing that in most cases DMIs are based on a few basic sensor types and unsophisticated engineering solutions, not taking advantage of more advanced sensing, instrumentation and signal processing techniques that could dramatically improve their response. We aim to raise awareness of the limitations of any engineering solution and to assert the benefits of advanced electronic instrumentation design in DMIs. To this end, we propose the use of specialized sensors such as strain gages, advanced conditioning circuits and signal processing tools such as sensor fusion. We believe that careful electronic instrumentation design may lead to more responsive instruments.

  14. A Comprehensive Review of Sensors and Instrumentation Methods in Devices for Musical Expression

    Directory of Open Access Journals (Sweden)

    Carolina Brum Medeiros

    2014-07-01

    Full Text Available Digital Musical Instruments (DMIs) are musical instruments typically composed of a control surface where user interaction is measured by sensors whose values are mapped to sound synthesis algorithms. These instruments have gained interest among skilled musicians and performers in the last decades, leading to artistic practices including musical performance, interactive installations and dance. The creation of DMIs typically involves several areas, among them arts, design and engineering. The balance between these areas is an essential task in DMI design, so that the resulting instruments are aesthetically appealing, robust, and allow responsive, accurate and repeatable sensing. In this paper, we review the use of sensors in the DMI community as manifested in the proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2009–2013), focusing on the sensor technologies and signal conditioning techniques used by the NIME community. Although it has been claimed that specifications for artistic tools are harder than those for military applications, this study raises a paradox, showing that in most cases DMIs are based on a few basic sensor types and unsophisticated engineering solutions, not taking advantage of more advanced sensing, instrumentation and signal processing techniques that could dramatically improve their response. We aim to raise awareness of the limitations of any engineering solution and to assert the benefits of advanced electronic instrumentation design in DMIs. To this end, we propose the use of specialized sensors such as strain gages, advanced conditioning circuits and signal processing tools such as sensor fusion. We believe that careful electronic instrumentation design may lead to more responsive instruments.

  15. Selection of variables for neural network analysis. Comparisons of several methods with high energy physics data

    International Nuclear Information System (INIS)

    Proriol, J.

    1994-01-01

    Five different methods are compared for selecting the most important variables with a view to classifying high energy physics events with neural networks. The methods are: the F-test, Principal Component Analysis (PCA), a decision tree method (CART), weight evaluation, and Optimal Cell Damage (OCD). The neural networks use the variables selected by the different methods, and we compare the percentage of events properly classified by each neural network. The learning set and the test set are the same for all the neural networks. (author)
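
    As an illustration of the first of these selection methods, the F-test can be applied with standard tooling; a minimal sketch on a synthetic stand-in for event variables, assuming scikit-learn:

```python
# F-test variable selection on a synthetic stand-in for event variables.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
selector = SelectKBest(f_classif, k=5).fit(X, y)
print("selected variable indices:", np.flatnonzero(selector.get_support()))
# The selected columns would then be fed to the neural network classifier.
```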

  16. The instruments in the first psychological laboratory in Mexico: antecedents, influence, and methods.

    Science.gov (United States)

    Escobar, Rogelio

    2014-11-01

    Enrique O. Aragón established the first psychological laboratory in Mexico in 1916. This laboratory was inspired by Wundt's laboratory and by those created afterward in Germany and the United States. It was equipped with state-of-the-art instruments imported from Germany in 1902 from Ernst Zimmermann, who supplied instruments for Wundt's laboratory. Although previous authors have described the social events leading to the creation of the laboratory, there are limited descriptions of the instruments, their use, and their influence. With the aid of archival resources, the initial location of the laboratory was determined. The analysis of instruments revealed a previously overlooked relation with an earlier laboratory of experimental physiology. The influence of the laboratory was traced by describing the careers of 4 students, 3 of them women, who worked with the instruments during the first 2 decades of the 20th century, each becoming an accomplished scholar. In addition, this article, by identifying and analyzing the instruments shown in photographs of the psychological laboratory and in 1 motion film, provides information on the class demonstrations and the experiments conducted in this laboratory.

  17. A generalized fractional sub-equation method for fractional differential equations with variable coefficients

    International Nuclear Information System (INIS)

    Tang, Bo; He, Yinnian; Wei, Leilei; Zhang, Xindong

    2012-01-01

    In this Letter, a generalized fractional sub-equation method is proposed for solving fractional differential equations with variable coefficients. Being concise and straightforward, this method is applied to the space–time fractional Gardner equation with variable coefficients. As a result, many exact solutions are obtained including hyperbolic function solutions, trigonometric function solutions and rational solutions. It is shown that the considered method provides a very effective, convenient and powerful mathematical tool for solving many other fractional differential equations in mathematical physics. -- Highlights: ► Study of fractional differential equations with variable coefficients plays a role in applied physical sciences. ► It is shown that the proposed algorithm is effective for solving fractional differential equations with variable coefficients. ► The obtained solutions may give insight into many considerable physical processes.

  18. The relationship between venture capital investment and macro economic variables via statistical computation method

    Science.gov (United States)

    Aygunes, Gunes

    2017-07-01

    The objective of this paper is to survey and determine the macroeconomic factors affecting the level of venture capital (VC) investments in a country. The literature focuses on venture capitalists' quality and countries' venture capital investments. The aim of this paper is to characterize the relationship between venture capital investment and macroeconomic variables via a statistical computation method. We investigate the countries and macroeconomic variables, and by using the statistical computation method we derive the correlations between venture capital investments and macroeconomic variables. According to the logistic regression model (logit regression or logit model), the macroeconomic variables are correlated with each other in three groups. Venture capitalists regard these correlations as an indicator. Finally, we give the correlation matrix of our results.

  19. Assessing variable rate nitrogen fertilizer strategies within an extensively instrumented field site using the MicroBasin model

    Science.gov (United States)

    Ward, N. K.; Maureira, F.; Yourek, M. A.; Brooks, E. S.; Stockle, C. O.

    2014-12-01

    The current use of synthetic nitrogen fertilizers in agriculture has many negative environmental and economic costs, necessitating improved nitrogen management. In the highly heterogeneous landscape of the Palouse region in eastern Washington and northern Idaho, crop nitrogen needs vary widely within a field. Site-specific nitrogen management is a promising strategy to reduce excess nitrogen lost to the environment while maintaining current yields by matching crop needs with inputs. This study used in-situ hydrologic, nutrient, and crop yield data from a heavily instrumented field site in the high precipitation zone of the wheat-producing Palouse region to assess the performance of the MicroBasin model. MicroBasin is a high-resolution watershed-scale ecohydrologic model with nutrient cycling and cropping algorithms based on the CropSyst model. Detailed soil mapping conducted at the site was used to parameterize the model and the model outputs were evaluated with observed measurements. The calibrated MicroBasin model was then used to evaluate the impact of various nitrogen management strategies on crop yield and nitrate losses. The strategies include uniform application as well as delineating the field into multiple zones of varying nitrogen fertilizer rates to optimize nitrogen use efficiency. We present how coupled modeling and in-situ data sets can inform agricultural management and policy to encourage improved nitrogen management.

  20. Data Quality Control: Challenges, Methods, and Solutions from an Eco-Hydrologic Instrumentation Network

    Science.gov (United States)

    Eiriksson, D.; Jones, A. S.; Horsburgh, J. S.; Cox, C.; Dastrup, D.

    2017-12-01

    Over the past few decades, advances in electronic dataloggers and in situ sensor technology have revolutionized our ability to monitor air, soil, and water to address questions in the environmental sciences. The increased spatial and temporal resolution of in situ data is alluring. However, an often overlooked aspect of these advances is the challenge data managers and technicians face in performing quality control on millions of data points collected every year. While there is general agreement that high quantities of data offer little value unless the data are of high quality, it is commonly understood that despite efforts toward quality assurance, environmental data collection occasionally goes wrong. After identifying erroneous data, data managers and technicians must determine whether to flag, delete, leave unaltered, or retroactively correct suspect data. While individual instrumentation networks often develop their own QA/QC procedures, there is a scarcity of consensus and literature regarding specific solutions and methods for correcting data. This may be because back-correction efforts are time consuming, so suspect data are often simply abandoned. Correction techniques are also rarely reported in the literature, likely because corrections are often performed by technicians rather than the researchers who write the scientific papers. Details of correction procedures are often glossed over as a minor component of data collection and processing. To help address this disconnect, we present case studies of quality control challenges, solutions, and lessons learned from a large scale, multi-watershed environmental observatory in Northern Utah that monitors Gradients Along Mountain to Urban Transitions (GAMUT). The GAMUT network consists of over 40 individual climate, water quality, and storm drain monitoring stations that have collected more than 200 million unique data points in four years of operation. In all of our examples, we emphasize that scientists

  1. The Signal Validation method of Digital Process Instrumentation System on signal conditioner for SMART

    International Nuclear Information System (INIS)

    Moon, Hee Gun; Park, Sang Min; Kim, Jung Seon; Shon, Chang Ho; Park, Heui Youn; Koo, In Soo

    2005-01-01

    The function of the PIS (Process Instrumentation System) for SMART is to acquire process data from sensors or transmitters. The PIS consists of a signal conditioner, an A/D converter, a DSP (Digital Signal Processor) and a NIC (Network Interface Card), so the system is fully digital downstream of the A/D converter. The PI cabinet and PDAS (Plant Data Acquisition System) in a commercial plant are responsible for acquiring data from sensors and transmitters, including RTD, TC, level, flow, pressure and so on. The PDAS has software that processes each sensor's data, and the PI cabinet has the signal conditioner, which is needed for maintenance and testing. The signal conditioner has potentiometers to adjust the span and zero for test and maintenance. The PIS of SMART also has a signal conditioner with span and zero adjustment, as in a commercial plant, because the signal conditioner conditions the signal for the A/D converter to a range such as 0-10 Vdc. However, adjusting the span and zero is a manual test and calibration task. This paper therefore presents a method of signal validation and calibration that exploits the digital features of SMART. The converter types include I/E (current to voltage), R/E (resistance to voltage), F/E (frequency to voltage), V/V (voltage to voltage), etc. This paper presents the signal validation and calibration only for the I/E converter, which converts level, pressure and flow signals in the 4-20 mA range into signals for A/D conversion in the 0-10 Vdc range
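
    The I/E range check and scaling described above amount to a linear map from the 4-20 mA live-zero loop range onto the 0-10 Vdc A/D input, with out-of-range currents flagging a sensor or loop fault. A minimal sketch (the 0.2 mA tolerance band is our assumption, not a SMART specification):

```python
# Linear 4-20 mA to 0-10 Vdc mapping with a live-zero range check.
def ie_convert(i_ma, lo=4.0, hi=20.0, v_full=10.0):
    """Convert loop current [mA] to conditioner output [V] and validate it."""
    if not (lo - 0.2) <= i_ma <= (hi + 0.2):   # tolerance band is illustrative
        raise ValueError(f"loop current {i_ma} mA out of range: check sensor")
    return (i_ma - lo) / (hi - lo) * v_full

print(ie_convert(12.0))       # mid-scale -> 5.0 V
try:
    ie_convert(2.1)           # below live zero: broken wire or failed sensor
except ValueError as err:
    print(err)
```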

  2. Current status of the Essential Variables as an instrument to assess the Earth Observation Networks in Europe

    Science.gov (United States)

    Blonda, Palma; Maso, Joan; Bombelli, Antonio; Plag, Hans Peter; McCallum, Ian; Serral, Ivette; Nativi, Stefano Stefano

    2016-04-01

    ConnectinGEO ("Coordinating an Observation Network of Networks EnCompassing saTellite and IN-situ to fill the Gaps in European Observations") is an H2020 Coordination and Support Action with the primary goal of linking existing Earth Observation networks with science and technology (S&T) communities, the industry sector, the Group on Earth Observations (GEO), and Copernicus. The project will end in February 2017. Essential Variables (EVs) are defined by ConnectinGEO as "a minimal set of variables that determine the system's state and developments, are crucial for predicting system developments, and allow us to define metrics that measure the trajectory of the system". Specific application-dependent characteristics, such as spatial and temporal resolution of observations and data quality thresholds, are not generally included in the EV definition. This definition and the present status of EV developments in different societal benefit areas were elaborated at the ConnectinGEO workshop "Towards a sustainability process for GEOSS Essential Variables (EVs)," which was held in Bari on June 11-12, 2015 (http://www.gstss.org/2015_Bari/). Presentations and reports contributed by a wide range of communities provided important inputs from different sectors for assessing the status of EV development. In most thematic areas, the development of sets of EVs is a community process leading to an agreement on what is essential for the goals of the community. While there are many differences across the communities in the details of the criteria, methodologies and processes used to develop sets of EVs, there is also a considerable common core across the communities, particularly those with a more advanced discussion. In particular, there is some level of overlap between different topics (e.g., Climate and Water), and there is potential to develop an integrated set of EVs common to several thematic areas as well as specific ones that satisfy only one community. The thematic areas with

  3. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed that iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in…
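
    A compact sketch of backward elimination for a random forest, in the spirit of the procedure compared above; the simulated data and the 20% drop fraction are illustrative assumptions, not the StreamCat setup:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Simulated stand-in for the stream-condition data (n sites, p predictors).
        X, y = make_classification(n_samples=500, n_features=50, n_informative=8,
                                   random_state=0)
        keep = np.arange(X.shape[1])

        while len(keep) >= 5:
            rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                                        random_state=0).fit(X[:, keep], y)
            cv = cross_val_score(rf, X[:, keep], y, cv=5).mean()
            # Caveat from the abstract: OOB (and any CV run inside this loop) can be
            # optimistically biased; honest folds must sit outside the selection loop.
            print(f"{len(keep):3d} variables: OOB={rf.oob_score_:.3f} CV={cv:.3f}")
            order = np.argsort(rf.feature_importances_)   # ascending importance
            keep = keep[order[max(1, len(keep) // 5):]]   # drop least-important ~20%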

  4. Development of a quality instrument for assessing the spontaneous reports of ADR/ADE using Delphi method in China.

    Science.gov (United States)

    Chen, Lixun; Jiang, Ling; Shen, Aizong; Wei, Wei

    2016-09-01

    The frequently low quality of submitted spontaneous reports is of increasing concern; to our knowledge, no validated instrument exists for assessing the quality of case reports comprehensively enough. This work was conducted to develop such a quality instrument for assessing the spontaneous reports of adverse drug reaction (ADR)/adverse drug event (ADE) in China. Initial evaluation indicators were generated using systematic and literature data analysis. Final indicators and their weights were identified using the Delphi method. The final quality instrument was developed by adopting the synthetic scoring method. A consensus was reached after four rounds of the Delphi survey. The developed quality instrument consists of 6 first-rank indicators, 18 second-rank indicators, and 115 third-rank indicators, with each indicator weighted. It evaluates the quality of spontaneous reports of ADR/ADE comprehensively and quantitatively on six parameters: authenticity, duplication, regulatory, completeness, vigilance level, and reporting time frame. The developed instrument was tested with good reliability and validity, and it can be used to comprehensively and quantitatively assess the submitted spontaneous reports of ADR/ADE in China.

  5. A stochastic Galerkin method for the Euler equations with Roe variable transformation

    KAUST Repository

    Pettersson, Per; Iaccarino, Gianluca; Nordström, Jan

    2014-01-01

    The Euler equations subject to uncertainty in the initial and boundary conditions are investigated via the stochastic Galerkin approach. We present a new fully intrusive method based on a variable transformation of the continuous equations. Roe variables are employed to get quadratic dependence in the flux function and a well-defined Roe average matrix that can be determined without matrix inversion. In previous formulations based on generalized polynomial chaos expansion of the physical variables, the need to introduce stochastic expansions of inverse quantities, or square roots of stochastic quantities of interest, adds to the number of possible different ways to approximate the original stochastic problem. We present a method where the square roots occur in the choice of variables, resulting in an unambiguous problem formulation. The Roe formulation saves computational cost compared to the formulation based on expansion of conservative variables. Moreover, the Roe formulation is more robust and can handle cases of supersonic flow, for which the conservative variable formulation fails to produce a bounded solution. For certain stochastic basis functions, the proposed method can be made more effective and well-conditioned. This leads to increased robustness for both choices of variables. We use a multi-wavelet basis that can be chosen to include a large number of resolution levels to handle more extreme cases (e.g. strong discontinuities) in a robust way. For smooth cases, the order of the polynomial representation can be increased for increased accuracy. © 2013 Elsevier Inc.

  6. Qualitative to Quantitative and Spectrum to Report: An Instrument-Focused Research Methods Course for First-Year Students

    Science.gov (United States)

    Thomas, Alyssa C.; Boucher, Michelle A.; Pulliam, Curtis R.

    2015-01-01

    Our Introduction to Research Methods course is a first-year majors course built around the idea of helping students learn to work like chemists, write like chemists, and think like chemists. We have developed this course as a hybrid hands-on/lecture experience built around instrumentation use and report preparation. We take the product from one…

  7. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    Science.gov (United States)

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
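
    A schematic EKF step with preemptive constraint clamping in the spirit of the description above; the toy model, Jacobians and bounds are placeholders, not the IGCC plant model:

        import numpy as np

        def ekf_step(x, P, u, z, f, h, F, H, Q, R, lo, hi):
            # Predict with the dynamic model and process noise.
            x_pred = np.clip(f(x, u), lo, hi)        # preemptive state constraint
            P_pred = F @ P @ F.T + Q
            # Measurement correction from sensed plant outputs.
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = np.clip(x_pred + K @ (z - h(x_pred)), lo, hi)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # Toy linear 2-state plant (stand-in for the real dynamic model).
        A = np.array([[0.95, 0.1], [0.0, 0.9]]); B = np.array([[0.0], [0.1]])
        C = np.array([[1.0, 0.0]])
        f = lambda x, u: A @ x + (B @ u).ravel()
        h = lambda x: C @ x
        x, P = np.zeros(2), np.eye(2)
        x, P = ekf_step(x, P, np.array([1.0]), np.array([0.05]), f, h,
                        A, C, 1e-3 * np.eye(2), 1e-2 * np.eye(1),
                        lo=np.zeros(2), hi=np.ones(2) * 10.0)
        print(x)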

  8. VOLUMETRIC METHOD FOR EVALUATION OF BEACHES VARIABILITY BASED ON GIS-TOOLS

    Directory of Open Access Journals (Sweden)

    V. V. Dolotov

    2015-01-01

    Full Text Available In the frame of cadastral beach evaluation, a volumetric method for an index of natural variability is proposed. It is based on spatial calculations with the Cut-Fill method and on accounting for the volume of both the common beach contour and specific areas at each time.

  9. Control Method for Variable Speed Wind Turbines to Support Temporary Primary Frequency Control

    DEFF Research Database (Denmark)

    Wang, Haijiao; Chen, Zhe; Jiang, Quanyuan

    2014-01-01

    This paper develops a control method for variable speed wind turbines (VSWTs) to support temporary primary frequency control of a power system. The control method contains two parts: (1) up-regulate support control when a frequency drop event occurs; (2) down-regulate support control when a frequen…

  10. A design method of compensators for multi-variable control system with PID controllers 'CHARLY'

    International Nuclear Information System (INIS)

    Fujiwara, Toshitaka; Yamada, Katsumi

    1985-01-01

    A systematic design method of compensators for a multi-variable control system having usual PID controllers in its loops is presented in this paper. The method is able to: determine the main manipulating variable corresponding to each controlled variable by a sensitivity analysis in the frequency domain; tune the PID controllers sufficiently to realize adequate control actions by searching for minimum values of cost functionals; design compensators that improve the control performance; and simulate the total system to confirm the designed compensators. In the compensator design phase, the state variable feedback gain is obtained by means of the OPTIMAL REGULATOR THEORY for the composite system of plant and PID controllers. Transfer-function-type compensators, whose configurations were previously given, are then designed to approximate the frequency responses of the above-mentioned state feedback system. An example is illustrated for convenience. (author)

  11. A Novel Flood Forecasting Method Based on Initial State Variable Correction

    Directory of Open Access Journals (Sweden)

    Kuang Li

    2017-12-01

    Full Text Available The influence of initial state variables on the accuracy of flood forecasting with conceptual hydrological models is analyzed in this paper, and a novel flood forecasting method based on correction of initial state variables is proposed. The new method is abbreviated as ISVC (Initial State Variable Correction). The ISVC takes the residual between the measured and forecasted flows during the initial period of the flood event as the objective function, and it uses a particle swarm optimization algorithm to correct the initial state variables, which are then used to drive the flood forecasting model. Historical flood events of 11 watersheds in south China are forecasted and verified, and important issues concerning the ISVC application are then discussed. The study results show that the ISVC is effective and applicable in flood forecasting tasks. It can significantly improve flood forecasting accuracy in most cases.
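
    A toy version of the ISVC loop: a global-best PSO adjusts the initial state of a stand-in recession model so that forecasted flows match "measured" flows over the initial period. The model and data here are synthetic placeholders, not an actual conceptual hydrological model:

        import numpy as np

        rng = np.random.default_rng(0)

        def forecast(init_state, n=24):
            # Stand-in hydrological model: exponential recession from storage.
            return init_state[0] * np.exp(-init_state[1] * np.arange(n))

        measured = forecast(np.array([50.0, 0.08])) + rng.normal(0, 0.5, 24)

        def objective(s):
            # Residual between measured and forecasted flows (initial period).
            return np.sum((forecast(s) - measured) ** 2)

        # Basic global-best PSO over the two initial state variables.
        lo, hi = np.array([1.0, 0.01]), np.array([100.0, 0.5])
        pos = rng.uniform(lo, hi, size=(30, 2))
        vel = np.zeros_like(pos)
        pbest, pcost = pos.copy(), np.array([objective(p) for p in pos])
        for _ in range(100):
            g = pbest[pcost.argmin()]
            vel = 0.7 * vel + 1.5 * rng.random(pos.shape) * (pbest - pos) \
                            + 1.5 * rng.random(pos.shape) * (g - pos)
            pos = np.clip(pos + vel, lo, hi)
            cost = np.array([objective(p) for p in pos])
            better = cost < pcost
            pbest[better], pcost[better] = pos[better], cost[better]

        print("corrected initial state:", pbest[pcost.argmin()])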

  12. The application of seasonal latent variable in forecasting electricity demand as an alternative method

    International Nuclear Information System (INIS)

    Sumer, Kutluk Kagan; Goktas, Ozlem; Hepsag, Aycan

    2009-01-01

    In this study, we used ARIMA, seasonal ARIMA (SARIMA) and, alternatively, a regression model with a seasonal latent variable to forecast electricity demand, using data belonging to the 'Kayseri and Vicinity Electricity Joint-Stock Company' over the period 1997:1-2005:12. The study compares the forecasting performance of the ARIMA and SARIMA methods with that of the model with a seasonal latent variable. The results support that ARIMA and SARIMA models are unsuccessful in forecasting electricity demand. The regression model with a seasonal latent variable used in this study gives more successful results than the ARIMA and SARIMA models because it can also account for seasonal fluctuations and structural breaks.
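
    One common way to realize such a seasonal component is a regression on monthly dummies plus a trend; a sketch on a simulated series (not the Kayseri data) follows:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 108  # e.g. 9 years of monthly observations
        t = np.arange(n)
        month = t % 12
        demand = (100 + 0.5 * t + 10 * np.sin(2 * np.pi * month / 12)
                  + rng.normal(0, 2, n))

        # Design matrix: intercept, linear trend, 11 monthly dummies.
        X = np.column_stack([np.ones(n), t] +
                            [(month == m).astype(float) for m in range(1, 12)])
        beta, *_ = np.linalg.lstsq(X, demand, rcond=None)
        fitted = X @ beta
        print("RMSE with seasonal dummies:",
              np.sqrt(np.mean((demand - fitted) ** 2)))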

  13. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
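
    The accept/reject decision in single-sided acceptance sampling by variables (unknown standard deviation, k-method) compares (USL - mean)/s with an acceptability constant k; the sample values and k below are invented for illustration, not the NESC plan parameters:

        import statistics

        def accept_lot(sample, usl, k):
            """Accept if (USL - mean) / s >= k, the standard variables criterion."""
            xbar = statistics.mean(sample)
            s = statistics.stdev(sample)
            return (usl - xbar) / s >= k

        measurements = [9.2, 9.8, 10.1, 9.5, 9.9, 10.0, 9.4, 9.7]
        print(accept_lot(measurements, usl=12.0, k=1.70))  # True -> accept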

  14. A method to forecast quantitative variables relating to nuclear public acceptance

    International Nuclear Information System (INIS)

    Ohnishi, T.

    1992-01-01

    A methodology is proposed for forecasting the future trend of quantitative variables profoundly related to the public acceptance (PA) of nuclear energy. The social environment influencing PA is first modeled by breaking it down into a finite number of fundamental elements; the interactive formulae between the quantitative variables, which are attributed to and characterize each element, are then determined by using the actual values of the variables in the past. Inputting the estimated values of exogenous variables into these formulae, the forecast values of endogenous variables can finally be obtained. Using this method, the problem of nuclear PA in Japan is treated as an example, in which the context is considered to comprise a public sector and the general social environment and socio-psychology. The public sector is broken down into three elements: the general public, the inhabitants living around nuclear facilities, and the activists of anti-nuclear movements, whereas the social environment and socio-psychological factors are broken down into several elements, such as news media and psychological factors. Twenty-seven endogenous and seven exogenous variables are introduced to quantify these elements. After quantitatively formulating the interactive features between them and extrapolating the exogenous variables into the future, estimates are made of the growth or attenuation of the endogenous variables, such as the pro- and anti-nuclear fractions in public opinion polls and the frequency of occurrence of anti-nuclear movements. (author)

  15. [Ablation on the undersurface of a LASIK flap. Instrument and method for continuous eye tracking].

    Science.gov (United States)

    Taneri, S; Azar, D T

    2007-02-01

    The risk of iatrogenic keratectasia after laser in situ keratomileusis (LASIK) increases with thinner posterior stromal beds. Ablations on the undersurface of a LASIK flap could previously only be performed without the guidance of an eye tracker, which may lead to decentration. A new method for laser ablation with flying-spot lasers on the undersurface of a LASIK flap was developed that enables the use of an active eye tracker by utilizing a novel instrument. The first clinical results are reported. Patients wishing an enhancement procedure were eligible for a modified repeat LASIK procedure if the flaps cut in the initial procedure were thick enough to perform the intended additional ablation on the undersurface while leaving at least 90 µm of flap thickness behind. (1) The horizontal axis and the center of the entrance pupil were marked on the epithelial side of the flap using gentian violet dye. (2) The flap was reflected on a newly designed flap holder, which had a donut-shaped black marking. (3) The eye tracker was centered on the mark visible in transparency on the flap. (4) Ablation with a flying-spot Bausch & Lomb Technolas 217z laser was performed on the undersurface of the flap with a superior hinge, taking into account that in astigmatic ablations the cylinder axis had to be mirrored according to the formula: axis on the undersurface = 180° - axis on the stromal bed. (5) The flap was repositioned. Detection of the marking on the modified flap holder, and continuous tracking of it instead of the real pupil, was possible in all of the 12 eyes treated with this technique. It may be necessary to cover the real pupil during ablation in order not to confuse the eye tracker. Ablation could be performed without decentration or loss of best spectacle-corrected visual acuity. Refractive results in minor corrections were good without nomogram adjustment. Using this novel flap holder with a marking that is tracked instead of the real pupil, centered ablations with a flying-spot laser…

  16. Advanced Instrumentation and Control Methods for Small and Medium Reactors with IRIS Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    J. Wesley Hines; Belle R. Upadhyaya; J. Michael Doster; Robert M. Edwards; Kenneth D. Lewis; Paul Turinsky; Jamie Coble

    2011-05-31

    Development and deployment of small-scale nuclear power reactors and their maintenance, monitoring, and control are part of the mission under the Small Modular Reactor (SMR) program. The objectives of this NERI-consortium research project are to investigate, develop, and validate advanced methods for sensing, controlling, monitoring, diagnosis, and prognosis of these reactors, and to demonstrate the methods with application to one of the proposed integral pressurized water reactors (IPWR). For this project, the IPWR design by Westinghouse, the International Reactor Secure and Innovative (IRIS), has been used to demonstrate the techniques developed under this project. The research focuses on three topical areas with the following objectives. Objective 1 - Develop and apply simulation capabilities and sensitivity/uncertainty analysis methods to address sensor deployment analysis and small grid stability issues. Objective 2 - Develop and test an autonomous and fault-tolerant control architecture and apply to the IRIS system and an experimental flow control loop, with extensions to multiple reactor modules, nuclear desalination, and optimal sensor placement strategy. Objective 3 - Develop and test an integrated monitoring, diagnosis, and prognosis system for SMRs using the IRIS as a test platform, and integrate process and equipment monitoring (PEM) and process and equipment prognostics (PEP) toolboxes. The research tasks are focused on meeting the unique needs of reactors that may be deployed to remote locations or to developing countries with limited support infrastructure. These applications will require smaller, robust reactor designs with advanced technologies for sensors, instrumentation, and control. An excellent overview of SMRs is described in an article by Ingersoll (2009). The article refers to these as deliberately small reactors. Most of these have modular characteristics, with multiple units deployed at the same plant site. Additionally, the topics focus

  17. Resistance Torque Based Variable Duty-Cycle Control Method for a Stage II Compressor

    Science.gov (United States)

    Zhong, Meipeng; Zheng, Shuiying

    2017-07-01

    The resistance torque of a piston stage II compressor fluctuates strongly over a rotational period, and this can degrade the working performance of the compressor. To restrain these fluctuations, a variable duty-cycle control method based on the resistance torque is proposed. A dynamic model of a stage II compressor is set up, and the resistance torque and other characteristic parameters are acquired as the control targets. A variable duty-cycle control method is then applied to track the resistance torque, thereby improving the working performance of the compressor. Simulated results show that the compressor, driven by the proposed method, requires lower current, while the rotating speed and the output torque remain comparable to those of traditional variable-frequency control methods. A variable duty-cycle control system was developed, and the experimental results prove that the proposed method can help reduce the specific power, input power, and working noise of the compressor to 0.97 kW·m⁻³·min⁻¹, 0.09 kW and 3.10 dB, respectively, under the same conditions of a discharge pressure of 2.00 MPa and a discharge volume of 0.095 m³/min. The proposed variable duty-cycle control method tracks the resistance torque dynamically, improves the working performance of the stage II compressor, and can be applied to other compressors, providing theoretical guidance for compressor control.

  18. A survey of variable selection methods in two Chinese epidemiology journals

    Directory of Open Access Journals (Sweden)

    Lynn Henry S

    2010-09-01

    Full Text Available Abstract Background Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis, e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals.

  19. Quantification and variability in colonic volume with a novel magnetic resonance imaging method

    DEFF Research Database (Denmark)

    Nilsson, M; Sandberg, Thomas Holm; Poulsen, Jakob Lykke

    2015-01-01

    Background: Segmental distribution of colorectal volume is relevant in a number of diseases, but clinical and experimental use demands robust reliability and validity. Using a novel semi-automatic magnetic resonance imaging-based technique, the aims of this study were to describe: (i) inter-individual and intra-individual variability of segmental colorectal volumes between two observations in healthy subjects and (ii) the change in segmental colorectal volume distribution before and after defecation. Methods: The inter-individual and intra-individual variability of four colorectal volumes (cecum …) … (p = 0.02). Conclusions & Inferences: Imaging of segmental colorectal volume, morphology, and fecal accumulation is advantageous compared to conventional methods in its low variability, high spatial resolution, and its absence of contrast-enhancing agents and irradiation. Hence, the method is suitable…

  20. An Instrumental Variable Probit (IVP) analysis on depressed mood in Korea: the impact of gender differences and other socio-economic factors.

    Science.gov (United States)

    Gitto, Lara; Noh, Yong-Hwan; Andrés, Antonio Rodríguez

    2015-04-16

    Depression is a mental health state whose frequency has been increasing in modern societies. It imposes a great burden because of its strong impact on people's quality of life and happiness. Depression can be reliably diagnosed and treated in primary care: if more people could get effective treatments earlier, the costs related to depression would be reduced. The aim of this study was to examine the influence of socio-economic factors and gender on depressed mood, focusing on Korea. In spite of the great number of empirical studies carried out for other countries, few epidemiological studies have examined the socio-economic determinants of depression in Korea, and those were either limited to samples of employed women or did not control for individual health status. Moreover, as likely data endogeneity (i.e. the possibility of correlation between the dependent variable and the error term as a result of autocorrelation or simultaneity; here, depressed mood may be due to health factors that are in turn caused by depression) might bias the results, the present study proposes an empirical approach, based on instrumental variables, to deal with this problem. Data for the year 2008 from the Korea National Health and Nutrition Examination Survey (KNHANES) were employed. About seven thousand people (N = 6,751; 43% male and 57% female), aged from 19 to 75 years, were included in the sample considered in the analysis. To take into account the possible endogeneity of some explanatory variables, two Instrumental Variables Probit (IVP) regressions were estimated; the variables for which instrumental equations were estimated were related to the participation of women in the workforce and to good health, as reported by people in the sample. Explanatory variables were related to age, gender, family factors (such as the number of family members and marital status) and socio-economic factors (such as education…
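
    A rough control-function (two-stage residual inclusion) analogue of an IV probit on simulated data gives the flavor of the approach. The variable names and data are invented placeholders, not the KNHANES fields, and this is a sketch of the general technique rather than the authors' exact estimator:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 5000
        z = rng.normal(size=n)                 # instrument
        u = rng.normal(size=n)                 # unobserved confounder
        health = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor
        latent = -0.5 + 1.0 * health - 0.7 * u + rng.normal(size=n)
        depressed = (latent > 0).astype(int)

        # Stage 1: regress the endogenous variable on the instrument.
        stage1 = sm.OLS(health, sm.add_constant(z)).fit()
        resid = stage1.resid

        # Stage 2: probit including the first-stage residual (control function).
        X = sm.add_constant(np.column_stack([health, resid]))
        probit = sm.Probit(depressed, X).fit(disp=0)
        print(probit.params)   # coefficient on health, corrected for endogeneity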

  1. Optimization of instrumental neutron activation analysis method by means of 2k experimental design technique aiming the validation of analytical procedures

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2013-01-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods to be adopted in the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
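
    The 2^k design estimates each variable's contribution from runs at coded low/high levels; a minimal sketch of effect estimation for the three variables named above, with invented responses (not the mussel-tissue data):

        import numpy as np
        from itertools import product

        # Coded levels (-1/+1) for decay time, counting time, detector distance.
        runs = np.array(list(product([-1, 1], repeat=3)), dtype=float)
        y = np.array([10.2, 10.8, 10.1, 11.0, 9.9, 10.7, 10.0, 11.1])  # responses

        # For a balanced +/-1 design, effect = 2 * mean(y * coded column).
        for j, name in enumerate(["decay", "count", "dist"]):
            print(f"main effect {name}: {np.mean(y * runs[:, j]) * 2:.3f}")
        # Two-factor interaction example: decay time x counting time.
        print("decay*count:", round(np.mean(y * runs[:, 0] * runs[:, 1]) * 2, 3))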

  2. Optimization of instrumental neutron activation analysis method by means of 2^k experimental design technique aiming the validation of analytical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: rpetroni@ipen.br, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods to be adopted in the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)

  3. Unit-specific calibration of Actigraph accelerometers in a mechanical setup - is it worth the effort? The effect on random output variation caused by technical inter-instrument variability in the laboratory and in the field

    DEFF Research Database (Denmark)

    Moeller, Niels C; Korsholm, Lars; Kristensen, Peter L

    2008-01-01

    BACKGROUND: Potentially, unit-specific in-vitro calibration of accelerometers could increase field data quality and study power. However, reduced inter-unit variability would only be important if random instrument variability contributes considerably to the total variation in field data. Therefor...

  4. A sizing method for stand-alone PV installations with variable demand

    Energy Technology Data Exchange (ETDEWEB)

    Posadillo, R. [Grupo de Investigacion en Energias y Recursos Renovables, Dpto. de Fisica Aplicada, E.P.S., Universidad de Cordoba, Avda. Menendez Pidal s/n, 14004 Cordoba (Spain); Lopez Luque, R. [Grupo de Investigacion de Fisica Para las Energias y Recursos Renovables, Dpto. de Fisica Aplicada, Edificio C2 Campus de Rabanales, 14071 Cordoba (Spain)

    2008-05-15

    The practical applicability of the considerations made in a previous paper to characterize energy balances in stand-alone photovoltaic systems (SAPV) is presented. Given that energy balances were characterized based on monthly estimations, the method is appropriate for sizing installations with variable monthly demands and variable monthly panel tilt (for seasonal estimations). The method presented is original in that it is the only method proposed for this type of demand. The method is based on the rational utilization of daily solar radiation distribution functions. When exact mathematical expressions are not available, approximate empirical expressions can be used. The more precise the statistical characterization of the solar radiation on the receiver module, the more precise the sizing method, given that the characterization will solely depend on the distribution function of the daily global irradiation on the tilted surface, H_gβi. This method, like previous ones, uses the concept of loss of load probability (LLP) as a parameter to characterize system design and includes information on the standard deviation of this parameter (σ_LLP) as well as two new parameters: annual number of system failures (f) and the standard deviation of the annual number of system failures (σ_f). This paper therefore provides an analytical method for evaluating and sizing stand-alone PV systems with variable monthly demand and panel inclination. The sizing method has also been applied in a practical manner. (author)

  5. Supermathematics and its applications in statistical physics Grassmann variables and the method of supersymmetry

    CERN Document Server

    Wegner, Franz

    2016-01-01

    This text presents the mathematical concepts of Grassmann variables and the method of supersymmetry to a broad audience of physicists interested in applying these tools to disordered and critical systems, as well as related topics in statistical physics. Based on many courses and seminars held by the author, one of the pioneers in this field, the reader is given a systematic and tutorial introduction to the subject matter. The algebra and analysis of Grassmann variables is presented in part I. The mathematics of these variables is applied to a random matrix model, path integrals for fermions, dimer models and the Ising model in two dimensions. Supermathematics - the use of commuting and anticommuting variables on an equal footing - is the subject of part II. The properties of supervectors and supermatrices, which contain both commuting and Grassmann components, are treated in great detail, including the derivation of integral theorems. In part III, supersymmetric physical models are considered. While supersym...

  6. KEELE, Minimization of Nonlinear Function with Linear Constraints, Variable Metric Method

    International Nuclear Information System (INIS)

    Westley, G.W.

    1975-01-01

    1 - Description of problem or function: KEELE is a linearly constrained nonlinear programming algorithm for locating a local minimum of a function of n variables with the variables subject to linear equality and/or inequality constraints. 2 - Method of solution: A variable metric procedure is used where the direction of search at each iteration is obtained by multiplying the negative of the gradient vector by a positive definite matrix which approximates the inverse of the matrix of second partial derivatives associated with the function. 3 - Restrictions on the complexity of the problem: Array dimensions limit the number of variables to 20 and the number of constraints to 50. These can be changed by the user
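
    A bare-bones illustration of the variable metric idea: the search direction is the negative gradient premultiplied by an inverse-Hessian approximation updated by a BFGS-type formula. The quadratic test problem and the fixed step length are stand-ins, and KEELE's linear-constraint handling is not reproduced:

        import numpy as np

        A = np.array([[3.0, 0.5], [0.5, 2.0]])
        b = np.array([1.0, 1.0])

        def grad(x):                      # gradient of f(x) = 0.5 x'Ax - b'x
            return A @ x - b

        x = np.zeros(2)
        H = np.eye(2)                     # inverse-Hessian approximation

        for _ in range(20):
            g = grad(x)
            d = -H @ g                    # variable metric search direction
            s = 0.5 * d                   # fixed step in place of a line search
            x_new = x + s
            yv = grad(x_new) - g
            rho = 1.0 / (yv @ s)          # positive for a convex quadratic
            I = np.eye(2)
            H = (I - rho * np.outer(s, yv)) @ H @ (I - rho * np.outer(yv, s)) \
                + rho * np.outer(s, s)    # BFGS update of the inverse Hessian
            x = x_new

        print("minimizer approx:", x, "exact:", np.linalg.solve(A, b))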

  7. Methods and Models of Market Risk Stress-Testing of the Portfolio of Financial Instruments

    Directory of Open Access Journals (Sweden)

    Alexander M. Karminsky

    2015-01-01

    Full Text Available Amid instability of financial markets and the macroeconomic situation, the need to improve bank risk-management instruments arises. The new economic reality defines the need to search for more advanced approaches to estimating banks' vulnerability to exceptional but plausible events. Stress-testing belongs to such instruments. The paper reviews and compares models of market risk stress-testing of portfolios of different financial instruments. The topic is highly acute today because stress-testing is becoming an integral part of anti-crisis risk-management amid macroeconomic instability and the appearance of new risks, together with close interest in the problem of risk aggregation. The paper outlines the notion of stress-testing and covers the goals and functions of stress-tests and the main criteria for classifying market risk stress-tests. The paper also stresses special aspects of scenario analysis. The novelty of the research lies in elaborating a programme of aggregated complex multifactor stress-testing of portfolio risk based on scenario analysis. The paper highlights modern Russian and foreign models of stress-testing, both on a solo basis and complex. The paper lays emphasis on the results of stress-testing and revaluations of positions for all three complex models: the Central Bank's methodology for stress-testing portfolio risk, a model relying on correlation analysis, and a copula model. The models of stress-testing on a solo basis are different for each financial instrument: a parametric StressVaR model is applicable to stress-testing shares and options; a model based on the 'Greek' indicators is used for options; and for Eurobonds a regional factor model is used. Finally, some theoretical recommendations on managing the market risk of the portfolio are given.

  8. Hydro-ball in-core instrumentation system and method of operation

    International Nuclear Information System (INIS)

    Tower, S.N.; Veronesi, L.; Braun, H.E.

    1990-01-01

    This patent describes an instrumentation system for a pressure vessel of a nuclear reactor, the vessel having an outer enclosure defined by a generally cylindrical sidewall with a generally vertical central axis and upper and lower edges, and top and bottom heads secured in sealed relationship to the upper and lower edges, respectively, of the cylindrical sidewall, the vessel enclosing therein a core including elongated fuel element assemblies mounted in parallel axial relationship

  9. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, which is based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without artificial processes. To illustrate the feasibility and effectiveness of the method, a comparison with genetic algorithm (GA) and successive projections algorithm (SPA) for different elements (copper, barium and chromium) detection in soil was implemented. The experimental results showed that all the three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (12 s approximately for 40,000 initial variables) than the others. Moreover, improved quantification models were got with variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, which showed comparable prediction effect with GA and SPA.

  10. The Leech method for diagnosing constipation: intra- and interobserver variability and accuracy

    International Nuclear Information System (INIS)

    Lorijn, Fleur de; Voskuijl, Wieger P.; Taminiau, Jan A.; Benninga, Marc A.; Rijn, Rick R. van; Henneman, Onno D.F.; Heijmans, Jarom; Reitsma, Johannes B.

    2006-01-01

    The data concerning the value of a plain abdominal radiograph in childhood constipation are inconsistent. Recently, positive results have been reported for a new radiographic scoring system, ''the Leech method'', for assessing faecal loading. To assess the intra- and interobserver variability and determine the diagnostic accuracy of the Leech method in identifying children with functional constipation (FC). A total of 89 children (median age 9.8 years) with functional gastrointestinal disorders were included in the study. Based on clinical parameters, 52 fulfilled the criteria for FC, six fulfilled the criteria for functional abdominal pain (FAP), and 31 for functional non-retentive faecal incontinence (FNRFI); the latter two groups provided the controls. To assess the intra- and interobserver variability of the Leech method, three scorers scored the same abdominal radiograph twice. A Leech score of 9 or more was considered suggestive of constipation. ROC analysis was used to determine the diagnostic accuracy of the Leech method in separating patients with FC from control patients. Significant intraobserver variability was found for two scorers (P=0.005 and P<0.0001), whereas there was no systematic difference between the two scores of the third scorer (P=0.89). The scores between scorers differed systematically and displayed large variability. The area under the ROC curve was 0.68 (95% CI 0.58-0.80), indicating poor diagnostic accuracy. The Leech scoring method for assessing faecal loading on a plain abdominal radiograph is of limited value in the diagnosis of FC in children. (orig.)

  11. Frame, methods and instruments for energy planning in the new economic order of electricity economics

    International Nuclear Information System (INIS)

    Stigler, H.

    1999-01-01

    The introduction of the new economic order of the electricity economy creates new core tasks for the individual market participants and therefore new requirements for planning. As preconditions for energy planning, the Internal Market Electricity Directive and the ElWOG are examined and the tasks for the market participants are derived. Liberalization raises the risks for the enterprises, and increasing competition places higher demands on planning. The planning instruments no longer aim at minimum cost but must maximize the results of the enterprise. Pricing requires closer alignment with marginal-cost considerations. Growing electricity trade requires the introduction of new planning instruments. Further new tasks concern electricity transfer via third-party networks and especially congestion management. New chances but also new risks arise for renewable energy sources. From the market follow new requirements for the planning instruments. The fundamentals for this are prepared and concrete examples from practice are presented. Models of enterprises are developed which consist of a technical and a business part. Central importance attaches to the modeling of competition in the liberalized market; a model of competition between enterprises in the electricity market is developed. (author)

  12. The method to Certify Performance of Long-Lived In-Core Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Kyung-ho; Cha, Kyoon-ho; Moon, Sang-rae [KHNP CRI, Daejeon (Korea, Republic of)

    2015-10-15

    The Rh ICI (In-Core Instrumentation) used in OPR1000 generates a relatively large signal, but its lifetime is below 6 years. An Rh ICI consists of five detectors of the SPND (Self-Powered Neutron Detector) type, a couple of thermocouples, one background wire and several fillers. The short lifetime of the Rh detector, together with the long-cycle operation strategy, increases procurement costs, aggravates space shortages in the spent fuel pool, and exposes operators to more radiation. KHNP (Korea Hydro and Nuclear Power Co., Ltd.) CRI (Central Research Institute) is developing the LLICI (Long-Lived In-Core Instrumentation), a vanadium-based SPND-type detector with a lifetime of about 10 years, to solve these problems.

  13. Method for controlling a coolant liquid surface of cooling system instruments in an atomic power plant

    International Nuclear Information System (INIS)

    Monta, Kazuo.

    1974-01-01

    Object: To prevent the coolant inventory within a cooling system loop of an atomic power plant from varying with load, thereby relaxing the restriction that a falling liquid level due to coolant shortage imposes on the rate of change of the coolant flow rate. Structure: Instruments such as a superheater, an evaporator, and the like, which constitute a cooling system loop in an atomic power plant, have a plurality of free coolant liquid surfaces. The portions whose liquid level is controlled and the portions whose liquid level varies are adjusted in cross-sectional area so that the total variation in coolant inventory in an instrument such as a superheater, provided with an annulus portion in its center and an inner cylindrical portion and a downcomer at its side, equals the total variation in coolant inventory in an instrument such as an evaporator, similar to the superheater, which is provided with an overflow pipe in its inner cylindrical portion or downcomer. This minimizes the variation in total coolant inventory with load and thus the variation in the rate of change of the coolant flow. (Kamimura, M.)

  14. A method based on a separation of variables in magnetohydrodynamics (MHD)

    International Nuclear Information System (INIS)

    Cessenat, M.; Genta, P.

    1996-01-01

    We use a method based on a separation of variables for solving a system of first order partial differential equations, in a very simple modelling of MHD. The method consists in introducing three unknown variables φ1, φ2, φ3 in addition to the time variable τ, and then searching for a solution which is separated with respect to φ1 and τ only. This is allowed by a very simple relation, called a 'metric separation equation', which governs the type of solutions with respect to time. The families of solutions for the system of equations thus obtained correspond to a radial evolution of the fluid. Solving the MHD equations is then reduced to finding the transverse component H_Σ of the magnetic field on the unit sphere Σ by solving a non-linear partial differential equation on Σ. We thus generalize ideas due to Courant-Friedrichs and to Sedov on dimensional analysis and self-similar solutions. (authors)

  15. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    Science.gov (United States)

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

    In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and using a variable splitting strategy to find the search direction more efficiently, the method achieves fast and stable source reconstruction, even without a priori information on the permissible source region and without multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.

  16. Approaches for developing a sizing method for stand-alone PV systems with variable demand

    Energy Technology Data Exchange (ETDEWEB)

    Posadillo, R. [Grupo de Investigacion en Energias y Recursos Renovables, Dpto. de Fisica Aplicada, E.P.S., Universidad de Cordoba, Avda. Menendez Pidal s/n, 14004 Cordoba (Spain); Lopez Luque, R. [Grupo de Investigacion de Fisica para las Energias y Recursos Renovables, Dpto. de Fisica Aplicada. Edificio C2 Campus de Rabanales, 14071 Cordoba (Spain)

    2008-05-15

    Accurate sizing is one of the most important aspects to take into consideration when designing a stand-alone photovoltaic system (SAPV). Various methods, which differ in terms of their simplicity or reliability, have been developed for this purpose. Analytical methods, which seek functional relationships between variables of interest to the sizing problem, are one of these approaches. A series of rational considerations are presented in this paper with the aim of shedding light upon the basic principles and results of various sizing methods proposed by different authors. These considerations set the basis for a new analytical method designed for systems with variable monthly energy demands. Following previous approaches, the proposed method is based on the concept of loss of load probability (LLP), a parameter used to characterize system design. The method includes information on the standard deviation of loss of load probability (σ_LLP) and on two new parameters: annual number of system failures (f) and standard deviation of the annual number of failures (σ_f). The method proves useful for sizing a PV system in a reliable manner and serves to explain the discrepancies found in the research on systems with LLP < 10^-2. We demonstrate that reliability depends not only on the sizing variables and on the distribution function of solar radiation, but also on the minimum value that the total solar radiation on the receiver surface reaches in a given location with a given monthly average clearness index. (author)
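
    A crude Monte-Carlo illustration of the LLP concept for a system with monthly-varying demand; the radiation model, component sizes and efficiencies are illustrative assumptions, not the paper's analytical method:

        import numpy as np

        rng = np.random.default_rng(3)
        days = 365 * 20
        month = (np.arange(days) // 30) % 12
        mean_rad = 3.0 + 2.5 * np.sin(2 * np.pi * month / 12)    # kWh/m2/day
        radiation = rng.gamma(shape=8.0, scale=mean_rad / 8.0)   # daily samples

        area, eff = 12.0, 0.15                    # panel area (m2), efficiency
        demand = 5.0 + 1.5 * np.cos(2 * np.pi * month / 12)      # kWh/day
        battery_cap, soc = 20.0, 10.0             # kWh

        failures = 0
        for r, d in zip(radiation, demand):
            soc = min(battery_cap, soc + area * eff * r) - d
            if soc < 0:                           # demand not met -> loss of load
                failures += 1
                soc = 0.0
        print("LLP =", failures / days)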

  17. Instrumenting an upland research catchment in Canterbury, New Zealand to study controls on variability of soil moisture, shallow groundwater and streamflow

    Science.gov (United States)

    McMillan, Hilary; Srinivasan, Ms

    2015-04-01

    Hydrologists recognise the importance of vertical drainage and deep flow paths in runoff generation, even in headwater catchments. Both soil and groundwater stores are highly variable over multiple scales, and the distribution of water has a strong control on flow rates and timing. In this study, we instrumented an upland headwater catchment in New Zealand to measure the temporal and spatial variation in unsaturated and saturated-zone responses. In NZ, upland catchments are the source of much of the water used in lowland agriculture, but the hydrology of such catchments and their role in water partitioning, storage and transport is poorly understood. The study area is the Langs Gully catchment in the North Branch of the Waipara River, Canterbury: this catchment was chosen to be representative of the foothills environment, with lightly managed dryland pasture and native Matagouri shrub vegetation cover. Over a period of 16 months we measured continuous soil moisture at 32 locations and the near-surface water table, contrasting near-stream versus hillslope locations and convergent versus divergent hillslopes. We found that temporal variability is strongly controlled by the climatic seasonal cycle, for both soil moisture and water table, and for both the mean and extremes of their distributions. Groundwater is a larger water storage component than soil moisture, and the difference increases with catchment wetness. The spatial standard deviation of both soil moisture and groundwater is larger in winter than in summer. It peaks during rainfall events due to partial saturation of the catchment, and also rises in spring as different locations dry out at different rates. The most important controls on spatial variability are aspect and distance from the stream: south-facing and near-stream locations have higher water tables and more, larger soil moisture wetting events. Typical hydrological models do not explicitly account for aspect, but our results suggest that it is an important factor in hillslope…

  18. Comparison of real-time instruments and gravimetric method when measuring particulate matter in a residential building.

    Science.gov (United States)

    Wang, Zuocheng; Calderón, Leonardo; Patton, Allison P; Sorensen Allacci, MaryAnn; Senick, Jennifer; Wener, Richard; Andrews, Clinton J; Mainelis, Gediminas

    2016-11-01

    This study used several real-time and filter-based aerosol instruments to measure PM2.5 levels in a high-rise residential green building in the Northeastern US and compared the performance of those instruments. PM2.5 24-hr average concentrations were determined using a Personal Modular Impactor (PMI) with a 2.5 µm cut (SKC Inc., Eighty Four, PA) and a direct-reading pDR-1500 (Thermo Scientific, Franklin, MA) as well as its filter. 1-hr average PM2.5 concentrations were measured in the same apartments with an Aerotrak Optical Particle Counter (OPC) (model 8220, TSI, Inc., Shoreview, MN) and a DustTrak DRX mass monitor (model 8534, TSI, Inc., Shoreview, MN). OPC and DRX measurements were compared with concurrent 1-hr mass concentrations from the pDR-1500. The pDR-1500 direct reading showed approximately 40% higher particle mass concentration compared to its own filter (n = 41), and 25% higher PM2.5 mass concentration compared to the PMI2.5 filter. The pDR-1500 direct reading and PMI2.5 in non-smoking homes (self-reported) were not significantly different (n = 10, R² = 0.937), while the difference between measurements for smoking homes was 44% (n = 31, R² = 0.773). Both OPC and DRX data had substantial and significant systematic and proportional biases compared with pDR-1500 readings. However, these methods were highly correlated: R² = 0.936 for OPC versus pDR-1500 reading and R² = 0.863 for DRX versus pDR-1500 reading. The data suggest that the accuracy of aerosol mass concentrations from direct-reading instruments in indoor environments depends on the instrument, and that correction factors can be used to reduce biases of these real-time monitors in residential green buildings with similar aerosol properties. This study used several real-time and filter-based aerosol instruments to measure PM2.5 levels in a high-rise residential green building in the northeastern United States and compared the performance of those instruments. The data show that while the use of real…

  19. A fast collocation method for a variable-coefficient nonlocal diffusion model

    Science.gov (United States)

    Wang, Che; Wang, Hong

    2017-02-01

    We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N^3) required by a commonly used direct solver to O(N log N) per iteration and the memory requirement from O(N^2) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N^2) to O(N). Numerical results are presented to show the utility of the fast method.
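
    The abstract does not spell out the matrix structure being exploited; assuming a Toeplitz-type structure (the usual source of FFT-based O(N log N) matvecs in collocation schemes for nonlocal operators), the key trick looks like this:

        import numpy as np
        from scipy.linalg import toeplitz

        def toeplitz_matvec(col, row, x):
            """O(N log N) product of a Toeplitz matrix (first column `col`,
            first row `row`) with x, via embedding in a 2N circulant."""
            n = len(x)
            c = np.concatenate([col, [0.0], row[-1:0:-1]])  # circulant's 1st column
            y = np.fft.ifft(np.fft.fft(c) *
                            np.fft.fft(np.concatenate([x, np.zeros(n)])))
            return y[:n].real

        n = 8
        col = 1.0 / (1.0 + np.arange(n))   # decaying off-diagonal kernel values
        row = col.copy()
        x = np.random.default_rng(0).random(n)
        assert np.allclose(toeplitz(col, row) @ x, toeplitz_matvec(col, row, x))
        print("fast matvec matches the dense product")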

  20. The application of variable sampling method in the audit testing of insurance companies' premium income

    Directory of Open Access Journals (Sweden)

    Jovković Biljana

    2012-12-01

    Full Text Available The aim of this paper is to present the procedure of audit sampling using variable sampling methods for conducting tests of income from insurance premiums in the insurance company 'Takovo'. Since income from vehicle insurance (VI) and third-party vehicle insurance (TPVI) premiums has the dominant share of the insurance company's income, the application of this method is shown in the audit examination of these incomes - incomes from VI and TPVI premiums. To investigate the applicability of these methods in testing the income of other insurance companies, we also implement the method of variable sampling in the audit testing of the premium income of the three leading insurance companies in Serbia: 'Dunav', 'DDOR' and 'Delta Generali' Insurance.

  1. A Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) for the Surface of Mars: An Instrument for the Planetary Science Community

    Science.gov (United States)

    Edmunson, J.; Gaskin, J. A.; Danilatos, G.; Doloboff, I. J.; Effinger, M. R.; Harvey, R. P.; Jerman, G. A.; Klein-Schoder, R.; Mackie, W.; Magera, B.

    2016-01-01

    The Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) project, funded by the NASA Planetary Instrument Concepts for the Advancement of Solar System Observations (PICASSO) Research Opportunities in Space and Earth Science (ROSES) program, will build upon previous miniaturized SEM designs for lunar and International Space Station (ISS) applications and recent advancements in variable pressure SEMs to design and build a SEM that can analyze samples on the surface of Mars using the atmosphere as an imaging medium. By the end of the PICASSO work, a prototype of the primary proof-of-concept components (i.e., the electron gun, focusing optics and scanning system) will be assembled, and preliminary testing in a Mars analog chamber at the Jet Propulsion Laboratory will be completed to partially fulfill Technology Readiness Level 5 requirements for those components. The team plans to have Secondary Electron Imaging (SEI), Backscattered Electron (BSE) detection, and Energy Dispersive Spectroscopy (EDS) capabilities in the MVP-SEM.

  2. A method to standardize gait and balance variables for gait velocity.

    NARCIS (Netherlands)

    Iersel, M.B. van; Olde Rikkert, M.G.M.; Borm, G.F.

    2007-01-01

    Many gait and balance variables depend on gait velocity, which seriously hinders the interpretation of gait and balance data derived from walks at different velocities. However, as far as we know, there is no widely accepted method to correct for effects of gait velocity on other gait and balance variables.

  3. Modified quasi-boundary value method for Cauchy problems of elliptic equations with variable coefficients

    Directory of Open Access Journals (Sweden)

    Hongwu Zhang

    2011-08-01

    In this article, we study a Cauchy problem for an elliptic equation with variable coefficients. It is well known that such a problem is severely ill-posed; i.e., the solution does not depend continuously on the Cauchy data. We propose a modified quasi-boundary value regularization method to solve it. Convergence estimates are established under two a priori assumptions on the exact solution. A numerical example is given to illustrate our proposed method.

  4. Standard test method for verifying the alignment of X-Ray diffraction instrumentation for residual stress measurement

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the preparation and use of a flat stress-free test specimen for the purpose of checking the systematic error caused by instrument misalignment or sample positioning in X-ray diffraction residual stress measurement, or both. 1.2 This test method is applicable to apparatus intended for X-ray diffraction macroscopic residual stress measurement in polycrystalline samples employing measurement of a diffraction peak position in the high-back reflection region, and in which the θ, 2θ, and ψ rotation axes can be made to coincide (see Fig. 1). 1.3 This test method describes the use of iron powder which has been investigated in round-robin studies for the purpose of verifying the alignment of instrumentation intended for stress measurement in ferritic or martensitic steels. To verify instrument alignment prior to stress measurement in other metallic alloys and ceramics, powder having the same or lower diffraction angle as the material to be measured should be prepared in similar fashion...

  5. Improved methods for signal processing in measurements of mercury by Tekran® 2537A and 2537B instruments

    Science.gov (United States)

    Ambrose, Jesse L.

    2017-12-01

    Atmospheric Hg measurements are commonly carried out using Tekran® Instruments Corporation's model 2537 Hg vapor analyzers, which employ gold amalgamation preconcentration sampling and detection by thermal desorption (TD) and atomic fluorescence spectrometry (AFS). A generally overlooked and poorly characterized source of analytical uncertainty in those measurements is the method by which the raw Hg atomic fluorescence (AF) signal is processed. Here I describe new software-based methods for processing the raw signal from the Tekran® 2537 instruments, and I evaluate the performances of those methods together with the standard Tekran® internal signal processing method. For test datasets from two Tekran® instruments (one 2537A and one 2537B), I estimate that signal processing uncertainties in Hg loadings determined with the Tekran® method are within ±[1 % + 1.2 pg] and ±[6 % + 0.21 pg], respectively. I demonstrate that the Tekran® method can produce significant low biases (≥ 5 %) not only at low Hg sample loadings (< 5 pg) but also at tropospheric background concentrations of gaseous elemental mercury (GEM) and total mercury (THg) (~1 to 2 ng m⁻³) under typical operating conditions (sample loadings of 5-10 pg). Signal processing uncertainties associated with the Tekran® method can therefore represent a significant unaccounted-for addition to the overall ~10 to 15 % uncertainty previously estimated for Tekran®-based GEM and THg measurements. Signal processing bias can also add significantly to uncertainties in Tekran®-based gaseous oxidized mercury (GOM) and particle-bound mercury (PBM) measurements, which often derive from Hg sample loadings < 5 pg. In comparison, estimated signal processing uncertainties associated with the new methods described herein are low, ranging from within ±0.053 pg, when the Hg thermal desorption peaks are defined manually, to within ±[2 % + 0.080 pg] when peak definition is automated. Mercury limits of detection (LODs
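
    As a rough illustration of what such software-based signal processing involves (a generic sketch, not the author's algorithm or Tekran®'s proprietary method), a thermal-desorption peak can be integrated after subtracting a baseline fitted to the signal on either side of the peak:

      import numpy as np

      def integrate_desorption_peak(t, signal, peak_window, baseline_windows):
          # Baseline-subtracted trapezoidal integration of a desorption peak.
          base_mask = np.zeros_like(t, dtype=bool)
          for lo, hi in baseline_windows:
              base_mask |= (t >= lo) & (t <= hi)
          # Fit a linear baseline through the signal before and after the peak.
          slope, intercept = np.polyfit(t[base_mask], signal[base_mask], 1)
          peak_mask = (t >= peak_window[0]) & (t <= peak_window[1])
          corrected = signal[peak_mask] - (slope * t[peak_mask] + intercept)
          return np.trapz(corrected, t[peak_mask])   # area in (signal units) * s

      # Synthetic example: Gaussian peak riding on a drifting baseline.
      t = np.linspace(0.0, 60.0, 601)
      signal = 0.02 * t + 5.0 * np.exp(-0.5 * ((t - 30.0) / 3.0) ** 2)
      area = integrate_desorption_peak(t, signal, (20.0, 40.0), [(0.0, 15.0), (45.0, 60.0)])
      print(f"peak area: {area:.2f}")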

  6. Instrument and method for X-ray diffraction, fluorescence, and crystal texture analysis without sample preparation

    Science.gov (United States)

    Gendreau, Keith (Inventor); Martins, Jose Vanderlei (Inventor); Arzoumanian, Zaven (Inventor)

    2010-01-01

    An X-ray diffraction and X-ray fluorescence instrument for analyzing samples with no sample preparation includes an X-ray source configured to output a collimated X-ray beam comprising a continuum spectrum of X-rays to a predetermined coordinate, and a photon-counting X-ray imaging spectrometer disposed to receive X-rays output from an unprepared sample disposed at the predetermined coordinate upon exposure of the unprepared sample to the collimated X-ray beam. The X-ray source and the photon-counting X-ray imaging spectrometer are arranged in a reflection geometry relative to the predetermined coordinate.

  7. Cultural Heritage Digitalization on Traditional Sundanese Music Instrument Using Augmented Reality Markerless Marker Method

    Directory of Open Access Journals (Sweden)

    Budi Arifitama

    2017-07-01

    Research into cultural heritage that implements augmented reality technology is limited. Most recent research on cultural heritage is confined to storing data and information in databases, which is a disadvantage for people who want to see and experience actual cultural heritage objects at the same moment. This paper proposes a solution that merges existing cultural objects with people using augmented reality technology. This technology preserves traditional instruments in the form of 3D objects that can be digitally protected. The results showed that the use of augmented reality for preserving cultural heritage would benefit people who try to protect their culture.

  8. Instrument and method for focusing X-rays, gamma rays and neutrons

    International Nuclear Information System (INIS)

    1982-01-01

    A crystal diffraction instrument is described with an improved crystalline structure having a face for receiving a beam of photons or neutrons and diffraction planar spacing along that face, with the spacing increasing progressively along the face to provide a decreasing Bragg angle and thereby an increasing usable area and acceptance angle. The increased planar spacing is provided by the use of a temperature differential across the crystalline structure, by assembling a plurality of crystalline structures with different compositions, by an individual crystalline structure with a varying composition and thereby a changing planar spacing along its face, or by combinations of these techniques. (Auth.)

  9. Laminate for use in instrument dials or hands and method of making laminate

    International Nuclear Information System (INIS)

    Westland, J.M.; Crowther, A.

    1981-01-01

    A translucent sheet of PVC has a coating e.g. of black ink or luminous material, with apertures and optionally luminous or non-luminous indicia. Behind the apertures there are tritium-activated luminous indicia or markings which are covered by an opaque white sheet. A self-adhesive protective film may be temporarily applied to the coating. The laminated structure may be used for faces or hands in time-pieces or other instruments. The use of the white sheet and protective film prevents operatives coming into contact with luminous materials. (author)

  10. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis

    2016-09-08

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.

  11. An improved method for determining the purity of jet fuels in a POZ-TU instrument

    Energy Technology Data Exchange (ETDEWEB)

    Zrelov, V N; Fedotkin, B I; Krasnaya, L V; Nikitin, L V; Postinkova, N G

    1983-01-01

    The possibility of real-time testing for the content of mechanical impurities (Cm.t.) in jet fuels (RT) with a POZ-TU instrument is studied. Based on the data obtained, a four-point scale of gray standards is developed for determining the mechanical impurity content, consisting of four round gray stamps of different intensity corresponding to mechanical impurity contents of 0.5, 1.0, 2.0 and 3.0 milligrams per liter. A white indicator filtering element is built into the POZ-TU for determining the mechanical impurity content, and 50 cubic centimeters of the jet fuel are pumped through it over the course of several seconds. The mechanical impurities are deposited on the indicator element, forming an imprint whose color intensity corresponds to the content of mechanical impurities in the jet fuel. The indicator element is extracted from the instrument and the prints are compared with the scale of gray standards, from which the content of mechanical impurities is determined.

  12. Original method to compute epipoles using variable homography: application to measure emergent fibers on textile fabrics

    Science.gov (United States)

    Xu, Jun; Cudel, Christophe; Kohler, Sophie; Fontaine, Stéphane; Haeberlé, Olivier; Klotz, Marie-Louise

    2012-04-01

    Fabric smoothness is a key factor in determining the quality of finished textile products and has great influence on the functionality of industrial textiles and high-end textile products. With the popularization of the zero-defect industrial concept, identifying and measuring defective material in the early stage of production is of great interest to the industry. In the current market, many systems are able to achieve automatic monitoring and control of fabric, paper, and nonwoven material during the entire production process; however, online measurement of hairiness is still an open topic and highly desirable for industrial applications. We propose a computer vision approach to compute epipoles using variable homography, which can be used to measure emergent fiber length on textile fabrics. The main challenges addressed in this paper are the application of variable homography to textile monitoring and measurement, as well as the accuracy of the estimated calculation. We propose that a fibrous structure can be considered as a two-layer structure, and we then show how variable homography combined with epipolar geometry can estimate the length of fiber defects. Simulations are carried out to show the effectiveness of this method. The true length of selected fibers is measured precisely using a digital optical microscope, and then the same fibers are tested by our method. Our experimental results suggest that smoothness monitoring by variable homography is an accurate and robust method of quality control for important industrial fabrics.

  13. A QSAR Study of Environmental Estrogens Based on a Novel Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Aiqian Zhang

    2012-05-01

    A large number of descriptors were employed to characterize the molecular structure of 53 natural, synthetic, and environmental chemicals which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR) model with a novel variable selection method has been proposed for the effective estrogens. The variable selection method is based on variable interaction (VSMVI) with leave-multiple-out cross-validation (LMOCV) to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD) principles for regulation of QSAR acceptability were fully considered, such as using an unambiguous multiple linear regression (MLR) algorithm to build the model, using several validation methods to assess the performance of the model, defining the applicability domain, and analyzing the outliers with the results of molecular docking. The performance of the QSAR model indicates that VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from large sets of molecular descriptors.

  14. A novel variable baseline visibility detection system and its measurement method

    Science.gov (United States)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long; Zhang, Guizhong; Yao, JianQuan

    2017-10-01

    As an important meteorological observation instrument, the visibility meter helps ensure the safety of traffic operations. However, due to optical system contamination and sampling error, the accuracy and stability of such equipment are difficult to maintain in low-visibility environments. To address this, a novel measurement instrument was designed based on multiple baselines; it essentially acts as an atmospheric transmission meter with a movable optical receiver and applies a weighted least-squares method to process the signal. Theoretical analysis and experiments in a real atmospheric environment support this technique.
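
    The underlying estimation step can be sketched as follows: with Beer-Lambert attenuation T(L) = exp(-σL) measured at several baselines L, a weighted least-squares fit of ln T against L yields the extinction coefficient σ, and visibility follows from Koschmieder's relation V = 3.912/σ. The weights and noise figures below are illustrative assumptions, not values from the paper:

      import numpy as np

      def extinction_wls(baselines, transmittance, weights):
          # Weighted least-squares fit of ln T = -sigma * L over several baselines.
          # Minimize sum w_i (y_i + sigma L_i)^2  =>  sigma = -sum(w L y) / sum(w L^2)
          L = np.asarray(baselines, dtype=float)
          y = np.log(np.asarray(transmittance, dtype=float))
          w = np.asarray(weights, dtype=float)
          return -np.sum(w * L * y) / np.sum(w * L * L)

      # Synthetic low-visibility case: sigma = 8e-3 1/m, noisier at long baselines.
      rng = np.random.default_rng(0)
      L = np.array([10.0, 20.0, 30.0, 50.0])
      T = np.exp(-8e-3 * L) * (1.0 + rng.normal(0.0, 0.01, L.size))
      sigma = extinction_wls(L, T, weights=1.0 / L)
      print(f"sigma = {sigma:.4e} 1/m, Koschmieder visibility = {3.912 / sigma:.0f} m")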

  15. A Method of MPPT Control Based on Power Variable Step-size in Photovoltaic Converter System

    Directory of Open Access Journals (Sweden)

    Xu Hui-xiang

    2016-01-01

    Given the disadvantages of traditional variable step-size MPPT algorithms, a power-based variable step-size tracking method is proposed that combines the advantages of the constant-voltage and perturb-and-observe (P&O) methods [1-3]. The control strategy corrects the voltage fluctuation caused by the perturb-and-observe method while introducing the advantage of the constant-voltage method and simplifying the circuit topology. Based on a theoretical derivation, the output power of the photovoltaic modules is used to control the duty cycle of the main switch. The method achieves stable maximum power output, effectively reduces energy loss from power fluctuation, and improves inversion efficiency [3,4]. Experimental test results based on the theoretical derivation, together with the MPPT curve of a working prototype, are given.
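
    A minimal sketch of a power-based variable step-size P&O update follows; the gain k, the step limits, and the sign convention linking duty cycle to panel voltage depend on the converter topology and are assumptions here, not the paper's design:

      def mppt_step(v, p, v_prev, p_prev, duty, k=0.02, step_max=0.05):
          # One perturb-and-observe iteration with a power-scaled step: the
          # perturbation shrinks as dP/dV -> 0 near the maximum power point.
          dp, dv = p - p_prev, v - v_prev
          if dv == 0.0:
              return duty
          step = min(k * abs(dp / dv), step_max)
          if dp / dv > 0:          # left of the MPP: raise panel voltage
              duty -= step         # assumes higher duty lowers panel voltage
          else:                    # right of the MPP: lower panel voltage
              duty += step
          return min(max(duty, 0.05), 0.95)

      # Toy P-V curve with its maximum near 16 V; P&O dithers around the MPP.
      panel_power = lambda v: max(v * (8.0 - 8.0 * (v / 21.0) ** 9), 0.0)
      duty, v_prev, p_prev = 0.5, 0.0, 0.0
      for _ in range(60):
          v = 21.0 * (1.0 - duty)      # assumed converter/panel coupling
          p = panel_power(v)
          duty, v_prev, p_prev = mppt_step(v, p, v_prev, p_prev, duty), v, p
      print(f"operating near v = {v:.1f} V, p = {p:.1f} W")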

  16. Application of a primitive variable Newton's method for the calculation of an axisymmetric laminar diffusion flame

    International Nuclear Information System (INIS)

    Xu, Yuenong; Smooke, M.D.

    1993-01-01

    In this paper we present a primitive variable Newton-based solution method with a block-line linear equation solver for the calculation of reacting flows. The present approach is compared with the stream function-vorticity Newton's method and the SIMPLER algorithm on the calculation of a system of fully elliptic equations governing an axisymmetric methane-air laminar diffusion flame. The chemical reaction is modeled by the flame sheet approximation. The numerical solution agrees well with experimental data in the major chemical species. The comparison of three sets of numerical results indicates that the stream function-vorticity solution using the approximate boundary conditions reported in the previous calculations predicts a longer flame length and a broader flame shape. With a new set of modified vorticity boundary conditions, we obtain agreement between the primitive variable and stream function-vorticity solutions. The primitive variable Newton's method converges much faster than the other two methods. Because of much less computer memory required for the block-line tridiagonal solver compared to a direct solver, the present approach makes it possible to calculate multidimensional flames with detailed reaction mechanisms. The SIMPLER algorithm shows a slow convergence rate compared to the other two methods in the present calculation

  17. Quantifying Vegetation Biophysical Variables from Imaging Spectroscopy Data: A Review on Retrieval Methods

    Science.gov (United States)

    Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose

    2018-06-01

    An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.

  18. Nuclear medicine and imaging research. Instrumentation and quantitative methods of evaluation. Progress report, January 15, 1984-January 14, 1985

    International Nuclear Information System (INIS)

    Beck, R.N.; Cooper, M.D.

    1984-09-01

    This program addresses problems involving the basic science and technology of radioactive tracer methods as they relate to nuclear medicine and imaging. The broad goal is to develop new instruments and methods for image formation, processing, quantitation and display, so as to maximize the diagnostic information per unit of absorbed radiation dose to the patient. Project I addresses problems associated with the quantitative imaging of single-photon emitters; Project II addresses similar problems associated with the quantitative imaging of positron emitters; Project III addresses methodological problems associated with the quantitative evaluation of the efficacy of diagnostic imaging procedures

  19. A development and integration of the concentration database for relative method, k0 method and absolute method in instrumental neutron activation analysis using Microsoft Access

    International Nuclear Information System (INIS)

    Hoh Siew Sin

    2012-01-01

    Instrumental Neutron Activation Analysis (INAA) is often used to determine and calculate the concentration of an element in a sample, especially by students of the Nuclear Science Program at the National University of Malaysia. The lack of a database service means users take a longer time to calculate the concentration of an element in a sample, and makes them more dependent on costly software developed by foreign researchers. To overcome this problem, a study has been carried out to build INAA database software. The objective of this study is to build database software that helps users of INAA with the Relative Method and the Absolute Method for calculating the element concentration in a sample, using Microsoft Excel 2010 and Microsoft Access 2010. The study also integrates k0 data, k0-Concent and k0-Westcott to execute and complete the system. After the integration, a study was conducted to test the effectiveness of the database software by comparing the concentrations from experiments with those in the database. Triple bare monitors Zr-Au and Cr-Mo-Au were used in Abs-INAA to determine the thermal-to-epithermal neutron flux ratio (f). Calculations involved in determining the concentration use the net peak area (Np), the measurement time (tm), the irradiation time (tirr), the k-factor (k), the thermal-to-epithermal neutron flux ratio (f), the epithermal neutron flux distribution parameter (α) and the detection efficiency (εp). For the Com-INAA database, the reference material IAEA-375 Soil was used to calculate the concentration of elements in the sample; CRMs and SRMs are also used in this database. After the INAA database integration, a verification process was carried out to examine the effectiveness of Abs-INAA by comparing sample concentrations between the database and experiment. The concentration values from the INAA database software showed high accuracy and precision.
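
    For reference, the relative method reduces to a ratio of decay-corrected specific peak areas when sample and standard are co-irradiated and counted under the same geometry. A simplified sketch with hypothetical numbers (decay and timing corrections are folded into two factors here; the full calculation also uses tm, tirr, f, α and εp as listed above):

      def relative_inaa_concentration(np_sample, m_sample, np_standard, m_standard,
                                      c_standard, decay_corr_sample=1.0,
                                      decay_corr_standard=1.0):
          # Relative-method INAA: concentration follows from the ratio of
          # decay-corrected specific peak areas of sample and standard.
          asp_sample = np_sample / (m_sample * decay_corr_sample)
          asp_standard = np_standard / (m_standard * decay_corr_standard)
          return c_standard * asp_sample / asp_standard

      # Hypothetical figures: 0.250 g sample, 0.100 g standard of 50 mg/kg.
      c = relative_inaa_concentration(12500, 0.250, 16000, 0.100, 50.0)
      print(f"concentration = {c:.1f} mg/kg")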

  20. Variability in clinical data is often more useful than the mean: illustration of concept and simple methods of assessment

    NARCIS (Netherlands)

    Zwinderman, A. H.; Cleophas, T. J.

    2005-01-01

    BACKGROUND: Clinical investigators, although they are generally familiar with testing differences between averages, have difficulty testing differences between variabilities. OBJECTIVE: To give examples of situations where variability is more relevant than averages and to describe simple methods for its assessment.

  1. Selecting minimum dataset soil variables using PLSR as a regressive multivariate method

    Science.gov (United States)

    Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.

    2017-04-01

    Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection to extract relevant information deriving from multivariate datasets. Principal component analysis (PCA) has been mainly used, however supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least square regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean centered and variance scaled data of predictors (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP
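
    A common way to rank predictors in PLSR is the variable importance in projection (VIP) score, where variables with VIP > 1 are typically retained in the minimum dataset. The sketch below uses scikit-learn rather than the SAS/STAT PLS procedure actually used in the study, and synthetic data in place of the soil dataset; only the VIP formula itself carries over:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def vip_scores(pls):
          # Variable importance in projection for a fitted single-response PLSRegression.
          W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_
          p, A = W.shape
          ssy = np.sum(T ** 2, axis=0) * Q.ravel() ** 2   # y-variance explained per component
          wnorm2 = (W / np.linalg.norm(W, axis=0)) ** 2
          return np.sqrt(p * (wnorm2 @ ssy) / ssy.sum())

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 10))                 # e.g. 10 soil variables on 40 plots
      y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.3, size=40)
      pls = PLSRegression(n_components=3).fit(X, y)
      print(np.round(vip_scores(pls), 2))           # variables 0 and 3 should dominate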

  2. Read margin analysis of crossbar arrays using the cell-variability-aware simulation method

    Science.gov (United States)

    Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon

    2018-02-01

    This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered when predicting the read margin characteristic of the crossbar array, because the read margin depends on the number of word lines and bit lines. However, excessively long CPU times are required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulation is also highly efficient in analyzing the characteristics of the crossbar memory array considering statistical variations in the cell characteristics.
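
    The flavor of such a variability-aware simulation can be conveyed with a much-simplified lumped sneak-path model (floating unselected lines, lognormal cell variability). This is an illustrative sketch, not the authors' MATLAB simulator, and the resistance and variability figures are assumptions:

      import numpy as np

      rng = np.random.default_rng(7)

      def read_voltage(r_sel, n, m, r_lrs=1e4, sigma=0.15, r_pull=1e4, v_read=1.0):
          # Lumped sneak path of an n x m crossbar with floating unselected lines:
          # three parallel groups of unselected cells in series. Cell resistances
          # carry lognormal variability (sigma in log space).
          draw = lambda cnt: rng.lognormal(np.log(r_lrs), sigma, cnt)
          r_sneak = (1.0 / (1.0 / draw(m - 1)).sum()
                     + 1.0 / (1.0 / draw((n - 1) * (m - 1))).sum()
                     + 1.0 / (1.0 / draw(n - 1)).sum())
          r_eq = 1.0 / (1.0 / r_sel + 1.0 / r_sneak)   # selected cell || sneak path
          return v_read * r_pull / (r_pull + r_eq)     # divider at the sense resistor

      # Monte Carlo read margin (selected cell at LRS vs HRS) versus array size.
      for n in (16, 64, 256):
          v1 = [read_voltage(rng.lognormal(np.log(1e4), 0.15), n, n) for _ in range(200)]
          v0 = [read_voltage(rng.lognormal(np.log(1e6), 0.15), n, n) for _ in range(200)]
          print(n, f"mean margin = {np.mean(v1) - np.mean(v0):.4f} V")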

  3. New Methods for Retrieval of Chlorophyll Red Fluorescence from Hyperspectral Satellite Instruments: Simulations and Application to GOME-2 and SCIAMACHY

    Science.gov (United States)

    Joiner, Joanna; Yoshida, Yasuko; Guanter, Luis; Middleton, Elizabeth M.

    2016-01-01

    Global satellite measurements of solar-induced fluorescence (SIF) from chlorophyll over land and ocean have proven useful for a number of different applications related to physiology, phenology, and productivity of plants and phytoplankton. Terrestrial chlorophyll fluorescence is emitted throughout the red and far-red spectrum, producing two broad peaks near 683 and 736 nm. From ocean surfaces, phytoplankton fluorescence emissions are entirely from the red region (683 nm peak). Studies using satellite-derived SIF over land have focused almost exclusively on measurements in the far red (wavelengths greater than 712 nm), since those are the most easily obtained with existing instrumentation. Here, we examine new ways to use existing hyperspectral satellite data sets to retrieve red SIF (wavelengths less than 712 nm) over both land and ocean. Red SIF is thought to provide complementary information to that from the far red for terrestrial vegetation. The satellite instruments that we use were designed to make atmospheric trace-gas measurements and are therefore not optimal for observing SIF; they have coarse spatial resolution and only moderate spectral resolution (0.5 nm). Nevertheless, these instruments, the Global Ozone Monitoring Experiment 2 (GOME-2) and the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY), offer a unique opportunity to compare red and far-red terrestrial SIF at regional spatial scales. Terrestrial SIF has been estimated with ground-, aircraft-, or satellite-based instruments by measuring the filling-in of atmospheric and/or solar absorption spectral features by SIF. Our approach makes use of the oxygen (O2) gamma band that is not affected by SIF. The SIF-free O2 gamma band helps to estimate absorption within the spectrally variable O2 B band, which is filled in by red SIF. SIF also fills in the spectrally stable solar Fraunhofer lines (SFLs) at wavelengths both inside and just outside the O2 B band, which further helps

  4. Biological variables for the site survey of surface ecosystems - existing data and survey methods

    International Nuclear Information System (INIS)

    Kylaekorpi, Lasse; Berggren, Jens; Larsson, Mats; Liberg, Maria; Rydgren, Bernt

    2000-06-01

    In the process of selecting a safe and environmentally acceptable location for the deep level repository of nuclear waste, site surveys will be carried out. These site surveys will also include studies of the biota at the site, in order to assure that the chosen site will not conflict with important ecological interests, and to establish a thorough baseline for future impact assessments and monitoring programmes. As a preparation to the site survey programme, a review of the variables that need to be surveyed is conducted. This report contains the review for some of those variables. For each variable, existing data sources and their characteristics are listed. For those variables for which existing data sources are inadequate, suggestions are made for appropriate methods that will enable the establishment of an acceptable baseline. In this report the following variables are reviewed: Fishery, Landscape, Vegetation types, Key biotopes, Species (flora and fauna), Red-listed species (flora and fauna), Biomass (flora and fauna), Water level, water retention time (incl. water body and flow), Nutrients/toxins, Oxygen concentration, Layering, stratification, Light conditions/transparency, Temperature, Sediment transport, (Marine environments are excluded from this review). For a major part of the variables, the existing data coverage is most likely insufficient. Both the temporal and/or the geographical resolution is often limited, which means that complementary surveys must be performed during (or before) the site surveys. It is, however, in general difficult to make exact judgements on the extent of existing data, and also to give suggestions for relevant methods to use in the site surveys. This can be finally decided only when the locations for the sites are decided upon. The relevance of the different variables also depends on the environmental characteristics of the sites. Therefore, we suggest that when the survey sites are selected, an additional review is

  5. Biological variables for the site survey of surface ecosystems - existing data and survey methods

    Energy Technology Data Exchange (ETDEWEB)

    Kylaekorpi, Lasse; Berggren, Jens; Larsson, Mats; Liberg, Maria; Rydgren, Bernt [SwedPower AB, Stockholm (Sweden)

    2000-06-01

    In the process of selecting a safe and environmentally acceptable location for the deep level repository of nuclear waste, site surveys will be carried out. These site surveys will also include studies of the biota at the site, in order to assure that the chosen site will not conflict with important ecological interests, and to establish a thorough baseline for future impact assessments and monitoring programmes. As a preparation to the site survey programme, a review of the variables that need to be surveyed is conducted. This report contains the review for some of those variables. For each variable, existing data sources and their characteristics are listed. For those variables for which existing data sources are inadequate, suggestions are made for appropriate methods that will enable the establishment of an acceptable baseline. In this report the following variables are reviewed: Fishery, Landscape, Vegetation types, Key biotopes, Species (flora and fauna), Red-listed species (flora and fauna), Biomass (flora and fauna), Water level, water retention time (incl. water body and flow), Nutrients/toxins, Oxygen concentration, Layering, stratification, Light conditions/transparency, Temperature, Sediment transport, (Marine environments are excluded from this review). For a major part of the variables, the existing data coverage is most likely insufficient. Both the temporal and/or the geographical resolution is often limited, which means that complementary surveys must be performed during (or before) the site surveys. It is, however, in general difficult to make exact judgements on the extent of existing data, and also to give suggestions for relevant methods to use in the site surveys. This can be finally decided only when the locations for the sites are decided upon. The relevance of the different variables also depends on the environmental characteristics of the sites. Therefore, we suggest that when the survey sites are selected, an additional review is

  6. Stochastic methods for uncertainty treatment of functional variables in computer codes: application to safety studies

    International Nuclear Information System (INIS)

    Nanty, Simon

    2015-01-01

    This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications share several features. The first is that the computer code inputs are functional and scalar variables, the functional ones being dependent. The second is that the probability distribution of the functional variables is known only through a sample of their realizations. The third, relevant to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology enables both the dependency between variables and their link to another variable, called a covariate, to be modeled; the covariate could be, for instance, the output of the considered code. Then, we have developed an adaptation of a visualization tool for functional data, which enables the uncertainties and features of dependent functional variables to be visualized simultaneously. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the metamodel. Finally, a new approximation approach for expensive codes with functional outputs has been proposed.

  7. Propulsion and launching analysis of variable-mass rockets by analytical methods

    Directory of Open Access Journals (Sweden)

    D.D. Ganji

    2013-09-01

    In this study, applications of some analytical methods to the nonlinear equation of the launching of a rocket with variable mass are investigated. The differential transformation method (DTM), homotopy perturbation method (HPM) and least squares method (LSM) were applied, and their results are compared with a numerical solution. Excellent agreement between the analytical methods and the numerical one is observed, which reveals that these analytical methods are effective and convenient. A parametric study is also performed, which includes the effects of exhaust velocity (Ce), fuel burn rate (BR) and diameter of the cylindrical rocket (d) on the motion of a sample rocket, and contours showing the sensitivity of these parameters are plotted. The main results indicate that the rocket velocity and altitude increase with increasing Ce and BR, and decrease with increasing rocket diameter and drag coefficient.
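
    The governing equation treated analytically there is the variable-mass rocket equation m(t) dv/dt = Ce·BR − m(t)g − ½ρCdAv². As a point of comparison with such analytical solutions, a simple forward-Euler numerical integration can be sketched as follows (all parameter values below are made up for illustration):

      import numpy as np

      def simulate_rocket(m0=50.0, m_dry=20.0, ce=2000.0, br=2.0, cd=0.4,
                          d=0.3, rho=1.2, g=9.81, dt=0.01):
          # Forward-Euler integration of m dv/dt = Ce*BR - m g - 0.5 rho Cd A v^2
          # during the burn, then coasting; stops at apogee (v < 0).
          area = np.pi * d ** 2 / 4.0
          m, v, h, t = m0, 0.0, 0.0, 0.0
          while v >= 0.0:
              thrust = ce * br if m > m_dry else 0.0
              drag = 0.5 * rho * cd * area * v * abs(v)
              a = (thrust - drag) / m - g
              v += a * dt
              h += v * dt
              m = max(m - br * dt, m_dry) if thrust else m
              t += dt
          return t, h

      t_apogee, h_apogee = simulate_rocket()
      print(f"apogee {h_apogee:.0f} m at t = {t_apogee:.1f} s")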

  8. A Review of Spectral Methods for Variable Amplitude Fatigue Prediction and New Results

    Science.gov (United States)

    Larsen, Curtis E.; Irvine, Tom

    2013-01-01

    A comprehensive review of the available methods for estimating fatigue damage from variable amplitude loading is presented. The dependence of fatigue damage accumulation on power spectral density (psd) is investigated for random processes relevant to real structures such as in offshore or aerospace applications. Beginning with the Rayleigh (or narrow band) approximation, attempts at improved approximations or corrections to the Rayleigh approximation are examined by comparison to rainflow analysis of time histories simulated from psd functions representative of simple theoretical and real world applications. Spectral methods investigated include corrections by Wirsching and Light, Ortiz and Chen, the Dirlik formula, and the Single-Moment method, among other more recent proposed methods. Good agreement is obtained between the spectral methods and the time-domain rainflow identification for most cases, with some limitations. Guidelines are given for using the several spectral methods to increase confidence in the damage estimate.
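
    The starting point for all of these spectral methods is the Rayleigh (narrow-band) damage estimate computed from spectral moments of the stress PSD; the wide-band corrections reviewed above (Wirsching-Light, Dirlik, single-moment) modify this value. A sketch with an assumed flat-top PSD and illustrative S-N parameters:

      import numpy as np
      from math import gamma, sqrt

      def rayleigh_damage(freq, psd, T, k, C):
          # Narrow-band (Rayleigh) fatigue damage from a stress PSD in Hz, for an
          # S-N curve N * S^k = C. Moments m_n = integral of f^n * G(f) df.
          m0 = np.trapz(psd, freq)
          m2 = np.trapz(psd * freq ** 2, freq)
          nu0 = sqrt(m2 / m0)                    # zero-upcrossing rate, Hz
          return nu0 * T / C * (sqrt(2.0 * m0)) ** k * gamma(1.0 + k / 2.0)

      # Example: flat-top PSD between 10 and 20 Hz, 1 hour exposure, k = 3.
      f = np.linspace(0.1, 50.0, 2000)
      G = np.where((f > 10.0) & (f < 20.0), 100.0, 0.0)   # (MPa^2)/Hz
      print(f"damage fraction: {rayleigh_damage(f, G, 3600.0, 3.0, 1.0e12):.3e}")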

  9. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    Energy Technology Data Exchange (ETDEWEB)

    Ekechukwu, A.

    2008-12-17

    This document proposes to provide a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers, and books reviewed is given in Appendix 1. Available validation documents and guides are listed in the appendix; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of validation at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all documents were published in English.

  10. Online calibration method for condition monitoring of nuclear reactor instrumentations based on electrical signature analysis

    International Nuclear Information System (INIS)

    Syaiful Bakhri

    2013-01-01

    Electrical signature analysis is currently becoming an alternative for condition monitoring in nuclear power plants, not only for stationary components such as sensors, measurement and instrumentation channels, and other components, but also for dynamic components such as electric motors, pumps, generators or actuators. To guarantee accuracy, calibration of the monitoring system is necessary; in practice this is performed offline, under limited schedules and tight procedures. This research aims to introduce an online calibration technique for electrical-signature condition monitoring so that accuracy can be maintained continuously, which in turn increases reactor safety as a whole. The research proceeded step by step, in detail, from the conventional technique, to online calibration using baseline information, to online calibration using differential gain adjustment. Online calibration based on differential gain adjustment provides better results than the other techniques, even under extreme gain insertion and external disturbances such as supply voltage variations. (author)

  11. Microcontroller based instrumentation for the fuel pin preparation facility by sol-gel method

    International Nuclear Information System (INIS)

    Suhasini, B.; Prabhakar Rao, J.; Srinivas, K.C.

    2009-01-01

    The fuel pin preparation facility based on the Sol-Gel route has been set up by the Chemistry Group at the Indira Gandhi Centre for Atomic Research, Kalpakkam. Sol-Gel, a solution-gelation process, involves the conversion of solutions of uranium-plutonium nitrates (at 0 °C) into gel microspheres. To measure the exact quantities of the above solutions and to monitor their temperatures, a variety of sensors are used at various stages in the plant. To monitor and acquire the process parameters used in production, and to automate the operation of the plant, a PC (master)-microcontroller (slave) based instrumentation system has been developed, along with acquisition software and a GUI developed in Visual Basic. (author)

  12. New instruments and methods for high precision thermocouple and platinum resistance thermometry

    International Nuclear Information System (INIS)

    Corradi, F.

    1977-01-01

    The paper describes the development of measuring instruments for the following purposes: 1) Measurement of the superheated steam temperature, close to 550 °C, in a tube at approximately 200 kg/cm², with a total accuracy of ±0.1 °C. 2) Measurement of the superheated water temperature, close to 350 °C, again with a total accuracy of ±0.1 °C. 3) Measurement of temperature differences between the inlet and the outlet of the water in the supply channel; the mean temperature was close to 15 °C, and the differential span was required to be 0.5 °C with a total accuracy of ±0.005 °C. (orig.)

  13. Neutron and synchrotron radiation for condensed matter studies. Volume 1: theory, instruments and methods

    International Nuclear Information System (INIS)

    Baruchel, J.; Hodeau, J.L.; Lehmann, M.S.; Regnard, J.R.; Schlenker, C.

    1993-01-01

    This book provides the basic information required by a research scientist wishing to undertake studies using neutrons or synchrotron radiation at a Large Facility. These lecture notes result from 'HERCULES', a course that has been held in Grenoble since 1991 to train young scientists in these fields. They cover the production of neutrons and synchrotron radiation and describe all aspects of instrumentation. In addition, this work outlines the basics of the various fields of research pursued at these Large Facilities. It consists of a series of chapters written by experts in the particular fields. While following a progression and constituting a lecture course on neutron and x-ray scattering, these chapters can also be read independently. This first volume will be followed by two further volumes concerned with the applications to solid state physics and chemistry, and to biology and soft condensed matter properties

  14. Comparison of instrumental and sensory methods in fermented milk beverage texture quality analysis

    Directory of Open Access Journals (Sweden)

    Jovica Hardi

    2001-04-01

    The texture of the curd of fermented dairy products is one of the primary factors in their overall quality. The flow properties of fermented dairy products are characteristic of thixotropic (pseudoplastic) liquids. At the same time, these products are viscoelastic systems, i.e., they are capable of texture renewal after applied deformation. A complex analysis of several properties is therefore essential to describe the system. The aim of the present work was to describe the texture of fermented milk beverages completely. Three basic parameters were taken into consideration: structure, hardness (consistency) and stability of the curd. A description model for these three parameters was applied on the basis of the experimental results obtained. Results obtained by the present model were compared with the results of sensory analysis. The influence of milk fat content and skimmed milk powder addition on acidophilus milk texture quality was also examined using this model. It was shown that, by using this model on the basis of instrumental and sensory analyses, a complete and objective determination of the texture quality of fermented milk beverages can be obtained. A high degree of correlation between instrumental and sensory results (r = 0.8975) was obtained. The results of this work indicated that both factors (milk fat content and skimmed milk powder addition) had an influence on texture quality. Samples with higher milk fat content had better texture properties in comparison with low-fat samples. The texture of all examined samples was improved by increasing the skimmed milk powder content. The optimal amount of skimmed milk powder addition with regard to the milk fat content of the milk is determined using the proposed model.

  15. Evaluation of Rock Powdering Methods to Obtain Fine-grained Samples for CHEMIN, a Combined XRD/XRF Instrument

    Science.gov (United States)

    Chipera, S. J.; Vaniman, D. T.; Bish, D. L.; Sarrazin, P.; Feldman, S.; Blake, D. F.; Bearman, G.; Bar-Cohen, Y.

    2004-01-01

    A miniature XRD/XRF (X-ray diffraction / X-ray fluorescence) instrument, CHEMIN, is currently being developed for definitive mineralogic analysis of soils and rocks on Mars. One of the technical issues that must be addressed to enable remote XRD analysis is how best to obtain a representative sample powder for analysis. For powder XRD analyses, it is beneficial to have a fine-grained sample to reduce preferred orientation effects and to provide a statistically significant number of crystallites to the X-ray beam. Although a two-dimensional detector as used in the CHEMIN instrument will produce good results even with poorly prepared powder, the quality of the data will improve and the time required for data collection will be reduced if the sample is fine-grained and randomly oriented. A variety of methods have been proposed for XRD sample preparation. Chipera et al. presented grain size distributions and XRD results from powders generated with an Ultrasonic/Sonic Driller/Corer (USDC) currently being developed at JPL. The USDC was shown to be an effective instrument for sampling rock to produce powder suitable for XRD. In this paper, we compare powder prepared using the USDC with powder obtained with a miniaturized rock crusher developed at JPL and with powder obtained with a rotary tungsten carbide bit to powders obtained from a laboratory bench-scale Retsch mill (provides benchmark mineralogical data). These comparisons will allow assessment of the suitability of these methods for analysis by an XRD/XRF instrument such as CHEMIN.

  16. Preference-based disease-specific health-related quality of life instrument for glaucoma: a mixed methods study protocol

    Science.gov (United States)

    Muratov, Sergei; Podbielski, Dominik W; Jack, Susan M; Ahmed, Iqbal Ike K; Mitchell, Levine A H; Baltaziak, Monika; Xie, Feng

    2016-01-01

    Introduction: A primary objective of healthcare services is to improve patients' health and health-related quality of life (HRQoL). Glaucoma, which affects a substantial proportion of the world population, has a significant detrimental impact on HRQoL. Although there are a number of glaucoma-specific questionnaires to measure HRQoL, none is preference-based, which prevents them from being used in health economic evaluation. The proposed study aims to develop a preference-based instrument that is capable of capturing the important effects of glaucoma and its treatments on HRQoL and that is scored based on patients' preferences. Methods: A sequential, exploratory mixed methods design will be used to guide the development and evaluation of the HRQoL instrument. The study consists of several stages to be implemented sequentially: item identification, item selection, validation and valuation. The instrument items will be identified and selected through a literature review and the conduct of a qualitative study. Validation will be conducted to establish the psychometric properties of the instrument, followed by a valuation exercise to derive utility scores for the health states described. Ethics and dissemination: This study has been approved by the Trillium Health Partners Research Ethics Board (ID number 753). All personal information will be de-identified, with the identification code kept in a secured location together with the rest of the study data. Only qualified and study-related personnel will be allowed to access the data. The results of the study will be distributed widely through peer-reviewed journals, conferences and internal meetings.

  17. The Leech method for diagnosing constipation: intra- and interobserver variability and accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Lorijn, Fleur de; Voskuijl, Wieger P.; Taminiau, Jan A.; Benninga, Marc A. [Emma Children' s Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Rijn, Rick R. van; Henneman, Onno D.F. [Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Heijmans, Jarom [Emma Children' s Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Reitsma, Johannes B. [Academic Medical Centre, Department of Clinical Epidemiology and Biostatistics, Amsterdam (Netherlands)

    2006-01-01

    The data concerning the value of a plain abdominal radiograph in childhood constipation are inconsistent. Recently, positive results have been reported for a new radiographic scoring system, "the Leech method", for assessing faecal loading. To assess intra- and interobserver variability and determine the diagnostic accuracy of the Leech method in identifying children with functional constipation (FC). A total of 89 children (median age 9.8 years) with functional gastrointestinal disorders were included in the study. Based on clinical parameters, 52 fulfilled the criteria for FC, six fulfilled the criteria for functional abdominal pain (FAP), and 31 for functional non-retentive faecal incontinence (FNRFI); the latter two groups provided the controls. To assess intra- and interobserver variability of the Leech method, three scorers scored the same abdominal radiographs twice. A Leech score of 9 or more was considered suggestive of constipation. ROC analysis was used to determine the diagnostic accuracy of the Leech method in separating patients with FC from control patients. Significant intraobserver variability was found for two scorers (P=0.005 and P<0.0001), whereas there was no systematic difference between the two scores of the third scorer (P=0.89). The scores between scorers differed systematically and displayed large variability. The area under the ROC curve was 0.68 (95% CI 0.58-0.80), indicating poor diagnostic accuracy. The Leech scoring method for assessing faecal loading on a plain abdominal radiograph is of limited value in the diagnosis of FC in children. (orig.)

  18. Use of a variable tracer infusion method to determine glucose turnover in humans

    International Nuclear Information System (INIS)

    Molina, J.M.; Baron, A.D.; Edelman, S.V.; Brechtel, G.; Wallace, P.; Olefsky, J.M.

    1990-01-01

    The single-compartment pool fraction model, when used with the hyperinsulinemic glucose clamp technique to measure rates of glucose turnover, sometimes underestimates true rates of glucose appearance (Ra), resulting in negative values for hepatic glucose output (HGO). We focused our attention on isotope discrimination and model error as possible explanations for this underestimation. We found no difference in [3-3H]glucose specific activity in samples obtained simultaneously from the femoral artery and vein (2,400 ± 455 vs. 2,454 ± 522 dpm/mg) in 6 men during a hyperinsulinemic euglycemic clamp study in which insulin was infused at 40 mU·m⁻²·min⁻¹ for 3 h; therefore, isotope discrimination did not occur. We compared the ability of a constant (0.6 µCi/min) vs. a variable tracer infusion method (tracer added to the glucose infusate) to measure non-steady-state Ra during hyperinsulinemic clamp studies. Plasma specific activity fell during the constant tracer infusion studies but did not change from baseline during the variable tracer infusion studies. By maintaining a constant plasma specific activity, the variable tracer infusion method eliminates uncertainty about changes in glucose pool size. This overcame modeling error and more accurately measured non-steady-state Ra (P < 0.001 by analysis of variance vs. the constant infusion method). In conclusion, the underestimation of isotopically determined Ra during hyperinsulinemic clamp studies is largely due to modeling error that can be overcome by use of the variable tracer infusion method. This method allows more accurate determination of Ra and HGO under non-steady-state conditions.
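
    The idea of the variable tracer infusion is simple arithmetic: the exogenous glucose infusate is spiked with tracer at the target plasma specific activity, so ramping the glucose infusion no longer dilutes plasma specific activity, and Ra follows from total tracer delivery divided by specific activity. A sketch with illustrative numbers loosely based on the abstract (not the paper's actual protocol calculations):

      def infusate_tracer_rate(gir_mg_min, sa_dpm_per_mg):
          # Tracer to add to the glucose infusate (dpm/min) so that the infusate's
          # specific activity matches the target plasma specific activity.
          return gir_mg_min * sa_dpm_per_mg

      basal_tracer = 0.6 * 2.22e6          # 0.6 uCi/min constant infusion, in dpm/min
      sa_target = 2400.0                   # assumed plasma specific activity, dpm/mg
      for gir in (0.0, 2.0, 4.0, 8.0):     # glucose infusion rate ramped up, mg/min
          extra = infusate_tracer_rate(gir, sa_target)
          ra = (basal_tracer + extra) / sa_target   # with SA constant, the Steele
          hgo = ra - gir                            # non-steady-state term vanishes
          print(f"GIR {gir:4.1f} -> spike {extra:7.0f} dpm/min, "
                f"Ra {ra:6.1f}, HGO {hgo:6.1f} mg/min")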

  19. The relationship between glass ceiling and power distance as a cultural variable by a new method

    Directory of Open Access Journals (Sweden)

    Naide Jahangirov

    2015-12-01

    The glass ceiling symbolizes a variety of barriers and obstacles that arise from gender inequality in business life. At the same time, culture influences gender dynamics. The purpose of this research was to examine the relationship between the glass ceiling and power distance as a cultural variable within organizations. Gender is taken as a moderator variable in the relationship between the concepts. In addition to conventional correlation analysis, we employed a new method to investigate this relationship in detail. The survey data were obtained from 109 people working at a research center operating as part of a non-profit private university in Ankara, Turkey. The analysis revealed that female staff perceived the glass ceiling and power distance more intensely than male staff. In addition, a medium-level relationship was found between power distance and the glass ceiling perception among female staff.

  20. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    Energy Technology Data Exchange (ETDEWEB)

    Ekechukwu, A

    2009-05-27

    Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.

  1. Improved flux calculations for viscous incompressible flow by the variable penalty method

    International Nuclear Information System (INIS)

    Kheshgi, H.; Luskin, M.

    1985-01-01

    The Navier-Stokes system for viscous, incompressible flow is considered, with the continuity equation replaced by a perturbed continuity equation. This approximation allows the pressure variable to be eliminated, yielding a system of equations for the approximate velocity alone. The penalty approximation is often applied to numerical discretizations since it reduces the size and band-width of the system of equations. Attention is given to error estimates, and to two numerical experiments which illustrate them. It is found that the variable penalty method provides an accurate solution over a much wider range of epsilon than the classical penalty method. 8 references
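
    For orientation, the standard construction behind the penalty idea (a generic sketch, not necessarily the exact variant analyzed in the paper) perturbs the incompressibility constraint with a small parameter epsilon so the pressure can be eliminated:

    ```latex
    % Perturbed continuity equation and pressure elimination:
    \nabla \cdot \mathbf{u}_\varepsilon + \varepsilon\, p_\varepsilon = 0
    \quad \Longrightarrow \quad
    p_\varepsilon = -\frac{1}{\varepsilon}\, \nabla \cdot \mathbf{u}_\varepsilon
    ```

    Substituting the second relation into the momentum equation yields a system in the velocity alone; the variable penalty method lets epsilon vary instead of holding it at a single fixed value.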

  2. Stress Intensity Factor for Interface Cracks in Bimaterials Using Complex Variable Meshless Manifold Method

    Directory of Open Access Journals (Sweden)

    Hongfen Gao

    2014-01-01

    Full Text Available This paper describes the application of the complex variable meshless manifold method (CVMMM) to stress intensity factor analyses of structures containing interface cracks between dissimilar materials. A discontinuous function and the near-tip asymptotic displacement functions are added to the CVMMM approximation using the framework of complex variable moving least-squares (CVMLS) approximation. This enables the domain to be modeled by CVMMM without explicitly meshing the crack surfaces. The enriched crack-tip functions are chosen as those that span the asymptotic displacement fields for an interfacial crack. The complex stress intensity factors for bimaterial interfacial cracks were numerically evaluated using the method. Good agreement between the numerical results and the reference solutions for benchmark interfacial crack problems is achieved.

  3. A new hydraulic regulation method on district heating system with distributed variable-speed pumps

    International Nuclear Information System (INIS)

    Wang, Hai; Wang, Haiying; Zhu, Tong

    2017-01-01

    Highlights: • A hydraulic regulation method was presented for district heating with distributed variable speed pumps. • Information and automation technologies were utilized to support the proposed method. • A new hydraulic model was developed for distributed variable speed pumps. • A new optimization model was developed based on a genetic algorithm. • Two scenarios of a multi-source looped system were used to validate the method. - Abstract: Compared with the hydraulic configuration based on a conventional central circulating pump, a district heating system with a distributed variable-speed-pumps configuration can often save 30–50% of the power consumption of circulating pumps with frequency inverters. However, hydraulic regulation of the distributed variable-speed-pumps configuration is more complicated, because all distributed pumps need to be adjusted to their designated flow rates. Especially in a multi-source looped heating network, where the distributed pumps have strongly coupled and severely non-linear hydraulic interactions with each other, it is rather difficult to maintain hydraulic balance during regulation. In this paper, with the help of advanced automation and information technologies, a new hydraulic regulation method is proposed to achieve on-site hydraulic balance for district heating systems with the distributed variable-speed-pumps configuration. The proposed method comprises a new hydraulic model, developed to suit the distributed variable-speed-pumps configuration, and a calibration model based on a genetic algorithm. By carrying out the proposed method step by step, the flow rates of all distributed pumps can be progressively adjusted to their designated values. A hypothetical district heating system with 2 heat sources and 10 substations was taken as a case study to illustrate the feasibility of the proposed method. Two scenarios were investigated. In Scenario I, the
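
    The paper's calibration model is not reproduced here, but the flavour of a genetic-algorithm search for pump speeds that reproduce the designated flow rates can be sketched as follows. Everything in the sketch is an assumption for illustration; in particular, flows_for(speeds) stands in for the network hydraulic model, which is not shown.

    ```python
    import random

    def ga_calibrate(flows_for, target, n_pumps, pop=40, gens=200,
                     lo=0.5, hi=1.5, mut=0.1):
        """Toy genetic algorithm: find relative pump speeds whose simulated
        flows match the designated flow rates. flows_for(speeds) is assumed
        to wrap the hydraulic model of the looped network."""
        def fitness(ind):
            return -sum((f - t) ** 2 for f, t in zip(flows_for(ind), target))

        population = [[random.uniform(lo, hi) for _ in range(n_pumps)]
                      for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=fitness, reverse=True)
            elite = population[: pop // 2]              # keep the best half
            children = []
            while len(children) < pop - len(elite):
                a, b = random.sample(elite, 2)
                cut = random.randrange(1, n_pumps) if n_pumps > 1 else 1
                child = a[:cut] + b[cut:]               # one-point crossover
                if random.random() < mut:               # Gaussian mutation
                    i = random.randrange(n_pumps)
                    child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.05)))
                children.append(child)
            population = elite + children
        return max(population, key=fitness)
    ```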

  4. Real-time Continuous Assessment Method for Mental and Physiological Condition using Heart Rate Variability

    Science.gov (United States)

    Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro

    It is necessary to monitor daily health condition to prevent stress syndrome. In this study, we propose a method for assessing mental and physiological condition, such as work stress or relaxation, using heart rate variability in real time and continuously. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated to assess mental and physiological condition. In this method, 20 heart beats are used to calculate these indexes, and they are recomputed at every beat interval. Three conditions (sitting at rest, performing mental arithmetic and watching a relaxation movie) were assessed using the proposed algorithm. The assessment accuracies were 71.9% and 55.8% for performing mental arithmetic and watching the relaxation movie, respectively. Because the mental and physiological condition is assessed using only the 20 most recent heart beats, this can be considered a real-time assessment method.
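
    A minimal sketch of the two indexes described above, computed over a sliding window of the 20 most recent beats. The simple local-extremum test used to count extreme points is our assumption; the paper's exact definition may differ.

    ```python
    import numpy as np

    def hr_and_nep_ratio(rr_ms, window=20):
        """Instantaneous heart rate and NEP-to-beats ratio, beat by beat.

        rr_ms : sequence of R-R intervals in milliseconds.
        Returns the HR series and one NEP/window ratio per beat once
        `window` beats are available.
        """
        rr = np.asarray(rr_ms, dtype=float)
        hr = 60_000.0 / rr  # instantaneous heart rate (bpm)
        ratios = []
        for i in range(window, hr.size + 1):
            w = hr[i - window:i]
            inner = w[1:-1]
            # extreme points: local maxima and minima of HR in the window
            nep = int(np.sum((inner > w[:-2]) & (inner > w[2:])) +
                      np.sum((inner < w[:-2]) & (inner < w[2:])))
            ratios.append(nep / window)
        return hr, np.array(ratios)
    ```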

  5. Heuristic methods using grasp, path relinking and variable neighborhood search for the clustered traveling salesman problem

    Directory of Open Access Journals (Sweden)

    Mário Mestria

    2013-08-01

    Full Text Available The Clustered Traveling Salesman Problem (CTSP) is a generalization of the Traveling Salesman Problem (TSP) in which the set of vertices is partitioned into disjoint clusters and the objective is to find a minimum-cost Hamiltonian cycle such that the vertices of each cluster are visited contiguously. The CTSP is NP-hard and, in this context, we propose heuristic methods for the CTSP using GRASP, Path Relinking and Variable Neighborhood Descent (VND). The heuristic methods were tested using Euclidean instances with up to 2000 vertices and clusters containing between 4 and 150 vertices. Computational tests were performed to compare the performance of the heuristic methods with an exact algorithm using the parallel CPLEX software. The computational results showed that the hybrid heuristic method using VND outperforms the other heuristic methods.
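
    The VND component at the heart of such hybrids can be sketched generically: cycle through the neighbourhood structures, restarting from the first whenever one of them improves the incumbent. The neighbourhood operators are left abstract here, since the paper's specific moves are not reproduced.

    ```python
    def vnd(tour, cost, neighborhoods):
        """Variable Neighborhood Descent skeleton.

        tour          initial solution (e.g. a list of vertices)
        cost          function evaluating a solution
        neighborhoods list of functions, each mapping a solution to its
                      best neighbour in one neighbourhood structure
        """
        k = 0
        best, best_cost = tour, cost(tour)
        while k < len(neighborhoods):
            cand = neighborhoods[k](best)
            cand_cost = cost(cand)
            if cand_cost < best_cost:   # improvement: restart at the first
                best, best_cost, k = cand, cand_cost, 0
            else:                       # no improvement: try the next one
                k += 1
        return best, best_cost
    ```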

  6. Locating disease genes using Bayesian variable selection with the Haseman-Elston method

    Directory of Open Access Journals (Sweden)

    He Qimei

    2003-12-01

    Full Text Available Abstract Background We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures, which incorporate the relationship among the predictors. This allows SSVS to search in the model space more efficiently and avoid the less likely models. Results In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it does a smart search over the entire model space.
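
    Given the binary inclusion indicators drawn by an SSVS sampler (assumed to exist upstream; it is not implemented here), the robust ranking step described above is a short computation:

    ```python
    import numpy as np

    def rank_markers(gamma_samples, marker_names):
        """Rank markers by marginal posterior inclusion probability.

        gamma_samples : (n_iterations, n_markers) array of 0/1 indicators
                        sampled by SSVS.
        The marginal inclusion probability of a marker is the mean of its
        indicator column, which is more robust to the prior setting than
        the posterior probability of any single model.
        """
        incl_prob = np.asarray(gamma_samples, dtype=float).mean(axis=0)
        order = np.argsort(incl_prob)[::-1]
        return [(marker_names[i], float(incl_prob[i])) for i in order]
    ```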

  7. Method of nuclear reactor control using a variable temperature load dependent set point

    International Nuclear Information System (INIS)

    Kelly, J.J.; Rambo, G.E.

    1982-01-01

    A method and apparatus for controlling a nuclear reactor in response to a variable average reactor coolant temperature set point are disclosed. The set point is dependent upon the percent of full-power load demand. A manually actuated ''droop mode'' of control is provided whereby the reactor coolant temperature is allowed to drop a predetermined amount below the set point temperature, whereupon control is switched from the reactor control rods exclusively to feedwater flow.

  8. Does social trust increase willingness to pay taxes to improve public healthcare? Cross-sectional cross-country instrumental variable analysis.

    Science.gov (United States)

    Habibov, Nazim; Cheung, Alex; Auchynnikava, Alena

    2017-09-01

    The purpose of this paper is to investigate the effect of social trust on the willingness to pay more taxes to improve public healthcare in post-communist countries. The well-documented association between higher levels of social trust and better health has traditionally been assumed to reflect the notion that social trust is positively associated with support for the public healthcare system through its encouragement of cooperative behaviour, social cohesion, social solidarity, and collective action. Hence, in this paper, we have explicitly tested the notion that social trust contributes to an increase in willingness to financially support public healthcare. We use micro data from the 2010 Life-in-Transition survey (N = 29,526). Classic binomial probit and instrumental-variables ivprobit regressions are estimated to model the relationship between social trust and paying more taxes to improve public healthcare. We found that an increase in social trust is associated with a greater willingness to pay more taxes to improve public healthcare. From the perspective of policy-making, healthcare administrators, policy-makers, and international donors should be aware that social trust is an important factor in determining the willingness of the population to provide much-needed financial resources to support public healthcare. From a theoretical perspective, we found that estimating the effect of trust on support for healthcare without taking confounding and measurement-error problems into consideration will likely lead to an underestimation of the true effect of trust. Copyright © 2017 Elsevier Ltd. All rights reserved.
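
    As a hedged sketch of the two estimation stages, the snippet below fits a naive probit and then an instrumental-variables stage with the `linearmodels` package. The file name, column names, and the instrument are placeholders, and the IV stage shown is a linear two-stage least-squares approximation rather than the paper's ivprobit specification.

    ```python
    import pandas as pd
    import statsmodels.api as sm
    from linearmodels.iv import IV2SLS

    df = pd.read_csv("life_in_transition.csv")  # hypothetical extract

    # Naive probit: willingness to pay more taxes on trust plus controls
    probit = sm.Probit(
        df["pay_more_taxes"],
        sm.add_constant(df[["social_trust", "age", "income"]]),
    ).fit()

    # IV stage: treat social trust as endogenous and instrument it
    # (placeholder instrument; 2SLS as a linear approximation of ivprobit)
    iv = IV2SLS(
        dependent=df["pay_more_taxes"],
        exog=sm.add_constant(df[["age", "income"]]),
        endog=df["social_trust"],
        instruments=df["community_trust_average"],
    ).fit()
    print(iv.summary)
    ```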

  9. Impact on mortality of prompt admission to critical care for deteriorating ward patients: an instrumental variable analysis using critical care bed strain.

    Science.gov (United States)

    Harris, Steve; Singer, Mervyn; Sanderson, Colin; Grieve, Richard; Harrison, David; Rowan, Kathryn

    2018-05-07

    To estimate the effect of prompt admission to critical care on mortality for deteriorating ward patients. We performed a prospective cohort study of consecutive ward patients assessed for critical care. Prompt admissions (within 4 h of assessment) were compared to a 'watchful waiting' cohort. We used critical care strain (bed occupancy) as a natural randomisation event that would predict prompt transfer to critical care. Strain was classified as low, medium or high (2+, 1 or 0 empty beds). This instrumental variable (IV) analysis was repeated for the subgroup of referrals with a recommendation for critical care once assessed. Risk-adjusted 90-day survival models were also constructed. A total of 12,380 patients from 48 hospitals were available for analysis. There were 2411 (19%) prompt admissions (median delay 1 h, IQR 1-2) and 9969 (81%) controls; 1990 (20%) controls were admitted later (median delay 11 h, IQR 6-26). Prompt admissions were less frequent (p care. In the risk-adjusted survival model, 90-day mortality was similar. After allowing for unobserved prognostic differences between the groups, we find that prompt admission to critical care leads to lower 90-day mortality for patients assessed and recommended to critical care.

  10. On use of ZPR research reactors and associated instrumentation and measurement methods for reactor physics studies

    Energy Technology Data Exchange (ETDEWEB)

    Chauvin, J.P. [CEA,DEN, DER, SPEX, Experimental Physics Service, Cadarache, F-13108 St-Paul-Lez-Durance (France); Blaise, P. [CEA, DEN, DER, SPEX Experimental Programs Laboratory, Cadarache, F-13108 St-Paul-Lez-Durance (France); Lyoussi, A. [CEA, DEN, DER, Instrumentation Sensors and Dosimetry Laboratory, Cadarache, F-13108 St-Paul-Lez-Durance (France)

    2015-07-01

    The French Alternative Energies and Atomic Energy Commission (CEA) is strongly involved in research and development programs concerning the use of nuclear energy as a clean and reliable source of energy, and is consequently working on present and future generations of reactors on various topics such as ageing plant management, optimization of the plutonium stockpile, waste management and the exploration of innovative systems. Core physics studies are an essential part of this comprehensive R and D effort. In particular, the zero power reactors (ZPR) of CEA: EOLE, MINERVE and MASURCA play an important role in the validation of neutron (as well as photon) physics calculation tools (codes and nuclear data). The experimental programs defined in the CEA's ZPR facilities aim at improving the calculation routes by reducing the uncertainties of the experimental databases. They also provide accurate data on innovative systems in terms of new materials (moderating and decoupling materials) and new concepts (ADS, ABWR, new MTR (e.g. JHR), GENIV) involving new fuels, absorbers and coolant materials. Conducting such experimental R and D programs rests on determining and measuring the main parameters of the phenomena of interest in order to qualify calculation tools and nuclear data 'libraries'. Determining these parameters relies on the use of numerous different experimental techniques employing specific and appropriate instrumentation and detection tools. The main ZPR experimental programs at CEA, their objectives and challenges will be presented and discussed. Future developments and perspectives regarding ZPR reactors and associated programs will also be presented. (authors)

  11. Portable dynamic light scattering instrument and method for the measurement of blood platelet suspensions

    International Nuclear Information System (INIS)

    Maurer-Spurej, Elisabeth; Brown, Keddie; Labrie, Audrey; Marziali, Andre; Glatter, Otto

    2006-01-01

    No routine test exists to determine the quality of blood platelet transfusions, although every year millions of patients require platelet transfusions to survive cancer chemotherapy, surgery or trauma. A new, portable dynamic light scattering instrument is described that is suitable for the measurement of turbid solutions of large particles under temperature-controlled conditions. The challenges of small sample size, a short light path through the sample and accurate temperature control have been solved with a specially designed temperature-controlled sample holder for small-diameter, disposable capillaries. Efficient heating and cooling is achieved with Peltier elements in direct contact with the sample capillary. Focusing optical fibres are used for light delivery and collection of scattered light. The practical use of this new technique was shown by the reproducible measurement of latex microspheres and the temperature-induced morphological changes of human blood platelets. The measured parameters for platelet transfusions are platelet size, the number of platelet-derived microparticles and the response of platelets to temperature changes. This three-dimensional analysis provides a high degree of confidence for the determination of platelet quality. The experimental data are compared to a matrix and facilitate automated, unbiased quality testing.

  12. Method of exchanging cables of neutron monitoring instrumentation tube and folding device of the cable

    International Nuclear Information System (INIS)

    Sakamaki, Kazuo.

    1990-01-01

    In a BWR type reactor, a wide range monitor (WRNM) is used instead of a conventional neutron source range monitor (SRM) or an intermediate range monitor (IRM). Unlike a conventional monitor, the WRNM is always fixed at a predetermined position in the reactor core, with its detection section contained in a dry tube. Accordingly, driving devices for the conventional detection sections, such as those of the SRM and IRM, are not necessary; however, when the reactor is operated for a long period of time, it sometimes becomes necessary to replace the WRNM with a new one. According to the present invention, the cable of the detector placed in a neutron instrumentation tube is connected to a cable take-up drum in a take-up device passing through a cask. The cable is then taken up by driving the take-up drum with a driving motor, and the WRNM detection section attached to the top end of the cable is contained in the cask. With this constitution, replacement and handling of the detection section are facilitated and the operator's exposure dose can be reduced. (I.S.)

  13. Instrumentation device at the outside of reactor and method of using the same

    International Nuclear Information System (INIS)

    Ichige, Masayuki.

    1997-01-01

    The present invention provides an instrumentation device located outside the reactor capable of measuring in-core conditions, such as reactor power, void distribution or water level, while considering the hysteresis of neutrons or γ-rays travelling from the inside to the outside of the reactor. Namely, a plurality of radiation detectors elongated in the vertical direction are disposed at predetermined distances around the outer circumference of a reactor pressure vessel. The detectors measure the intensity of the radiation and the detection time at a plurality of positions outside the reactor. An amplifier amplifies the detected signals. A signal processing device determines the positions and times of the emitted radiation based on the amplified signals. An analysis device analyzes the spatial and time distributions of the energy and intensity of the radiation (neutrons or γ-rays) based on the signals of predetermined radiation outside the reactor. Then, spatial and temporal variation components, together with the power distribution, water level, change of water level and void distribution, are calculated while considering the decay of the radiation, based on the distribution of material densities of the in-core structures. (I.S.)

  14. Application of instrumented microhardness method to follow the thermal ageing of cast duplex stainless steel

    International Nuclear Information System (INIS)

    Rezakhanlou, R.; Massoud, J.P.

    1993-03-01

    During the thermal ageing of cast duplex stainless steel, the hardness of the ferrite increases markedly. Measurement of the ferrite phase hardness can therefore give an indication of the extent of the ageing process, but to obtain a representative value of the ferrite hardness the applied load must be low enough. For this reason, we used the instrumented microhardness (IMH) test, which consists of continuously measuring the applied load and the indentation depth during the operation. Mechanical analysis of the resulting indentation curve allows the hardness and Young's modulus of the indented material to be calculated for loads as low as 2 g. The results confirm the Vickers microhardness measurements under 50 g loads, i.e. a sharp increase of the ferrite hardness (x 2.3 compared to the as-received state) for the highly aged sample. It should be noted that the results obtained with the IMH are completely independent of the operator. (authors). 18 refs., 7 figs., 6 tabs

  15. Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis

    Directory of Open Access Journals (Sweden)

    Ueki Masao

    2012-05-01

    Full Text Available Abstract Background Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to human complex diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), suffers from the difficulty of setting an overall p-value due to the complicated correlation structure, namely, the multiple testing problem that causes unacceptable false negative results. The number of SNP-SNP pairs, far larger than the sample size (the so-called large p, small n problem), also precludes simultaneous analysis using multiple regression. A method that overcomes the above issues is thus needed. Results We adopt an up-to-date method for ultrahigh-dimensional variable selection, termed sure independence screening (SIS), for appropriate handling of the enormous number of SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods and a subsequent variable selection procedure in the SIS method suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case-Control Consortium) data. Conclusions Based on the machine-learning principle, the proposed method gives a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
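
    The core of sure independence screening is just a marginal ranking of candidate predictors. A toy version for SNP-pair interaction terms follows; the plain product coding and the absolute-correlation score are simplifications standing in for the paper's dummy codings and GPU implementation.

    ```python
    import numpy as np
    from itertools import combinations

    def sis_rank_pairs(genotypes, y, keep=1000):
        """Rank SNP-SNP interaction terms by marginal association.

        genotypes : (n_samples, n_snps) array of 0/1/2 genotype codes
        y         : (n_samples,) binary phenotype
        Returns the `keep` top-ranked pairs, to be passed on to a
        subsequent variable selection step (e.g. logistic regression).
        """
        X = np.asarray(genotypes, dtype=float)
        yc = np.asarray(y, dtype=float)
        yc = yc - yc.mean()
        scores = {}
        for i, j in combinations(range(X.shape[1]), 2):
            z = X[:, i] * X[:, j]  # product coding of the pair
            zc = z - z.mean()
            denom = np.sqrt((zc ** 2).sum() * (yc ** 2).sum())
            scores[(i, j)] = abs((zc * yc).sum() / denom) if denom > 0 else 0.0
        return sorted(scores, key=scores.get, reverse=True)[:keep]
    ```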

  16. Methods and Instruments for the Estimation of Production Changes in Economic Evaluations

    NARCIS (Netherlands)

    Hassink, W.H.J.; van der Berg, B.

    2017-01-01

    This chapter focuses on the indirect costs of paid work that result from mental illness. It provides an overview of monetary valuation methods and approaches to measure and value production gains and losses. The methods are applied to mental illness, although they have also been applied to other

  17. Cumulative Mass and NIOSH Variable Lifting Index Method for Risk Assessment: Possible Relations.

    Science.gov (United States)

    Stucchi, Giulia; Battevi, Natale; Pandolfi, Monica; Galinotti, Luca; Iodice, Simona; Favero, Chiara

    2018-02-01

    Objective The aim of this study was to explore whether the Variable Lifting Index (VLI) can be corrected for cumulative mass and thus to test its efficacy in predicting the risk of low-back pain (LBP). Background A validation study of the VLI method was published in this journal reporting promising results. Although several studies have highlighted a positive correlation between cumulative load and LBP, cumulative mass has never been considered in any of the studies investigating the relationship between manual material handling and LBP. Method Both VLI and cumulative mass were calculated for 2,374 exposed subjects using a systematic approach. Due to the high variability of cumulative mass values, a stratification within VLI categories was employed. Dummy variables (1-4) were assigned to each class and used as a multiplier factor for the VLI, resulting in a new index (VLI_CMM); a sketch of this composite index follows the abstract. Data on LBP were collected by occupational physicians at the study sites. Logistic regression was used to estimate the risk of acute LBP within levels of risk exposure when compared with a control group of 1,028 unexposed subjects. Results Data showed greatly variable values of cumulative mass across all VLI classes. The potential effect of cumulative mass on damage emerged as not significant (p = .6526). Conclusion When compared with the raw VLI, VLI_CMM failed to prove itself a better predictor of LBP risk. Application To recognize cumulative mass as a modifier, especially for lumbar degenerative spine diseases, future studies should investigate the potential association between the VLI and other damage variables.
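
    The composite index itself is a simple multiplication. In the sketch below the cumulative-mass cut-points are placeholders, since the paper stratifies cumulative mass within VLI categories on its own data:

    ```python
    def vli_cmm(vli, cumulative_mass_kg, cuts=(5_000, 15_000, 30_000)):
        """VLI corrected for cumulative mass: a dummy class (1-4), derived
        from the cumulative-mass stratum, multiplies the raw VLI.
        The cut-points above are illustrative placeholders only."""
        klass = 1 + sum(cumulative_mass_kg > c for c in cuts)  # class 1..4
        return vli * klass
    ```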

  18. Method of collective variables with reference system for the grand canonical ensemble

    International Nuclear Information System (INIS)

    Yukhnovskii, I.R.

    1989-01-01

    A method of collective variables with a special reference system for the grand canonical ensemble is presented. An explicit form is obtained for the basis sixth-degree measure density needed to describe the liquid-gas phase transition. The author presents the fundamentals of the method, which are as follows: (1) the functional form for the partition function in the grand canonical ensemble; (2) derivation of thermodynamic relations for the coefficients of the Jacobian; (3) transition to the problem on an adequate lattice; and (4) derivation of the explicit form for the functional of the partition function.

  19. Application of Muskingum routing method with variable parameters in ungauged basin

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2011-03-01

    Full Text Available This paper describes a flood routing method applied in an ungauged basin, utilizing the Muskingum model with variable parameters, the wave travel time K and the discharge weight coefficient x, based on the physical characteristics of the river reach and the flood, including reach slope, length, width, and flood discharge. Three formulas for estimating the parameters for wide rectangular, triangular, and parabolic cross sections are proposed. The influence of the flood on the channel flow routing parameters is taken into account. The HEC-HMS hydrological model and the geospatial hydrologic analysis module HEC-GeoHMS were used to extract channel and watershed characteristics and to divide sub-basins. In addition, the initial and constant-rate method, the user synthetic unit hydrograph method, and the exponential recession method were used to estimate runoff volumes, the direct runoff hydrograph, and the baseflow hydrograph, respectively. The Muskingum model with variable parameters was then applied in the Louzigou Basin in Henan Province, China. Across 24 flood events, the percentages of events with a relative error of peak discharge less than 20% and of runoff volume less than 10% were both 100%, and the percentages of events with coefficients of determination greater than 0.8 were 83.33%, 91.67%, and 87.5% for rectangular, triangular, and parabolic cross sections, respectively. Therefore, this method is applicable to ungauged basins.
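
    For reference, the Muskingum recursion itself is compact; in the variable-parameter form, K and x would be recomputed at each step from the reach geometry and discharge using the paper's formulas, which are not reproduced here.

    ```python
    def muskingum_route(inflow, K, x, dt):
        """Route an inflow hydrograph through a reach (Muskingum method).

        inflow : inflows I_t (m^3/s) sampled at interval dt (h)
        K, x   : wave travel time (h) and discharge weight coefficient
        """
        denom = 2 * K * (1 - x) + dt
        c0 = (dt - 2 * K * x) / denom
        c1 = (dt + 2 * K * x) / denom
        c2 = (2 * K * (1 - x) - dt) / denom  # note c0 + c1 + c2 == 1
        outflow = [inflow[0]]                # assume O_1 = I_1
        for i_prev, i_now in zip(inflow, inflow[1:]):
            outflow.append(c0 * i_now + c1 * i_prev + c2 * outflow[-1])
        return outflow
    ```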

  20. Instrumental charged-particle activation analysis of several selected elements in biological materials using the internal standard method

    International Nuclear Information System (INIS)

    Yagi, M.; Masumoto, K.

    1987-01-01

    In order to study instrumental charged-particle activation analysis using the internal standard method, simultaneous determinations of several selected elements, such as Ca, Ti, V, Fe, Zn, As, Sr, Zr and Mo, in oyster tissue, brewer's yeast and mussel were carried out using the respective (p, n) reactions and a personal-computer-based gamma-ray spectrometer equipped with a micro-robot for sample changing. In the determination, constant amounts of Y and La were added to the sample and to the comparative standard as exotic internal standards. As a result, it was demonstrated that concentrations of the above elements could be determined accurately and precisely. (author)

  1. Using cognitive pre-testing methods in the development of a new evidenced-based pressure ulcer risk assessment instrument

    Directory of Open Access Journals (Sweden)

    S. Coleman

    2016-11-01

    Full Text Available Abstract Background Variation in the development methods of pressure ulcer risk assessment instruments has led to inconsistent inclusion of risk factors and concerns about content validity. A new evidence-based risk assessment instrument, the Pressure Ulcer Risk Primary Or Secondary Evaluation Tool (PURPOSE-T), was developed as part of a National Institute for Health Research (NIHR) funded Pressure Ulcer Research Programme (PURPOSE: RP-PG-0407-10056). This paper reports the pre-test phase to assess and improve PURPOSE-T acceptability and usability and to confirm content validity. Methods A descriptive study incorporating cognitive pre-testing methods and integration of service user views was undertaken over 3 cycles comprising PURPOSE-T training, a focus group and one-to-one think-aloud interviews. Clinical nurses from 2 acute and 2 community NHS Trusts were grouped according to job role. Focus group participants used 3 vignettes to complete PURPOSE-T assessments and then participated in the focus group. Think-aloud participants were interviewed during their completion of PURPOSE-T. After each pre-test cycle, analysis was undertaken and adjustments/improvements were made to PURPOSE-T in an iterative process. This incorporated the use of descriptive statistics for data completeness and decision rule compliance, and directed content analysis for interview and focus group data. Data were collected April 2012-June 2012. Results Thirty-four nurses participated in 3 pre-test cycles. Data from 3 focus groups and 12 think-aloud interviews, incorporating 101 PURPOSE-T assessments, led to changes that improved instrument content and design, flow and format, decision support and item-specific wording. Acceptability and usability were demonstrated by improved data completion and appropriate risk pathway allocation. The pre-test also confirmed content validity with clinical nurses. Conclusions The pre-test was an important step in the development of the preliminary PURPOSE-T and the

  2. Characteristic and Competency Measurement Instrument Development for Maintenance Staff of Mechanical Expertise with SECI Method: A Case of Manufacturing Company

    Science.gov (United States)

    Mahatmavidya, P. A.; Soesanto, R. P.; Kurniawati, A.; Andrawina, L.

    2018-03-01

    Human resources are an important factor for a company to gain competitiveness; therefore the competencies of each individual in a company are a basic characteristic to be taken into account. Increasing employee competency directly affects company performance. The purpose of this research is to improve the quality of the human resources of maintenance staff in a manufacturing company by designing a competency measurement instrument that aims to assess employee competency. The focus of this research is the mechanical expertise of maintenance staff. The SECI method is used in this research for managing the knowledge held by senior employees regarding the competencies required for mechanical expertise. The SECI method converts a person's tacit knowledge into explicit knowledge so that the knowledge can be used by others. The knowledge gathered from the SECI method is converted into a list of competencies and broken down into detailed competencies. Based on the results of this research, 11 general competencies, 17 distinctive competencies, 20 indicators, and a 20-item list for assessing the competencies were developed. From the competency breakdown, a five-level measurement instrument was designed to assist in assessing employees' competency in mechanical expertise.

  3. Instrumentation and method for measuring NIR light absorbed in tissue during MR imaging in medical NIRS measurements

    Science.gov (United States)

    Myllylä, Teemu S.; Sorvoja, Hannu S. S.; Nikkinen, Juha; Tervonen, Osmo; Kiviniemi, Vesa; Myllylä, Risto A.

    2011-07-01

    Our goal is to provide a cost-effective method for examining human tissue, particularly the brain, by the simultaneous use of functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS). Due to its compatibility requirements, MRI poses a demanding challenge for NIRS measurements. This paper focuses particularly on presenting the instrumentation and a method for the non-invasive measurement of NIR light absorbed in human tissue during MR imaging. One practical way to avoid disturbances in MR imaging involves using long fibre bundles so that the measurements can be conducted at some distance from the MRI scanner. This setup in fact serves a dual purpose, since the NIRS device is also less disturbed by the MRI scanner. However, measurements based on long fibre bundles suffer from light attenuation. Furthermore, because one of our primary goals was to make the measuring method as cost-effective as possible, we used high-power light-emitting diodes instead of more expensive lasers. The use of LEDs, however, limits the maximum output power which can be extracted to illuminate the tissue. To meet these requirements, we improved methods of emitting light sufficiently deep into tissue. We also show how to measure NIR light of a very small power level that scatters from the tissue in the MRI environment, which is characterized by strong electromagnetic interference. In this paper, we present the implemented instrumentation and measuring method and report on test measurements conducted during MRI scanning. These measurements were performed in MRI operating rooms housing 1.5-Tesla closed MRI scanners (manufactured by GE) in the Dept. of Diagnostic Radiology at Oulu University Hospital.

  4. Assessing data quality and the variability of source data verification auditing methods in clinical research settings.

    Science.gov (United States)

    Houston, Lauren; Probst, Yasmine; Martin, Allison

    2018-05-18

    Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations and ensure high-quality data. However, clinical trial guidelines are non-specific with regard to the recommended frequency, timing and nature of data audits. The absence of a well-defined data quality definition and of a method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods used to monitor data quality in clinical research settings. The scientific databases MEDLINE, Scopus and Science Direct were searched for English-language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using an SDV auditing method. In total, 15 publications were included. The nature and extent of the SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%) and collection frequency (6-24 months). Methods for coding, classifying and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main sources of reported error. Repeated SDV audits using the same dataset demonstrated ∼40% improvement in data accuracy and completeness over time. No description was given with regard to what determines poor data quality in clinical trials. A wide range of SDV auditing methods are reported in the published literature, though no uniform SDV auditing method could be determined for "best practice" in clinical trials. Published audit methodology articles are warranted for the development of a standardised SDV auditing method to monitor data quality in clinical research settings. Copyright © 2018. Published by Elsevier Inc.

  5. Reconstructing the Surface Permittivity Distribution from Data Measured by the CONSERT Instrument aboard Rosetta: Method and Simulations

    Science.gov (United States)

    Plettemeier, D.; Statz, C.; Hegler, S.; Herique, A.; Kofman, W. W.

    2014-12-01

    One of the main scientific objectives of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) aboard Rosetta is to perform a dielectric characterization of the nucleus of comet 67P/Churyumov-Gerasimenko by means of bi-static sounding between the lander Philae, launched onto the comet's surface, and the orbiter Rosetta. For the sounding, the lander part of CONSERT will receive and process the radio signal emitted by the orbiter part of the instrument and transmit a signal back to the orbiter to be received by CONSERT. CONSERT will also be operated as a bi-static RADAR during the descent of the lander Philae onto the comet's surface. From data measured during the descent, we aim at reconstructing a surface permittivity map of the comet at the landing site and along the path below the descent trajectory. This surface permittivity map will give information on the bulk material right below and around the landing site and on the surface roughness in the areas covered by the instrument along the descent. The proposed method to estimate the surface permittivity distribution is based on a least-squares inversion approach in the frequency domain. The direct problem of simulating the wave propagation between lander and orbiter at line-of-sight, together with the signal reflected off the comet's surface, is modelled using a dielectric physical optics approximation. Restrictions on the measurement positions imposed by the descent orbitography and limitations on the instrument dynamic range are dealt with by a regularization technique in which the surface permittivity distribution and the gradient with regard to the permittivity are projected into a domain defined by a viable model of the spatial material and roughness distribution. The least-squares optimization step of the reconstruction is performed in this domain on a reduced set of parameters, yielding stable results. The viability of the proposed method is demonstrated by reconstruction results based on simulated data.
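
    In skeleton form, each iteration of such a regularized least-squares reconstruction solves a damped normal-equation system. The sketch below is generic Tikhonov-regularized Gauss-Newton, a stand-in for the paper's projected, reduced-parameter optimization; J, r and alpha are placeholders for the CONSERT-specific forward model and weighting.

    ```python
    import numpy as np

    def gauss_newton_step(J, r, eps_current, alpha=1.0):
        """One damped update of the permittivity vector.

        J           Jacobian of the simulated signal w.r.t. permittivity
        r           residual (measured minus simulated), possibly complex
        eps_current current permittivity estimate (real-valued here)
        Minimizes ||r + J d||^2 + alpha ||d||^2 for the update d.
        """
        JhJ = J.conj().T @ J
        rhs = -J.conj().T @ r
        d = np.linalg.solve(JhJ + alpha * np.eye(JhJ.shape[0]), rhs)
        return eps_current + d.real  # keep a real-valued permittivity map
    ```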

  6. Laboratory Evaluation of Air Flow Measurement Methods for Residential HVAC Returns for New Instrument Standards

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Stratton, Chris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-08-01

    This project improved the accuracy of air flow measurements used in commissioning California heating and air conditioning systems in Title 24 (Building and Appliance Efficiency Standards), thereby improving system performance and efficiency of California residences. The research team at Lawrence Berkeley National Laboratory addressed the issue that typical tools used by contractors in the field to test air flows may not be accurate enough to measure return flows used in Title 24 applications. The team developed guidance on performance of current diagnostics as well as a draft test method for use in future evaluations. The study team prepared a draft test method through ASTM International to determine the uncertainty of air flow measurements at residential heating ventilation and air conditioning returns and other terminals. This test method, when finalized, can be used by the Energy Commission and other entities to specify required accuracy of measurement devices used to show compliance with standards.

  7. Aespoe Hard Rock Laboratory. Characterisation methods and instruments. Experiences from the construction phase

    International Nuclear Information System (INIS)

    Almen, Karl-Erik; Stenberg, Leif

    2005-12-01

    This report describes the different investigation methods used during the Aespoe HRL construction phase, which commenced in 1990 and ended in 1995. The investigation methods are described with respect to performance, errors, uncertainty and usefulness of the determined, analysed and/or calculated parameter values or other kinds of geoscientific information. Moreover, other comments on the different methods, such as those related to the practical performance of the measurements or tests, are given. The practical performance is a major consideration, as most of the investigations were conducted in parallel with the construction work. Much of the wide range of investigations carried out during the tunnelling work required special efforts from the personnel involved. Experiences and comments on these operations are presented in the report. The pre-investigation methods have been evaluated by comparing predictions based on pre-investigation models with data and results from the construction phase and updated geoscientific models. In 1997, a package of reports described the general results of the pre-investigations. The investigation methods are in this report evaluated with respect to their usefulness for underground characterisation of a rock volume, concerning geological, geohydrological, hydrochemical and rock mechanical properties. The report describes our opinion of the methods after the construction phase, i.e. from the same platform of knowledge as the 1997 package of reports. The evaluation of the usefulness of the underground investigation methods is structured according to the key issues used for the pre-investigation modelling and predictions, i.e. the geological-structural model, groundwater flow (hydrogeology), groundwater chemistry (hydrochemistry), transport of solutes, and mechanical stability models (or rock mechanics). The investigation methods selected for the different subjects for which the predictions were made are presented. Some of the subjects were slightly modified or adjusted during

  8. An instrument for small-animal imaging using time-resolved diffuse and fluorescence optical methods

    International Nuclear Information System (INIS)

    Montcel, Bruno; Poulet, Patrick

    2006-01-01

    We describe time-resolved optical methods that use diffuse near-infrared photons to image the optical properties of tissues and their inner fluorescent probe distribution. The assembled scanner uses picosecond laser diodes at 4 wavelengths, an 8-anode photo-multiplier tube and time-correlated single photon counting. Optical absorption and reduced scattering images as well as fluorescence emission images are computed from temporal profiles of diffuse photons. This method should improve the spatial resolution and the quantification of fluorescence signals. We used the diffusion approximation of the radiation transport equation and the finite element method to solve the forward problem. The inverse problem is solved with an optimization algorithm such as ART or conjugate gradient. The scanner and its performances are presented, together with absorption, scattering and fluorescent images obtained with it

  10. Modeling the solute transport by particle-tracing method with variable weights

    Science.gov (United States)

    Jiang, J.

    2016-12-01

    Particle-tracing methods are usually used to simulate solute transport in fracture media. In such a method, the concentration at a point is proportional to the number of particles visiting that point. However, the method is rather inefficient at points with small concentration: few particles visit them, which leads to violent oscillations or yields zero concentration. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The weight factors are adjusted during the simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies and each copy is simulated independently with weight W/Int(W/C). If the weight W is less than C, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting weights in this way, the number of visiting particles is distributed evenly over the whole range. Through this variable-weight scheme, we can eliminate the violent oscillations and increase the accuracy by orders of magnitude.
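
    The weight-adjustment rule quoted above translates almost line for line into code; here C is the current relative-concentration estimate at the walker's site, and the particle-list layout is our own assumption.

    ```python
    import random

    def adjust_weight(weight, C, particles, position):
        """Splitting / Russian-roulette step of the variable-weight walker.

        If the walker's weight W exceeds the local relative concentration C,
        split it into Int(W/C) copies of equal weight; if W is below C, keep
        the walker alive with probability W/C and reset its weight to C.
        `particles` is the list of (position, weight) walkers still tracked.
        """
        if weight > C:
            n = int(weight / C)
            for _ in range(n):
                particles.append((position, weight / n))  # split evenly
        elif random.random() < weight / C:
            particles.append((position, C))               # survives at weight C
        # otherwise the walker is terminated (not re-appended)
    ```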

  11. New teaching methods in use at UC Irvine's optical engineering and instrument design programs

    Science.gov (United States)

    Silberman, Donn M.; Rowe, T. Scott; Jo, Joshua; Dimas, David

    2012-10-01

    New teaching methods reach geographically dispersed students through advances in distance education. Capabilities include a new "Hybrid" teaching method with an instructor in a classroom and a live WebEx simulcast for remote students. Our distance education Geometric and Physical Optics courses include hands-on optics experiments. Low-cost laboratory kits have been developed, and YouTube-style video recordings of the instructor using these tools guide the students through their labs. A weekly "Office Hour" has been developed using WebEx and a live webcam that the instructor uses to display his live writing from his notebook while answering students' questions.

  12. Elastic Stress Analysis of Rotating Functionally Graded Annular Disk of Variable Thickness Using Finite Difference Method

    Directory of Open Access Journals (Sweden)

    Mohammad Hadi Jalali

    2018-01-01

    Full Text Available Elastic stress analysis of a rotating variable-thickness annular disk made of functionally graded material (FGM) is presented. Elasticity modulus, density, and thickness of the disk are assumed to vary radially according to a power-law function. Radial stress, circumferential stress, and radial deformation of the rotating FG annular disk of variable thickness with clamped-clamped (C-C), clamped-free (C-F), and free-free (F-F) boundary conditions are obtained using the numerical finite difference method, and the effects of the graded index, thickness variation, and rotating speed on the stresses and deformation are evaluated. It is shown that using FG material could decrease the value of radial stress and increase the radial displacement in a rotating thin disk. It is also demonstrated that increasing the rotating speed can strongly increase the stress in the FG annular disk.

  13. Use of instrumental nuclear activation methods in the study of particles from major air pollution sources

    International Nuclear Information System (INIS)

    Gordon, G.E.; Zoller, W.H.; Gladney, E.S.; Greenberg, R.R.

    1974-01-01

    Nuclear methods have been used effectively in the study of particles emitted by a coal-fired power plant and a municipal incinerator. In the coal-fired plant there is appreciable fractionation of only five of the observed elements. By contrast, particles from the incinerator are highly enriched in several trace elements

  14. Nuclear medicine and imaging research. Instrumentation and quantitative methods of evaluation. Progress report, January 15, 1985-January 14, 1986

    International Nuclear Information System (INIS)

    Beck, R.N.; Cooper, M.D.

    1985-09-01

    This program of research addresses problems involving the basic science and technology of radioactive tracer methods as they relate to nuclear medicine and imaging. The broad goal is to develop new instruments and methods for image formation, processing, quantitation, and display, so as to maximize the diagnostic information per unit of absorbed radiation dose to the patient. These developments are designed to meet the needs imposed by new radiopharmaceuticals developed to solve specific biomedical problems, as well as to meet the instrumentation needs associated with radiopharmaceutical production and quantitative clinical feasibility studies of the brain with PET VI. Project I addresses problems associated with the quantitative imaging of single-photon emitters; Project II addresses similar problems associated with the quantitative imaging of positron emitters; Project III addresses methodological problems associated with the quantitative evaluation of the efficacy of diagnostic imaging procedures. The original proposal covered work to be carried out over the three-year contract period. This report covers progress made during Year Three. 36 refs., 1 tab

  15. Higuchi’s Method applied to detection of changes in timbre of digital sound synthesis of string instruments with the functional transformation method

    Science.gov (United States)

    Kanjanapen, Manorth; Kunsombat, Cherdsak; Chiangga, Surasak

    2017-09-01

    The functional transformation method (FTM) is a powerful tool for the detailed investigation of digital sound synthesis by physical modeling: it directly solves the underlying partial differential equation (PDE) and yields the resulting sound, or the vibrational characteristics, at discretized points of the modeled string instrument. In this paper, we present Higuchi's method to examine differences in timbre and to estimate the fractal dimension of musical signals synthesized by the FTM, a quantity which carries information about their geometrical structure. With Higuchi's method the whole process is uncomplicated and fast, and the analysis can be performed without expertise in physics or virtuoso musicianship, making it an easy way for ordinary listeners to judge whether sounds are similar.
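
    Higuchi's estimator itself is short and standard; the sketch below follows the usual 1988 formulation (the choice of kmax and the input signal are up to the user).

    ```python
    import numpy as np

    def higuchi_fd(x, kmax=10):
        """Fractal dimension of a 1-D signal by Higuchi's method."""
        x = np.asarray(x, dtype=float)
        n = x.size
        log_inv_k, log_lk = [], []
        for k in range(1, kmax + 1):
            lengths = []
            for m in range(k):
                sub = x[m::k]  # the curve sampled with lag k, offset m
                if sub.size < 2:
                    continue
                # normalized curve length of the subsampled series
                lengths.append(np.abs(np.diff(sub)).sum()
                               * (n - 1) / ((sub.size - 1) * k) / k)
            log_inv_k.append(np.log(1.0 / k))
            log_lk.append(np.log(np.mean(lengths)))
        # the fractal dimension is the slope of log L(k) vs log(1/k)
        slope, _ = np.polyfit(log_inv_k, log_lk, 1)
        return slope
    ```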

  16. Lung lesion doubling times: values and variability based on method of volume determination

    International Nuclear Information System (INIS)

    Eisenbud Quint, Leslie; Cheng, Joan; Schipper, Matthew; Chang, Andrew C.; Kalemkerian, Gregory

    2008-01-01

    Purpose: To determine doubling times (DTs) of lung lesions based on volumetric measurements from thin-section CT imaging. Methods: Previously untreated patients with ≥ two thin-section CT scans showing a focal lung lesion were identified. Lesion volumes were derived using direct volume measurements and volume calculations based on lesion area and diameter. Growth rates (GRs) were compared by tissue diagnosis and measurement technique. Results: 54 lesions were evaluated including 8 benign lesions, 10 metastases, 3 lymphomas, 15 adenocarcinomas, 11 squamous carcinomas, and 7 miscellaneous lung cancers. Using direct volume measurements, median DTs were 453, 111, 15, 181, 139 and 137 days, respectively. Lung cancer DTs ranged from 23-2239 days. There were no significant differences in GRs among the different lesion types. There was considerable variability among GRs using different volume determination methods. Conclusions: Lung cancer doubling times showed a substantial range, and different volume determination methods gave considerably different DTs
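
    Doubling times from two volumetric measurements follow the usual exponential-growth formula; a small helper (the function name and example values are ours):

    ```python
    import math

    def doubling_time_days(v1_mm3, v2_mm3, interval_days):
        """Lesion doubling time assuming exponential growth:
        DT = dt * ln(2) / ln(V2 / V1). Negative values indicate shrinkage."""
        return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

    # e.g. a nodule growing from 500 to 750 mm^3 over 90 days:
    print(round(doubling_time_days(500, 750, 90)))  # ~154 days
    ```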

  17. A variable pressure method for characterizing nanoparticle surface charge using pore sensors.

    Science.gov (United States)

    Vogel, Robert; Anderson, Will; Eldridge, James; Glossop, Ben; Willmott, Geoff

    2012-04-03

    A novel method using resistive pulse sensors for electrokinetic surface charge measurements of nanoparticles is presented. This method involves recording the particle blockade rate while the pressure applied across a pore sensor is varied. This applied pressure acts in a direction which opposes transport due to the combination of electro-osmosis, electrophoresis, and inherent pressure. The blockade rate reaches a minimum when the velocity of nanoparticles in the vicinity of the pore approaches zero, and the forces on typical nanoparticles are in equilibrium. The pressure applied at this minimum rate can be used to calculate the zeta potential of the nanoparticles. The efficacy of this variable pressure method was demonstrated for a range of carboxylated 200 nm polystyrene nanoparticles with different surface charge densities. Results were of the same order as phase analysis light scattering (PALS) measurements. Unlike PALS results, the sequence of increasing zeta potential for different particle types agreed with conductometric titration.

  18. THE QUADRANTS METHOD TO ESTIMATE QUANTITATIVE VARIABLES IN MANAGEMENT PLANS IN THE AMAZON

    Directory of Open Access Journals (Sweden)

    Gabriel da Silva Oliveira

    2015-12-01

    Full Text Available This work aimed to evaluate the accuracy of estimates of abundance, basal area and commercial volume per hectare obtained by the quadrants method applied to an area of 1,000 hectares of rain forest in the Amazon. Samples were simulated by random and systematic processes with different sample sizes, ranging from 100 to 200 sampling points. The values estimated from the samples were compared with the parametric values recorded in the census. In the analysis, we considered as the population all trees with diameter at breast height equal to or greater than 40 cm. The quadrants method did not reach the desired level of accuracy for the variables basal area and commercial volume, overestimating the values recorded in the census. However, the accuracy of the estimates of abundance, basal area and commercial volume was satisfactory for applying the method in forest inventories for management plans in the Amazon.
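
    For orientation, the classical point-centred quarter (quadrants) estimators can be written in a few lines. This is the textbook Cottam-Curtis form, which may differ in detail from the variant evaluated in the paper; the input layout is our assumption.

    ```python
    import numpy as np

    def quadrant_estimates(distances_m, dbh_cm):
        """Quadrants (point-centred quarter) estimates per hectare.

        distances_m : (n_points, 4) distances from each sampling point to
                      the nearest qualifying tree in each quadrant (m)
        dbh_cm      : diameters at breast height of those same trees (cm)
        Returns (trees per ha, basal area in m^2 per ha).
        """
        d = np.asarray(distances_m, dtype=float)
        density_ha = 10_000.0 / d.mean() ** 2  # Cottam-Curtis density
        basal_area_tree = np.pi * (np.asarray(dbh_cm, dtype=float) / 200.0) ** 2
        return density_ha, density_ha * basal_area_tree.mean()
    ```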

  19. A new instrumental method for the analysis of rare earth elements

    International Nuclear Information System (INIS)

    Santos, A.N. dos.

    1975-01-01

    A method for the simultaneous elemental analysis of the rare earths is proposed and empirically verified. It is based on the analysis of the escape peaks generated by the characteristic X-rays of these elements in a xenon proportional counter. The peaks are well resolved and intense, in contrast to the photopeak, which is lost in the background. The spectra are generated by a radioisotope such as Co-57, and the equipment is simple, portable and low-cost, although its resolution challenges that of the best solid-state detectors. Since X-rays are utilized, matrix, granulometric and mineralogical effects are minimal, and the method is rapid, sensitive and non-destructive and requires little or no sample preparation. The results are preliminary and an improvement in resolution of up to fourfold seems possible; precision is better than 0.1% in concentrated samples and the sensitivity is about 20 μg.

  20. INSTRUMENTS AND METHODS OF INVESTIGATION: Positron annihilation spectroscopy in materials structure studies

    Science.gov (United States)

    Grafutin, Viktor I.; Prokop'ev, Evgenii P.

    2002-01-01

    A relatively new method of materials structure analysis — positron annihilation spectroscopy (PAS) — is reviewed. Measurements of positron lifetimes, the determination of positron 3γ- and 2γ-annihilation probabilities, and an investigation of the effects of different external factors on the fundamental characteristics of annihilation constitute the basis for this promising method. The ways in which the positron annihilation process operates in ionic crystals, semiconductors, metals and some condensed matter systems are analyzed. The scope of PAS is described and its prospects for the study of the electronic and defect structures are discussed. The applications of positron annihilation spectroscopy in radiation physics and chemistry of various substances as well as in physics and chemistry of solutions are exemplified.

  1. Variability in CT lung-nodule volumetry: Effects of dose reduction and reconstruction methods.

    Science.gov (United States)

    Young, Stefano; Kim, Hyun J Grace; Ko, Moe Moe; Ko, War War; Flores, Carlos; McNitt-Gray, Michael F

    2015-05-01

    Measuring the size of nodules on chest CT is important for lung cancer staging and measuring therapy response. 3D volumetry has been proposed as a more robust alternative to 1D and 2D sizing methods. There have also been substantial advances in methods to reduce radiation dose in CT. The purpose of this work was to investigate the effect of dose reduction and reconstruction methods on variability in 3D lung-nodule volumetry. Reduced-dose CT scans were simulated by applying a noise-addition tool to the raw (sinogram) data from clinically indicated patient scans acquired on a multidetector-row CT scanner (Definition Flash, Siemens Healthcare). Scans were simulated at 25%, 10%, and 3% of the dose of their clinical protocol (CTDIvol of 20.9 mGy), corresponding to CTDIvol values of 5.2, 2.1, and 0.6 mGy. Simulated reduced-dose data were reconstructed with both conventional filtered backprojection (B45 kernel) and iterative reconstruction methods (SAFIRE: I44 strength 3 and I50 strength 3). Three lab technologist readers contoured "measurable" nodules in 33 patients under each of the different acquisition/reconstruction conditions in a blinded study design. Of the 33 measurable nodules, 17 were used to estimate repeatability with their clinical reference protocol, as well as interdose and inter-reconstruction-method reproducibilities. The authors compared the resulting distributions of proportional differences across dose and reconstruction methods by analyzing their means, standard deviations (SDs), and t-test and F-test results. The clinical-dose repeatability experiment yielded a mean proportional difference of 1.1% and SD of 5.5%. The interdose reproducibility experiments gave mean differences ranging from -5.6% to -1.7% and SDs ranging from 6.3% to 9.9%. The inter-reconstruction-method reproducibility experiments gave mean differences of 2.0% (I44 strength 3) and -0.3% (I50 strength 3), and SDs were identical at 7.3%. For the subset of repeatability cases, inter-reconstruction-method
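    The reported comparison of repeatability and reproducibility distributions can be mirrored with standard tools. The following sketch computes proportional volume differences and applies a t-test and an F-test, as the abstract describes; the paired volumes are synthetic and all parameter values are assumptions for illustration.

        import numpy as np
        from scipy import stats

        def proportional_diff(v_ref, v_new):
            """Proportional volume difference (%) between paired measurements."""
            v_ref, v_new = np.asarray(v_ref), np.asarray(v_new)
            return 100.0 * (v_new - v_ref) / v_ref

        rng = np.random.default_rng(0)
        ref = rng.uniform(200.0, 2000.0, size=17)                 # mm^3
        rep = ref * (1.0 + rng.normal(0.011, 0.055, size=17))     # repeatability arm
        red = ref * (1.0 + rng.normal(-0.03, 0.08, size=17))      # reduced-dose arm

        d_rep = proportional_diff(ref, rep)
        d_red = proportional_diff(ref, red)
        print("means:", d_rep.mean(), d_red.mean())
        print("SDs:  ", d_rep.std(ddof=1), d_red.std(ddof=1))

        # t-test for equal means of the two difference distributions
        t, p_t = stats.ttest_ind(d_rep, d_red, equal_var=False)
        # two-sided F-test for equal variances (ratio of sample variances)
        f = d_red.var(ddof=1) / d_rep.var(ddof=1)
        p_f = 2 * min(stats.f.sf(f, 16, 16), stats.f.cdf(f, 16, 16))
        print(f"t-test p={p_t:.3f}, F-test p={p_f:.3f}")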

  2. The Place of Nailfold Capillaroscopy Among Instrumental Methods for Assessment of Some Peripheral Ischaemic Syndromes in Rheumatology.

    Science.gov (United States)

    Lambova, Sevdalina N

    2016-01-01

    Micro- and macrovascular pathology is a frequent finding in a number of common rheumatic diseases. Secondary Raynaud's phenomenon (RP) is among the most common symptoms in systemic sclerosis and several other systemic autoimmune diseases, and it carries a broad differential diagnosis. It should also be differentiated from other peripheral vascular syndromes such as embolism and thrombosis, some of which lead to the clinical manifestation of the blue toe syndrome. The current review discusses the instrumental methods for vascular assessment. Nailfold capillaroscopy is the only imaging technique that allows morphological assessment of the nutritive capillaries in the nailfold area. Laser-Doppler flowmetry and laser-Doppler imaging are methods for functional assessment of microcirculation, while thermography and plethysmography reflect both blood flow in peripheral arteries and microcirculation. Doppler ultrasound and angiography visualize the peripheral arteries. The choice of the appropriate instrumental method is guided by the clinical presentation. The main role of capillaroscopy is to support the differential diagnosis between primary and secondary RP. In rheumatology, the capillaroscopic changes seen in systemic sclerosis have recently been recognized as diagnostic. The appearance of an abnormal capillaroscopic pattern carries a high positive predictive value for the development of a connective tissue disease, higher than the predictive value of antinuclear antibodies. In cases of abrupt onset of peripheral ischaemia, clinical signs of critical ischaemia, or unilateral or lower-limb involvement, Doppler ultrasound and angiography are indicated. The most common causes of such a clinical picture that may be referred for rheumatologic consultation are the antiphospholipid syndrome, mimickers of vasculitides such as atherosclerosis with cholesterol emboli, and neoplasms.

  3. The Effect of 4-week Difference Training Methods on Some Fitness Variables in Youth Handball Players

    Directory of Open Access Journals (Sweden)

    Abdolhossein a Parnow

    2016-09-01

    Handball is a team sport whose main activities involve sprinting, arm throwing, hitting, and similar actions. This Olympic team sport requires a standard of preparation in order to complete sixty minutes of competitive play and to achieve success. This study was therefore carried out to determine the effect of four weeks of different training methods on selected physical fitness variables in youth handball players. Thirty high-school students participated in the study and were assigned to the Resistance Training (RT; n = 10: 16.75 ± 0.36 yr; 63.14 ± 4.19 kg; 174.8 ± 5.41 cm), Plyometric Training (PT; n = 10: 16.57 ± 0.26 yr; 65.52 ± 6.79 kg; 173.5 ± 5.44 cm), and Complex Training (CT; n = 10: 16.23 ± 0.50 yr; 58.43 ± 10.50 kg; 175.2 ± 8.19 cm) groups. Subjects were evaluated for anthropometric and physiological characteristics 48 hours before and after the 4-week protocol. Statistical analyses consisted of repeated-measures ANOVA and one-way ANOVA. Considering the pre- to post-test changes in the groups, data analysis showed that BF, strength, speed, agility, and explosive power were affected by the training protocols (P<0.05). In conclusion, complex training had an advantageous effect on variables such as strength, explosive power, speed and agility in youth handball players compared with resistance and plyometric training, although positive effects of the other training methods were also observed. Coaches and players could therefore consider complex training as an alternative to other training methods.

  4. The use of instrumental neutron activation analysis method in bio-sorption determination

    International Nuclear Information System (INIS)

    Khamidova, Kh.M.; Mutavalieva, Z.S.; Muchamedshina, N.M.; Mirzagatova, A.A.

    2005-01-01

    Full text: Recently, much attention has been paid to the research and development of effective metal remediation methods. In industry, the removal of metals from industrial solutions and wastes currently relies on the expensive ion-exchange resin method of metal sorption. Microbiological methods are much less expensive, are readily available, and can be applied in natural conditions. A search for a molybdenum bio-sorbent was performed among Actinomyces strains; 18 Streptomyces strains were used. The data showed that all investigated strains take up molybdenum from solution to various degrees. The molybdenum determination was performed using the neutron activation analysis technique: in a nuclear reactor, the samples were irradiated in a steady neutron flux of 5.1·10¹³ n·cm⁻²·s⁻¹ for 20 hours, and were then stored for 6-7 days before analysis. The Actinomyces biomass uptake capacity was up to 94.5%. Eight cultures had the highest uptake capacities, varying from 87.4 to 94.5%. Streptomyces sp. 39 and Streptomyces sp. 32 had the lowest bio-sorption capacities among the studied strains, 46.6% and 40% respectively, whereas the bio-sorption capacities of the other cultures varied from 55.8 to 64.1%. The influence of some physical and chemical parameters (culture age, pH, temperature) on molybdenum bio-sorption was studied; the data showed that changes in pH, temperature and cultivation period led to an increase in bio-sorption capacity.

  5. Control system and method for a power delivery system having a continuously variable ratio transmission

    Science.gov (United States)

    Frank, Andrew A.

    1984-01-01

    A control system and method for a power delivery system, such as in an automotive vehicle, having an engine coupled to a continuously variable ratio transmission (CVT). Totally independent control of engine and transmission enable the engine to precisely follow a desired operating characteristic, such as the ideal operating line for minimum fuel consumption. CVT ratio is controlled as a function of commanded power or torque and measured load, while engine fuel requirements (e.g., throttle position) are strictly a function of measured engine speed. Fuel requirements are therefore precisely adjusted in accordance with the ideal characteristic for any load placed on the engine.
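    The decoupled control structure described, ratio driven by power demand and load, fueling driven only by engine speed, can be sketched as below. This is a hedged illustration only: the lookup table standing in for the ideal operating line, the gain, the sign convention and the ratio limits are all invented for the example, not taken from the patent.

        import numpy as np

        # Hypothetical ideal-operating-line (IOL) map: engine speed -> throttle
        IOL_SPEED = np.array([800.0, 1500.0, 2500.0, 3500.0, 4500.0])   # rpm
        IOL_THROTTLE = np.array([5.0, 18.0, 38.0, 62.0, 90.0])          # % open

        def throttle_command(engine_rpm):
            """Fueling (throttle position) strictly as a function of
            measured engine speed, following the ideal operating line."""
            return float(np.interp(engine_rpm, IOL_SPEED, IOL_THROTTLE))

        def cvt_ratio_command(commanded_kw, measured_load_kw, ratio,
                              k=0.05, lo=0.5, hi=2.5):
            """Adjust the CVT ratio from the mismatch between commanded
            power and measured load; a simple integral-style update."""
            ratio += k * (commanded_kw - measured_load_kw)
            return min(max(ratio, lo), hi)

        ratio = 1.0
        for _ in range(5):
            ratio = cvt_ratio_command(30.0, 24.0, ratio)
        print(ratio, throttle_command(2200.0))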

  6. The complex variable boundary element method: Applications in determining approximative boundaries

    Science.gov (United States)

    Hromadka, T.V.

    1984-01-01

    The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.

  7. Fourier transform methods for calculating action variables and semiclassical eigenvalues for coupled oscillator systems

    International Nuclear Information System (INIS)

    Eaker, C.W.; Schatz, G.C.; De Leon, N.; Heller, E.J.

    1984-01-01

    Two methods for calculating the good action variables and semiclassical eigenvalues for coupled oscillator systems are presented, both of which relate the actions to the coefficients appearing in the Fourier representation of the normal coordinates and momenta. The two methods differ in that one is based on the exact expression for the actions together with the EBK semiclassical quantization condition while the other is derived from the Sorbie-Handy (SH) approximation to the actions. However, they are also very similar in that the actions in both methods are related to the same set of Fourier coefficients and both require determining the perturbed frequencies in calculating actions. These frequencies are also determined from the Fourier representations, which means that the actions in both methods are determined from information entirely contained in the Fourier expansion of the coordinates and momenta. We show how these expansions can very conveniently be obtained from fast Fourier transform (FFT) methods and that numerical filtering methods can be used to remove spurious Fourier components associated with the finite trajectory integration duration. In the case of the SH based method, we find that the use of filtering enables us to relax the usual periodicity requirement on the calculated trajectory. Application to two standard Hénon-Heiles models is considered and both are shown to give semiclassical eigenvalues in good agreement with previous calculations for nondegenerate and 1:1 resonant systems.
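    The FFT-plus-filtering step described above can be illustrated in a few lines. The sketch below applies a window to a synthetic quasi-periodic coordinate time series, discards spurious low-amplitude Fourier components, and reports the surviving frequencies; the signal, threshold and all numbers are illustrative assumptions.

        import numpy as np

        # Synthetic quasi-periodic "trajectory" q(t) with two incommensurate
        # frequencies, standing in for a normal-coordinate time series.
        dt, n = 0.01, 4096
        t = np.arange(n) * dt
        q = 1.0 * np.cos(2 * np.pi * 1.30 * t) + 0.4 * np.cos(2 * np.pi * 2.17 * t)

        # FFT with a Hann window to suppress leakage from the finite duration
        w = np.hanning(n)
        spec = np.fft.rfft(q * w)
        freq = np.fft.rfftfreq(n, dt)

        # Numerical filtering: discard spurious components below a threshold
        mag = np.abs(spec)
        mag[mag < 0.01 * mag.max()] = 0.0

        # The surviving peaks approximate the perturbed frequencies that
        # enter the action integrals via the Fourier coefficients.
        peaks = freq[np.nonzero(mag)]
        print("retained frequency bins (Hz):", np.round(peaks, 3)[:10])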

  8. A direct method for calculating instrument noise levels in side-by-side seismometer evaluations

    Science.gov (United States)

    Holcomb, L. Gary

    1989-01-01

    The subject of determining the inherent system noise levels present in modern broadband closed loop seismic sensors has been an evolving topic ever since closed loop systems became available. Closed loop systems are unique in that the system noise cannot be determined via a blocked mass test as in older conventional open loop seismic sensors. Instead, most investigators have resorted to performing measurements on two or more systems operating in close proximity to one another and to analyzing the outputs of these systems with respect to one another to ascertain their relative noise levels. The analysis of side-by-side relative performance is inherently dependent on the accuracy of the mathematical modeling of the test configuration. This report presents a direct approach to extracting the system noise levels of two linear systems with a common coherent input signal. The mathematical solution to the problem is incredibly simple; however, the practical application of the method encounters some difficulties. Examples of expected accuracies are presented as derived by simulating real-system performance using computer-generated random noise. In addition, examples of the performance of the method when applied to real experimental test data are shown.
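    In the simplest two-sensor picture, each output power spectrum is the common signal plus incoherent self-noise, so the cross-spectrum estimates the common signal and the self-noise follows by subtraction. The sketch below demonstrates this under the simplifying assumptions of unity instrument responses and white self-noise; the noise levels and all parameters are invented for the example.

        import numpy as np
        from scipy import signal

        fs, n = 100.0, 2**16
        rng = np.random.default_rng(2)
        common = rng.normal(size=n)                    # coherent input to both
        x1 = common + 0.30 * rng.normal(size=n)        # sensor 1: signal + noise
        x2 = common + 0.10 * rng.normal(size=n)        # sensor 2: signal + noise

        f, p11 = signal.welch(x1, fs=fs, nperseg=4096)
        _, p22 = signal.welch(x2, fs=fs, nperseg=4096)
        _, p12 = signal.csd(x1, x2, fs=fs, nperseg=4096)

        # With unity responses and incoherent self-noise, the cross-spectrum
        # estimates the common signal power, so each self-noise PSD follows:
        n1 = p11 - np.abs(p12)
        n2 = p22 - np.abs(p12)
        print("median noise PSD estimates:", np.median(n1), np.median(n2))
        # Expected one-sided white-noise PSD level: 2*sigma^2/fs
        print("expected:", 2 * 0.30**2 / fs, 2 * 0.10**2 / fs)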

  9. Rapid instrumental and separation methods for monitoring radionuclides in food and environmental samples. Progress report

    International Nuclear Information System (INIS)

    Bhat, I.S.; Shukla, V.K.; Singh, A.N.; Nair, C.K.G.; Hingorani, S.B.; Dey, N.N.; Jha, S.K.; Rao, D.D.

    1995-01-01

    When activity levels are low, direct gamma counting of milk and water samples takes a very long time; an initial concentration step increases the sensitivity. I-131 in aqueous samples can be concentrated by absorption on AgCl under acidic conditions. In the case of milk, initial treatment with TCA, separation of the precipitated casein, and stirring the acidified (dil. HNO3) clear solution with about 500 mg of AgCl leaves essentially all the I-131 (more than 95%) picked up by the AgCl, which can be counted in a well-crystal gamma spectrometer. In the case of water samples, acidification and direct stirring with AgCl absorbs all the I-131 onto the AgCl. About half an hour of stirring has been found sufficient to give reproducible results; the total time required is about 3 hrs. In the case of Cs-137, the aqueous solution is stirred with ammonium phosphomolybdate (AMP) after acidification with HNO3. After an hour of AMP settling time, decantation, filtration and centrifuging yield the AMP ready for counting in a gamma spectrometer with a well-type detector; the analysis can be completed within 2 hrs. AgCl concentration of I-131 and AMP concentration of Cs-137 reduce the counting time significantly. These methods have been used for the analysis of sea water and milk samples. Methods are being standardised for solvent-extraction separation of Pu, Am and Cm from preconcentrated environmental samples and direct counting of the organic extract by liquid scintillation counting. For Pu determination, solvent extraction by TTA, back-extraction and re-extraction into 5% D2EHPA, and direct liquid scintillation counting of Pu alphas is planned; this will significantly reduce the time required for Pu analysis. After bringing the sample into solution, this separation step can be carried out within 1 1/2 to 2 hrs. With Instagel scintillator cocktail in the Packard 1550 LSS, Pu-239 counting had 70% efficiency with a 5.3 cpm background. Pu-239 estimated in a few sediment samples gave results by both the LSS method and Si

  10. Method of production of a diaphragm for instruments in particle optics and diaphragm fabricated by this method

    International Nuclear Information System (INIS)

    Sandrik, J.; Krohne, P.

    1975-01-01

    The production method of, e.g., a circular diaphragm for an electron microscope is based on a copper plate as supporting material, which is coated with a light-sensitive, electrically insulating layer. After exposing and developing this layer to free selected positions, e.g. the circular interior as well as the cross-piece to the exterior of the diaphragm, a noble-metal layer, e.g. gold, is built up galvanically on these now-free positions. After freeing the remaining non-exposed material, an etching-protective lacquer is coated on the positions of the supporting material which are to be retained. The remaining parts of the supporting material are then removed by positive etching. (DG/LH) [de]

  11. Problems with radiological surveillance instrumentation

    International Nuclear Information System (INIS)

    Swinth, K.L.; Tanner, J.E.; Fleming, D.M.

    1984-09-01

    Many radiological surveillance instruments are in use at DOE facilities throughout the country. These instruments are an essential part of all health physics programs, and poor instrument performance can increase program costs or compromise program effectiveness. Generic data from simple tests on newly purchased instruments show that many instruments will not meet requirements due to manufacturing defects. In other cases, lack of consideration of instrument use has resulted in poor acceptance of instruments and poor reliability. The performance of instruments is highly variable with respect to electronic and mechanical performance, radiation response, susceptibility to interferences, and response to environmental factors. Poor instrument performance in these areas can lead to errors or poor accuracy in measurements.

  12. Problems with radiological surveillance instrumentation

    International Nuclear Information System (INIS)

    Swinth, K.L.; Tanner, J.E.; Fleming, D.M.

    1985-01-01

    Many radiological surveillance instruments are in use at DOE facilities throughout the country. These instruments are an essential part of all health physics programs, and poor instrument performance can increase program costs or compromise program effectiveness. Generic data from simple tests on newly purchased instruments show that many instruments will not meet requirements due to manufacturing defects. In other cases, lack of consideration of instrument use has resulted in poor acceptance of instruments and poor reliability. The performance of instruments is highly variable with respect to electronic and mechanical performance, radiation response, susceptibility to interferences, and response to environmental factors. Poor instrument performance in these areas can lead to errors or poor accuracy in measurements.

  13. Optimization and development of the instrumental parameters for a method of multielemental analysis through atomic emission spectroscopy, for the determination of Mg, Fe, Mn and Cr

    International Nuclear Information System (INIS)

    Lanzoni Vindas, E.

    1998-01-01

    This study optimized the instrumental parameters of a sequential multielemental analysis method, based on atomic emission spectroscopy, for the determination of Mg, Fe, Mn and Cr. It used a two-level factorial design and the Simplex optimization method, which permitted the determination of the four cations under the same instrumental conditions. The author studied an analytical system in which the relationship between instrumental response and concentration was not linear, requiring adjustment of the calibration curves under both homoscedastic and heteroscedastic conditions. (S. Grainger)

  14. Quantitative assessment of probability of failing safely for the safety instrumented system using reliability block diagram method

    International Nuclear Information System (INIS)

    Jin, Jianghong; Pang, Lei; Zhao, Shoutang; Hu, Bin

    2015-01-01

    Highlights: • Models of PFS for SIS were established using the reliability block diagram. • A more accurate calculation of PFS for SIS can be acquired by using the SL. • Degraded operation of a complex SIS does not affect the availability of the SIS. • The safe undetected failure is the largest contributor to the PFS of the SIS. - Abstract: The spurious trip of a safety instrumented system (SIS) brings great economic losses to production, so ensuring that the SIS is both reliable and available has become a pressing concern. However, the existing models for spurious trip rate (STR) or probability of failing safely (PFS) are oversimplified and inaccurate, and more in-depth studies of availability are required to obtain a more accurate PFS for the SIS. Based on an analysis of the factors that influence the PFS of the SIS, a quantitative study of the PFS is carried out using the reliability block diagram (RBD) method, and some application examples are given. The results show that common cause failure increases the PFS; degraded operation does not affect the availability of the SIS; if the equipment is tested and repaired one by one, the unavailability of the SIS can be ignored; the occurrence time of independent safe undetected failures should be the system lifecycle (SL) rather than the proof test interval; and the independent safe undetected failure is the largest contributor to the PFS of the SIS.
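    As background, the RBD calculus reduces to series and parallel combination rules for independent blocks. The sketch below shows only those generic rules applied to a toy sensor/logic/valve chain; the architecture mapping and the PFS figures are invented for illustration and are not the paper's SIS model.

        from functools import reduce

        def series(probs):
            """RBD series combination: the chain fails (safely) if any
            block fails; independence assumed."""
            return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

        def parallel(probs):
            """RBD parallel combination: all blocks must fail."""
            return reduce(lambda acc, p: acc * p, probs, 1.0)

        # Illustrative per-block probabilities of failing safely
        sensor, logic, valve = 1e-2, 5e-4, 2e-2
        # A 2oo2 sensor pair (both channels must fail safely to cause a
        # spurious trip) in series with the logic solver and final element:
        pfs = series([parallel([sensor, sensor]), logic, valve])
        print(f"system PFS ~ {pfs:.3e}")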

  15. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal-voltage simulation results at different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared in order to study their effects on the ultracapacitor's power capability.
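    The core modeling idea, a main capacitance that is a piecewise-linear function of voltage, can be sketched as below. The knot values, the integration scheme and the omission of series resistance are all assumptions for illustration, not the paper's identified parameters.

        import numpy as np

        # Hypothetical piecewise-linear capacitance C(v) (illustrative knots)
        V_KNOTS = np.array([0.0, 1.0, 2.0, 2.7])          # V
        C_KNOTS = np.array([280.0, 310.0, 355.0, 390.0])  # F

        def capacitance(v):
            """Main capacitance as a piecewise-linear function of voltage."""
            return float(np.interp(v, V_KNOTS, C_KNOTS))

        def simulate_voltage(i_amps, dt, v0=1.5):
            """Integrate dv/dt = i / C(v): charge conservation with a
            voltage-dependent capacitance (series resistance omitted)."""
            v = v0
            for i in i_amps:
                v += i * dt / capacitance(v)
            return v

        # Discharge at 10 A for 60 s in 0.1 s steps
        v_end = simulate_voltage(np.full(600, -10.0), 0.1)
        print(f"terminal voltage after discharge: {v_end:.3f} V")

    State of charge follows from the same charge balance, e.g. as accumulated charge relative to the charge stored between the minimum and maximum operating voltages.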

  16. Gas permeation measurement under defined humidity via constant volume/variable pressure method

    KAUST Repository

    Jan Roman, Pauls

    2012-02-01

    Many industrial gas separations in which membrane processes are feasible entail high water vapour contents, as in CO2 separation from flue gas in carbon capture and storage (CCS), or in biogas/natural gas processing. Studying the effect of water vapour on gas permeability through polymeric membranes is essential for materials design and optimization of these membrane applications. In particular, for amine-based CO2-selective facilitated transport membranes, water vapour is necessary for carrier-complex formation (Matsuyama et al., 1996; Deng and Hägg, 2010; Liu et al., 2008; Shishatskiy et al., 2010) [1-4]. But also conventional polymeric membrane materials can vary their permeation behaviour due to water-induced swelling (Potreck, 2009) [5]. Here we describe a simple approach to gas permeability measurement in the presence of water vapour, in the form of a modified constant volume/variable pressure method (pressure increase method). © 2011 Elsevier B.V.
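    In the constant volume/variable pressure method, the permeability follows from the steady-state pressure-rise slope in a fixed downstream volume. The sketch below evaluates that standard relation assuming ideal-gas behaviour; the cell dimensions and slope are illustrative numbers, not values from the paper.

        R = 8.314  # J mol^-1 K^-1

        def permeability_mol(dpdt, v_down, area, temp, p_feed, thickness):
            """Gas permeability (mol m m^-2 s^-1 Pa^-1) from the downstream
            pressure-rise slope dpdt (Pa/s) in a constant volume v_down (m^3),
            for feed pressure p_feed (Pa) across a membrane of given
            area (m^2) and thickness (m); ideal gas assumed."""
            molar_flux = dpdt * v_down / (R * temp * area)   # mol m^-2 s^-1
            return molar_flux * thickness / p_feed

        # Illustrative: 20 cm^3 downstream volume, 10 cm^2 membrane,
        # 100 um thickness, 2 bar feed, 35 degC, slope 0.5 Pa/s
        p = permeability_mol(0.5, 20e-6, 10e-4, 308.15, 2e5, 100e-6)
        print(f"{p:.3e} mol m / (m^2 s Pa)")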

  17. Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables

    Science.gov (United States)

    Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.

    2018-02-01

    In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solution set is sought. The dual of this problem, the problem of unconstrained maximization of a piecewise-quadratic function, is solved by Newton's method. The unconstrained optimization problem dual to the regularized problem of finding the projection onto the solution set of the system is also considered. A connection of duality theory and Newton's method with some known algorithms for projecting onto a standard simplex is shown. Using the example of the constraints of the transport linear programming problem, the possibility of increasing the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
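    A minimal sketch of this dual scheme is given below in Python (the record's own computations use MATLAB): the dual of projecting x0 onto {x >= 0, Ax = b} is an unconstrained piecewise-quadratic maximization, attacked here with a generalized-Newton step. Safeguards such as line search are omitted, and the regularization constant is an assumption.

        import numpy as np

        def project_nonneg_affine(a_mat, b, x0, tol=1e-10, max_iter=50):
            """Project x0 onto {x >= 0, A x = b} via the dual: maximize
            phi(p) = b.p - 0.5*||(x0 + A^T p)_+||^2 (+ const), using
            generalized-Newton steps on the piecewise-quadratic dual."""
            m = a_mat.shape[0]
            p = np.zeros(m)
            for _ in range(max_iter):
                x = np.maximum(x0 + a_mat.T @ p, 0.0)    # primal candidate
                grad = b - a_mat @ x                     # dual gradient
                if np.linalg.norm(grad) < tol:
                    break
                active = (x0 + a_mat.T @ p) > 0.0        # generalized Hessian support
                h = a_mat[:, active] @ a_mat[:, active].T + 1e-12 * np.eye(m)
                p += np.linalg.solve(h, grad)
            return np.maximum(x0 + a_mat.T @ p, 0.0)

        a = np.array([[1.0, 1.0, 1.0]])   # simplex-type constraint: sum x = 1
        b = np.array([1.0])
        print(project_nonneg_affine(a, b, np.array([0.9, 0.6, -0.3])))

    With this single-row constraint the routine reproduces the classical projection onto the standard simplex, which is the connection the abstract mentions.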

  18. Analytical Chemistry Laboratory (ACL) procedure compendium. Volume 3, Inorganic instrumental methods

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    The methods cover: C in solutions, F (electrode), elements by atomic emission spectrometry, inorganic anions by ion chromatography, Hg in water/solids/sludges, As, Se, Bi, Pb, data calculations for SST (single-shell tank) samples, Sb, Tl, Ag, Pu, O/M ratio, ignition weight loss, pH value, ammonia (N), Cr(VI), alkalinity, U, C sepn. from soil/sediment/sludge, Pu purif., total N, water, C and S, surface Cl/F, leachable Cl/F, outgassing of Ge detector dewars, gas mixing, gas isotopic analysis, XRF of metals/alloys/compounds, H in Zircaloy, H/O in metals, impurity extraction, reduced/total Fe in glass, free acid in U/Pu solns, density of solns, Kr/Xe isotopes in FFTF cover gas, H by combustion, MS of Li and Cs isotopes, MS of lanthanide isotopes, GC operation, total Na on filters, XRF spectroscopy QC, multichannel analyzer operation, total cyanide in water/solid/sludge, free cyanide in water/leachate, hydrazine conc., ICP-MS, Tc-99, U conc./isotopes, microprobe analysis of solids, gas analysis, total cyanide, H/N2O in air, and pH in soil.

  19. In-core Instrument Subcritical Verification (INCISV) - Core Design Verification Method - 358

    International Nuclear Information System (INIS)

    Prible, M.C.; Heibel, M.D.; Conner, S.L.; Sebastiani, P.J.; Kistler, D.P.

    2010-01-01

    According to the standard on reload startup physics testing, ANSI/ANS 19.6.1, a plant must verify that the constructed core behaves sufficiently close to the designed core to confirm that the various safety analyses bound the actual behavior of the plant. A large portion of this verification must occur before the reactor operates at power. The INCISV Core Design Verification Method uses the unique characteristics of a Westinghouse Electric Company fixed in-core self-powered detector design to perform core design verification after a core reload, before power operation. A vanadium self-powered detector that spans the length of the active fuel region is capable of confirming the required core characteristics prior to power ascension: reactivity balance, shutdown margin, temperature coefficient and power distribution. Using a detector element that spans the length of the active fuel region inside the core provides a signal of total integrated flux. Measuring the integrated flux distributions and their changes at various rodded conditions and plant temperatures, and comparing them to predicted flux levels, validates all necessary core design characteristics. INCISV eliminates the dependence on various corrections and assumptions between the ex-core detectors and the core required by traditional physics testing programs. It also eliminates the need for special rod maneuvers, which are infrequently performed by plant operators during typical core design verification testing, and allows for safer startup activities. (authors)

  20. Study of input variables in group method of data handling methodology

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Bueno, Elaine Inacio

    2013-01-01

    The Group Method of Data Handling (GMDH) is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a pre-selected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on the underlying assumption that the data can be modeled by an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A monitoring and diagnosis system was developed based on GMDH and ANN methodologies and applied to the IPEN research reactor IEA-R1. The system performs the monitoring by comparing the GMDH- and ANN-calculated values with measured ones. As GMDH is a self-organizing methodology, the choice of input variables is made automatically. On the other hand, the results of the ANN methodology depend strongly on which variables are used as neural network inputs. (author)

  1. Comparative performance of different stochastic methods to simulate drug exposure and variability in a population.

    Science.gov (United States)

    Tam, Vincent H; Kabbara, Samer

    2006-10-01

    Monte Carlo simulations (MCSs) are increasingly being used to predict the pharmacokinetic variability of antimicrobials in a population. However, various MCS approaches may differ in the accuracy of the predictions. We compared the performance of 3 different MCS approaches using a data set with known parameter values and dispersion. Ten concentration-time profiles were randomly generated and used to determine the best-fit parameter estimates. Three MCS methods were subsequently used to simulate the AUC(0-infinity) of the population, using the central tendency and dispersion of the following in the subject sample: 1) K and V; 2) clearance and V; 3) AUC(0-infinity). In each scenario, 10000 subject simulations were performed. Compared to true AUC(0-infinity) of the population, mean biases by various methods were 1) 58.4, 2) 380.7, and 3) 12.5 mg h L(-1), respectively. Our results suggest that the most realistic MCS approach appeared to be based on the variability of AUC(0-infinity) in the subject sample.
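    The three simulation approaches can be contrasted in a few lines, as sketched below: lognormal draws parameterized by (1) K and V, (2) clearance, and (3) AUC directly, each matched to the subject sample's log-scale moments. The dose, the subject sample and the lognormal assumption are all invented for the illustration; note that AUC(0-infinity) = Dose/CL, so V drops out of approach 2.

        import numpy as np

        rng = np.random.default_rng(7)
        dose = 500.0   # mg, illustrative

        # Hypothetical "subject sample": best-fit K (1/h) and V (L) of 10 subjects
        k_s = rng.lognormal(np.log(0.25), 0.3, 10)
        v_s = rng.lognormal(np.log(30.0), 0.2, 10)
        auc_s = dose / (k_s * v_s)   # observed AUC(0-inf), mg h / L
        cl_s = k_s * v_s

        def sim_lognormal(sample, n):
            """Draw n values from a lognormal matched to the sample's
            log-mean and log-SD."""
            logs = np.log(sample)
            return rng.lognormal(logs.mean(), logs.std(ddof=1), n)

        n = 10_000
        auc1 = dose / (sim_lognormal(k_s, n) * sim_lognormal(v_s, n))  # K and V
        auc2 = dose / sim_lognormal(cl_s, n)                           # clearance
        auc3 = sim_lognormal(auc_s, n)                                 # AUC directly
        for name, a in [("K,V", auc1), ("CL", auc2), ("AUC", auc3)]:
            print(name, round(a.mean(), 1), round(a.std(ddof=1), 1))

    Simulating K and V independently ignores their correlation in the sample, which is one source of the bias the study reports for that approach.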

  2. A Real-Time Analysis Method for Pulse Rate Variability Based on Improved Basic Scale Entropy

    Directory of Open Access Journals (Sweden)

    Yongxin Chou

    2017-01-01

    Base scale entropy analysis (BSEA) is a nonlinear method for analyzing heart rate variability (HRV) signals. However, the time consumption of BSEA is too long, and it was unknown whether BSEA is suitable for analyzing pulse rate variability (PRV) signals. Therefore, we propose a method named sliding window iterative base scale entropy analysis (SWIBSEA), which combines BSEA with sliding-window iterative theory. The blood pressure signals of healthy young and elderly subjects were chosen from the authoritative international database MIT/PhysioNet/Fantasia to generate PRV signals as the experimental data. Then, BSEA and SWIBSEA were used to analyze the experimental data; the results show that SWIBSEA reduces the time consumption and the buffer cache space while producing the same entropy as BSEA. Meanwhile, the changes of base scale entropy (BSE) for healthy young and elderly subjects are the same as those of the HRV signal. Therefore, SWIBSEA can be used for deriving information from long-term and short-term PRV signals in real time, which has potential for dynamic PRV signal analysis in portable and wearable medical devices.

  3. Variability of bronchial measurements obtained by sequential CT using two computer-based methods

    International Nuclear Information System (INIS)

    Brillet, Pierre-Yves; Fetita, Catalin I.; Mitrea, Mihai; Preteux, Francoise; Capderou, Andre; Dreuil, Serge; Simon, Jean-Marc; Grenier, Philippe A.

    2009-01-01

    This study aimed to evaluate the variability of lumen (LA) and wall area (WA) measurements obtained on two successive MDCT acquisitions using energy-driven contour estimation (EDCE) and full width at half maximum (FWHM) approaches. Both methods were applied to a database of segmental and subsegmental bronchi with LA > 4 mm², containing 42 bronchial segments of 10 successive slices that best matched on each acquisition. For both methods, the 95% confidence interval between repeated MDCT was between -1.59 and 1.5 mm² for LA, and -3.31 and 2.96 mm² for WA. The values of the coefficient of measurement variation (CV10, i.e., the percentage ratio of the standard deviation obtained from the 10 successive slices to their mean value) were strongly correlated between repeated MDCT data acquisitions (r > 0.72; p 2, whereas WA values were lower for bronchi with WA 2; no systematic EDCE underestimation or overestimation was observed for thicker-walled bronchi. In conclusion, variability between CT examinations and assessment techniques may impair measurements; therefore, new parameters such as CV10 need to be investigated to study bronchial remodeling. Finally, EDCE and FWHM are not interchangeable in longitudinal studies. (orig.)

  4. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Korea, Republic of)

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step in the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual-information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.

  5. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step in the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual-information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
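    In the spirit of the MI-with-collinearity-replacement idea (though not the authors' exact MIMRCV algorithm), a greedy variant can be sketched as follows: rank descriptors by mutual information with the response and replace any candidate that is collinear with an already chosen one by the next-best descriptor. The data, threshold and helper name are assumptions for illustration.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        def select_by_mi(x, y, n_keep, corr_max=0.9):
            """Rank descriptors by MI with the response; greedily skip any
            candidate strongly collinear (|r| >= corr_max) with an already
            selected descriptor, replacing it with the next-best one."""
            mi = mutual_info_regression(x, y, random_state=0)
            order = np.argsort(mi)[::-1]
            chosen = []
            for j in order:
                if all(abs(np.corrcoef(x[:, j], x[:, k])[0, 1]) < corr_max
                       for k in chosen):
                    chosen.append(j)
                if len(chosen) == n_keep:
                    break
            return chosen

        rng = np.random.default_rng(3)
        x = rng.normal(size=(120, 20))
        x[:, 5] = x[:, 0] + 0.01 * rng.normal(size=120)   # collinear pair
        y = 2 * x[:, 0] - x[:, 3] + 0.1 * rng.normal(size=120)
        print(select_by_mi(x, y, 4))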

  6. Developing a multipoint titration method with a variable dose implementation for anaerobic digestion monitoring.

    Science.gov (United States)

    Salonen, K; Leisola, M; Eerikäinen, T

    2009-01-01

    Determination of metabolites from an anaerobic digester by acid-base titration is considered a superior method for many reasons. This paper describes a practical, at-line compatible multipoint titration method. The titration procedure was improved in terms of speed and data quality. A simple and novel control algorithm for estimating a variable titrant dose was derived for this purpose. This nonlinear, PI-controller-like algorithm does not require any preliminary information about the sample, and its performance is superior to that of traditional linear PI-controllers. In addition, a simplification representing polyprotic acids as a sum of multiple monoprotic acids is introduced, along with a mathematical error examination. A method for including the ionic-strength effect via stepwise iteration is shown. The titration model is presented in matrix notation, enabling simple computation of all concentration estimates. All methods and algorithms are illustrated in the experimental part. A linear correlation better than 0.999 was obtained for both acetate and phosphate, used as model compounds, with slopes of 0.98 and 1.00 and average standard deviations of 0.6% and 0.8%, respectively. Furthermore, the insensitivity of the presented method to overlapping buffer capacity curves was shown.
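    To make the variable-dose idea concrete, the sketch below shows a generic dose rule in the same spirit (not the authors' algorithm): the next titrant dose is scaled so that the expected pH step approaches a target step, with a damping gain and hard dose limits; all names and numbers are assumptions.

        def next_dose(prev_dose, dph, target_dph=0.05, gain=0.7,
                      d_min=1e-3, d_max=0.5):
            """Choose the next titrant dose (mL) so the expected pH step
            approaches target_dph: scale the previous dose by the ratio of
            the target step to the step actually observed (dph), damped by
            a PI-like gain; no prior sample information is required."""
            ratio = target_dph / max(abs(dph), 1e-6)
            dose = prev_dose * (1.0 + gain * (ratio - 1.0))
            return min(max(dose, d_min), d_max)

        # Example: the last 0.05 mL dose moved the pH by 0.20 units (steep
        # region of the curve), so the controller cuts the dose sharply.
        print(round(next_dose(0.05, 0.20), 4))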

  7. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    Science.gov (United States)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.

  8. Instrumentation and calibration methods for the multichannel measurement of phase and amplitude in optical tomography

    International Nuclear Information System (INIS)

    Nissilae, Ilkka; Noponen, Tommi; Kotilahti, Kalle; Katila, Toivo; Lipiaeinen, Lauri; Tarvainen, Tanja; Schweiger, Martin; Arridge, Simon

    2005-01-01

    In this article, we describe the multichannel implementation of an intensity-modulated optical tomography system developed at Helsinki University of Technology. The system has two time-multiplexed wavelengths, 16 time-multiplexed source fibers and 16 parallel detection channels. The gain of the photomultiplier tubes (PMTs) is individually adjusted during the measurement sequence to increase the dynamic range of the system by a factor of 10⁴. The PMT used has a high quantum efficiency in the near infrared (8% at 800 nm), a fast settling time, and low hysteresis. The gain of the PMT is set so that the dc anode current stays below 80 nA, which allows the phase to be measured independently of the intensity. The system allows measurements of amplitude at detected intensities down to 1 fW, which is sufficient for transmittance measurements of the female breast, the forearm, and the brain of early pre-term infants. The mean repeatability of phase and of the logarithm of amplitude (ln A) at 100 MHz was found to be 0.08 deg. and 0.004, respectively, in a measurement of a 7 cm phantom with an imaging time of 5 s per source and a source optical power of 8 mW. We describe a three-step method for calibrating the phase and amplitude measurements so that the absolute absorption and scatter in tissue may be measured. A phantom with two small cylindrical targets and a second phantom with three rods were measured, and reconstructions made from the calibrated data are shown and compared with reconstructions from simulated data.

  9. D.P.M. METHOD - A PERFORMANCE ANALYSIS INSTRUMENT OF A STRATEGIC BUSINESS UNIT

    Directory of Open Access Journals (Sweden)

    Ionescu Florin Tudor

    2012-12-01

    Considering the uncertain economic conditions, the market dynamics, the fundamental changes in the attitudes and aspirations of consumers, and the strong growth of the political role and of interventions in the economy, currently characterizing both Romania and other countries of the world, it can be said that the need for strategic planning has never been as acute as it is now. The strategic planning process is an ongoing organizational activity by which managers can make decisions about their present and future position. A number of analytical portfolio tools exist to aid managers in the formulation of strategy. The use of these tools within the broader context of the overall strategic planning process allows managers to determine the obstacles and opportunities existing in the company's environment and to define and pursue appropriate strategies for growth and profitability. The present paper aims to highlight, from a theoretical standpoint, the D.P.M. method, its strategic consequences, and its advantages and disadvantages. After conducting this analysis, I have found that restricting business portfolio analysis to the D.P.M. matrix is not a very wise decision. The D.P.M. matrix, along with other marketing tools for business portfolio analysis, has advantages and disadvantages and tries to provide, at a given moment, a specific diagnosis of a company's business portfolio. Therefore, the recommendation for Romanian managers consists in a combined use of a wide range of tools and techniques for business portfolio analysis. This leads to a better understanding of the whole mix of product markets included in the portfolio analysis, the strategic position held by each business within a market, the performance potential of the business portfolio, and the financial aspects related to the resource allocation process for the businesses within the portfolio. It should also be noted that the tools and techniques specific to business portfolio

  10. Multielement analysis of human hair and kidney stones by instrumental neutron activation analysis with the k0-standardization method

    International Nuclear Information System (INIS)

    Abugassa, I.; Sarmani, S.B.; Samat, S.B.

    1999-01-01

    This paper focuses on the evaluation of the k0 method of instrumental neutron activation analysis for biological materials. The method has been applied in multielement analysis of human hair standard reference materials from the IAEA, No. 085 and No. 086, and from NIES (National Institute for Environmental Sciences), No. 5. Hair samples from people resident in different parts of Malaysia, in addition to a sample from Japan, were analyzed. In addition, human kidney stones from members of the Malaysian population were analyzed for minor and trace elements. More than 25 elements have been determined. The samples were irradiated in the rotary rack (Lazy Susan) of the TRIGA Mark II reactor of the Malaysian Institute for Nuclear Technology and Research (MINT). The accuracy of the method was ascertained by analysis of other reference materials, including SRM 1573 tomato leaves and SRM 1572 citrus leaves. In this method, the deviation (α) of the 1/E^(1+α) epithermal neutron flux distribution from the 1/E law, the peak-to-total (P/T) ratio for true coincidence effects of γ-ray cascades, and the HPGe detector efficiency were determined and corrected for.
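    For orientation, the working equation of k0 standardization relative to a gold comparator is commonly written as below, up to unit conventions. This is a sketch from the general k0 literature, not reproduced from the paper; the notation (specific count rates A_sp, thermal-to-epithermal flux ratio f, resonance-integral ratio Q0(α), full-energy-peak efficiencies ε_p) varies between authors.

        \rho_a \;=\; \frac{A_{\mathrm{sp},a}}{A_{\mathrm{sp,Au}}}
                     \cdot \frac{1}{k_{0,\mathrm{Au}}(a)}
                     \cdot \frac{f + Q_{0,\mathrm{Au}}(\alpha)}{f + Q_{0,a}(\alpha)}
                     \cdot \frac{\varepsilon_{p,\mathrm{Au}}}{\varepsilon_{p,a}}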

  11. Reviews in Modern Astronomy 12, Astronomical Instruments and Methods at the turn of the 21st Century

    Science.gov (United States)

    Schielicke, Reinhard E.

    The yearbook series Reviews in Modern Astronomy of the Astronomische Gesellschaft (AG) was established in 1988 in order to bring the scientific events of the meetings of the society to the attention of the worldwide astronomical community. Reviews in Modern Astronomy is devoted exclusively to the invited Reviews, the Karl Schwarzschild Lectures, the Ludwig Biermann Award Lectures, and the highlight contributions from leading scientists reporting on recent progress and scientific achievements at their respective research institutes. Volume 12 continues the yearbook series with 16 contributions which were presented during the International Scientific Conference of the AG on ``Astronomical Instruments and Methods at the Turn of the 21st Century'' at Heidelberg from September 14 to 19, 1998

  12. Element distribution study of drinking water and well sediments using the method of instrumental neutron activation analysis

    International Nuclear Information System (INIS)

    Vircavs, M.; Taure, I.; Eglite, G.; Brike, Z.

    1996-01-01

    The method of instrumental neutron activation analysis was used to estimate the distribution of major, minor and trace elements in well sediments, Riga tap water, and well water used for drinking and for the preparation of food. The chemical composition of drinking water (tap and well water) varies considerably in different districts of Riga and in different wells. The greatest concentration differences for Zn, Fe and Al are observed in tap water. Median concentrations of the determined elements are smaller than the maximum permissible concentrations (MPC); however, in some cases the concentrations of Al and Fe are higher than their MPC for tap water. The highest concentration ratios were observed for Ti, Cr and Zn in well sediments. (author). 19 refs, 2 tabs

  13. Study of Material Moisture Measurement Method and Instrument by the Combination of Fast Neutron Absorption and γ Absorption

    International Nuclear Information System (INIS)

    Hou Chaoqin; Gong Yalin; Zhang Wei; Shang Qingmin; Li Yanfeng; Gou Qiangyuan; Yin Deyou

    2010-01-01

    To solve the problem of on-line sinter moisture measurement in an iron-making plant, we developed a material moisture measurement method and instrument based on the combination of fast neutron absorption and γ-absorption. It overcomes the existing problems of other moisture meters for sinter. Compared with a microwave moisture meter, the measurement is not affected by the conductance and magnetism of the material; compared with an infrared moisture meter, the result is not influenced by the colour and light-reflecting properties of the material surface, nor by changes of material kind; compared with a slow-neutron-scattering moisture meter, the measurement is not affected by the density of the material or the thickness of the hopper wall; and compared with a moisture meter combining slow neutron transmission and γ-absorption, there is a definite mathematical model with a good linear relation between the measured values, and the measurement is not affected by material thickness or by changes of material form and composition. (authors)
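    The dual-beam idea reduces to solving two attenuation equations, one per beam, for the dry-matter and water area densities. The sketch below illustrates that inversion; the attenuation coefficients and readings are invented placeholders, since real values must be calibrated for the sinter and the chosen sources.

        import numpy as np

        # Hypothetical mass attenuation/removal coefficients (cm^2/g)
        MU_G_DRY, MU_G_WAT = 0.077, 0.086   # gamma beam
        MU_N_DRY, MU_N_WAT = 0.015, 0.110   # fast-neutron beam (water dominates)

        def moisture_fraction(att_gamma, att_neutron):
            """Solve the two-beam attenuation equations
                 att_gamma   = MU_G_DRY*md + MU_G_WAT*mw
                 att_neutron = MU_N_DRY*md + MU_N_WAT*mw
            for the dry-matter and water area densities md, mw (g/cm^2),
            where att = ln(I0/I), then return the water mass fraction."""
            a = np.array([[MU_G_DRY, MU_G_WAT], [MU_N_DRY, MU_N_WAT]])
            md, mw = np.linalg.solve(a, [att_gamma, att_neutron])
            return mw / (md + mw)

        print(f"moisture ~ {100 * moisture_fraction(1.20, 0.65):.1f} %")

    Because the water term dominates the neutron beam while the gamma beam tracks total mass, the pair of readings separates moisture from material thickness, which is why the method is insensitive to thickness changes.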

  14. Method for evaluating the system instrumentation for loose part detection in the primary cooling circuit of French PWRs

    International Nuclear Information System (INIS)

    Gerardin, J.P.; Donnette, J.E.

    1995-05-01

    The purpose of the loose part detection system is to trigger an alarm whenever it is warranted, to localize the event, and to provide information on the type of loose part involved and the damage it may provoke. It is therefore indispensable to have efficient instrumentation, beginning with the sensors, which must respond to all mechanical impacts in the natural trapping areas (reactor vessel and steam generator water box). A series of mass- and energy-calibrated impacts was generated at 45 points in the primary cooling system of a nuclear plant unit in the startup phase. This test provided insights into the relationship between sensor signals and various impact parameters such as impact velocity and loose part mass. Once these parameters were known, it was possible to define a method for evaluating the detection threshold of the sensors depending on the way they are mounted. (author)

  15. The extended wedge method: atomic force microscope friction calibration for improved tolerance to instrument misalignments, tip offset, and blunt probes.

    Science.gov (United States)

    Khare, H S; Burris, D L

    2013-05-01

    One of the major challenges in understanding and controlling friction is the difficulty in bridging the length and time scales of macroscale contacts and those of the single-asperity interactions they comprise. While the atomic force microscope (AFM) offers a unique ability to probe tribological surfaces in a wear-free single-asperity contact, instrument calibration challenges have limited the usefulness of this technique for quantitative nanotribological studies. A number of lateral force calibration techniques have been proposed and used, but none has gained universal acceptance due to practical considerations, configuration limitations, or sensitivities to unknowable error sources. This paper describes a simple extension of the classic wedge method of AFM lateral force calibration which: (1) allows simultaneous calibration and measurement on any substrate, thus eliminating prior tip damage and the confounding effects of instrument setup adjustments; (2) is insensitive to adhesion, PSD cross-talk, transducer/piezo-tube axis misalignment, and shear-center offset; (3) is applicable to integrated tips and colloidal probes; and (4) is generally applicable to any reciprocating friction coefficient measurement. The method was applied to AFM measurements of polished carbon (99.999% graphite) and single-crystal MoS2 to demonstrate the technique. Carbon and single-crystal MoS2 had friction coefficients of μ = 0.20 ± 0.04 and μ = 0.006 ± 0.001, respectively, against an integrated Si probe. Against a glass colloidal sphere, MoS2 had a friction coefficient of μ = 0.005 ± 0.001. Generally, the measurement uncertainties ranged from 10% to 20% and were driven by the effect of actual frictional variation on the calibration rather than by calibration error itself (i.e., due to misalignment, tip offset, or probe radius).

  16. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    Science.gov (United States)

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
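    A simulation of the kind described is easy to reproduce in outline. The sketch below estimates the Type-I error rate for the exposure effect when the confounder is adjusted for only through a quantile-binned version; the data-generating parameters are assumptions for illustration, not the study's 9600-run design.

        import numpy as np
        from scipy import stats

        def type1_rate(n_sim=2000, n=500, r=0.6, beta=0.8, n_cat=2, seed=4):
            """Monte Carlo estimate of the Type-I error rate for the
            exposure effect when a continuous confounder enters the model
            only via its categorized version; the true effect is zero."""
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(n_sim):
                c = rng.normal(size=n)                              # confounder
                e = r * c + np.sqrt(1 - r**2) * rng.normal(size=n)  # exposure
                y = beta * c + rng.normal(size=n)                   # no E effect
                # categorize the confounder into n_cat quantile groups
                q = np.quantile(c, np.linspace(0, 1, n_cat + 1)[1:-1])
                g = np.digitize(c, q)
                # linear regression of y on e + category dummies
                x = np.column_stack([np.ones(n), e] +
                                    [(g == k).astype(float)
                                     for k in range(1, n_cat)])
                coef, *_ = np.linalg.lstsq(x, y, rcond=None)
                resid = y - x @ coef
                sigma2 = resid @ resid / (n - x.shape[1])
                se = np.sqrt(sigma2 * np.linalg.inv(x.T @ x)[1, 1])
                p = 2 * stats.t.sf(abs(coef[1] / se), n - x.shape[1])
                hits += (p < 0.05)
            return hits / n_sim

        print("Type-I error with a dichotomized confounder:", type1_rate())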

  17. Instrumentation development

    International Nuclear Information System (INIS)

    Anon.

    1976-01-01

    Areas being investigated for instrumentation improvement during low-level pollution monitoring include laser opto-acoustic spectroscopy, x-ray fluorescence spectroscopy, optical fluorescence spectroscopy, liquid crystal gas detectors, advanced forms of atomic absorption spectroscopy, electro-analytical chemistry, and mass spectroscopy. Emphasis is also directed toward development of physical methods, as opposed to conventional chemical analysis techniques for monitoring these trace amounts of pollution related to energy development and utilization

  18. Toward Capturing Momentary Changes of Heart Rate Variability by a Dynamic Analysis Method.

    Directory of Open Access Journals (Sweden)

    Haoshi Zhang

    The analysis of heart rate variability (HRV) has been performed on long-term electrocardiography (ECG) recordings (12~24 hours) and short-term recordings (2~5 minutes), which may not capture momentary changes of HRV. In this study, we present a new method to analyze the momentary HRV (mHRV). The ECG recordings were segmented into a series of overlapping HRV analysis windows with a window length of 5 minutes and different time increments. The performance of the proposed method in delineating the dynamics of momentary HRV measurement was evaluated with four commonly used time courses of HRV measures on both synthetic time series and real ECG recordings from human subjects and dogs. Our results showed that a smaller time increment could capture more dynamical information on transient changes. Since a too-short increment such as 10 s would cause indented time courses of the four measures, a 1-min time increment (4-min overlap) was suggested for the analysis of mHRV in this study. ECG recordings from human subjects and dogs were used to further assess the effectiveness of the proposed method. The pilot study demonstrated that the proposed analysis of mHRV could provide a more accurate assessment of the dynamical changes in cardiac activity than the conventional measures of HRV (without time overlapping). The proposed method may provide an efficient means of delineating the dynamics of momentary HRV, and it would be worth performing more investigations.
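    The windowing scheme (5-min window, 1-min increment, 4-min overlap) can be sketched directly. Below, two common time-domain HRV measures are computed over the sliding window; the choice of SDNN and RMSSD and the synthetic beat series are assumptions for illustration, not the paper's four measures.

        import numpy as np

        def momentary_hrv(rr_ms, t_s, win_s=300.0, step_s=60.0):
            """Compute SDNN and RMSSD over a 5-min window slid in 1-min
            steps (overlapping windows), giving a momentary HRV time
            course. rr_ms: RR (or pulse-to-pulse) intervals in ms;
            t_s: beat times in s."""
            out = []
            start, t_end = t_s[0], t_s[-1]
            while start + win_s <= t_end:
                m = (t_s >= start) & (t_s < start + win_s)
                rr = rr_ms[m]
                if rr.size > 2:
                    sdnn = rr.std(ddof=1)
                    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
                    out.append((start + win_s / 2, sdnn, rmssd))
                start += step_s
            return np.array(out)

        # Synthetic beat series: ~70 bpm with random beat-to-beat variability
        rng = np.random.default_rng(5)
        rr = 857 + 25 * rng.normal(size=2000)   # ms
        t = np.cumsum(rr) / 1000.0              # beat times in s
        print(momentary_hrv(rr, t)[:3])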

  19. Disability as deprivation of capabilities: Estimation using a large-scale survey in Morocco and Tunisia and an instrumental variable approach.

    Science.gov (United States)

    Trani, Jean-Francois; Bakhshi, Parul; Brown, Derek; Lopez, Dominique; Gall, Fiona

    2018-05-25

    The capability approach pioneered by Amartya Sen and Martha Nussbaum offers a new paradigm to examine disability, poverty and their complex associations. Disability is hence defined as a situation in which a person with an impairment faces various forms of restrictions in functionings and capabilities. Additionally, poverty is not the mere absence of income but a lack of ability to achieve essential functionings; disability is consequently the poverty of capabilities of persons with impairment. It is the lack of opportunities in a given context, and of agency, that leads to persons with disabilities being poorer than other social groups. Consequently, the poverty of people with disabilities comprises complex processes of social exclusion and disempowerment. Despite growing evidence that persons with disabilities face higher levels of poverty, the literature from low- and middle-income countries that analyzes the causal link between disability and poverty remains limited. Drawing on data from a large case-control field survey carried out between December 24th, 2013 and February 16th, 2014 in Tunisia and between November 4th, 2013 and June 12th, 2014 in Morocco, we examined the effect of impairment on various basic capabilities, health-related quality of life and multidimensional poverty (indicators of poor wellbeing) in Morocco and Tunisia. To demonstrate a causal link between impairment and deprivation of capabilities, we used instrumental variable regression analyses. In both countries, we found lower access to jobs for persons with impairment. Health-related quality of life was also lower for this group, which also faced a higher risk of multidimensional poverty. There was no significant direct effect of impairment on access to school and acquiring literacy in both countries, or on access to health care and expenses in Tunisia, while having an impairment reduced access to healthcare facilities and out-of-pocket expenditures in Morocco. These results suggest that
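    The instrumental variable logic used here can be illustrated with a manual two-stage least squares (2SLS) estimate on simulated data, where an unobserved factor confounds impairment and wellbeing but the instrument affects the outcome only through impairment. Everything below (the data-generating process, effect sizes, names) is a hypothetical illustration, not the survey's specification.

        import numpy as np

        def two_stage_least_squares(y, x, z):
            """Manual 2SLS: instrument z for the endogenous regressor x.
            Stage 1 regresses x on z; stage 2 regresses y on fitted x."""
            n = len(y)
            z1 = np.column_stack([np.ones(n), z])
            x_hat = z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]   # stage 1
            x1 = np.column_stack([np.ones(n), x_hat])
            return np.linalg.lstsq(x1, y, rcond=None)[0][1]      # stage 2

        rng = np.random.default_rng(8)
        n = 5000
        u = rng.normal(size=n)                       # unobserved confounder
        z = rng.normal(size=n)                       # instrument
        x = (0.8 * z + u + rng.normal(size=n) > 0).astype(float)  # impairment
        y = -1.0 * x + u + rng.normal(size=n)        # true causal effect: -1
        ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y,
                              rcond=None)[0][1]
        print(f"naive OLS: {ols:.2f}, 2SLS: {two_stage_least_squares(y, x, z):.2f}")

    The naive regression is biased by the confounder, while the 2SLS estimate recovers the true effect, which is the rationale for the instrumental variable approach in the study.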

  20. VARIABILITY OF MANUAL AND COMPUTERIZED METHODS FOR MEASURING CORONAL VERTEBRAL INCLINATION IN COMPUTED TOMOGRAPHY IMAGES

    Directory of Open Access Journals (Sweden)

    Tomaž Vrtovec

    2015-06-01

    Full Text Available Objective measurement of coronal vertebral inclination (CVI) is of significant importance for evaluating spinal deformities in the coronal plane. The purpose of this study is to systematically analyze and compare manual and computerized measurements of CVI in cross-sectional and volumetric computed tomography (CT) images. Three observers independently measured CVI in 14 CT images of normal and 14 CT images of scoliotic vertebrae by using six manual and two computerized measurements. Manual measurements were obtained in coronal cross-sections by manually identifying the vertebral body corners, which served to measure CVI according to the superior and inferior tangents, the left and right tangents, and the mid-endplate and mid-wall lines. Computerized measurements were obtained in two dimensions (2D) and in three dimensions (3D) by manually initializing an automated method in vertebral centroids and then searching for the planes of maximal symmetry of vertebral anatomical structures. The mid-endplate lines were the most reproducible and reliable manual measurements (intra- and inter-observer variability of 0.7° and 1.2° standard deviation (SD), respectively). The computerized measurements in 3D were more reproducible and reliable (intra- and inter-observer variability of 0.5° and 0.7° SD, respectively), and were most consistent with the mid-wall lines (2.0° SD and 1.4° mean absolute difference). The manual CVI measurements based on mid-endplate lines and the computerized CVI measurements in 3D resulted in the lowest intra-observer and inter-observer variability; however, computerized CVI measurements additionally reduce observer interaction.
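
    A tangent-based manual measurement reduces to the angle of a line through two manually identified landmarks; a minimal sketch follows (the landmark coordinates are invented for illustration).

      import numpy as np

      def inclination_deg(p1, p2):
          """Angle (degrees) of the line through two landmarks vs. horizontal."""
          return np.degrees(np.arctan2(p2[1] - p1[1], p2[0] - p1[0]))

      # mid-endplate measurement: mean of superior and inferior endplate angles
      superior = inclination_deg((12.0, 40.2), (52.4, 43.1))
      inferior = inclination_deg((11.5, 58.8), (51.9, 61.0))
      cvi = 0.5 * (superior + inferior)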

  1. Methods for assessment of climate variability and climate changes in different time-space scales

    International Nuclear Information System (INIS)

    Lobanov, V.; Lobanova, H.

    2004-01-01

    The main problem of hydrology and of design support for water projects is connected with modern climate change and its impact on hydrological characteristics, both observed and designed. There are three main stages of this problem: how to extract climate variability and climate change from complex hydrological records; how to assess the contribution of climate change and its significance for a point and an area; and how to use the detected climate change in the computation of design hydrological characteristics. The design hydrological characteristic is the main generalized information used for water management and design support. The first step of the research is the choice of a hydrological characteristic, which can be a traditional one (annual runoff for the assessment of water resources, maxima, minima runoff, etc.) as well as a new one characterizing an intra-annual function or intra-annual runoff distribution. For this aim a linear model has been developed which has two coefficients connected with the amplitude and level (initial conditions) of the seasonal function, and one parameter which characterizes the intensity of synoptic and macro-synoptic fluctuations inside a year. Effective statistical methods have been developed for separating climate variability from climate change and for extracting homogeneous components of three time scales from observed long-term time series: intra-annual, decadal and centennial. The first two are connected with climate variability and the last (centennial) with climate change. The efficiency of the new methods of decomposition and smoothing has been estimated by stochastic modeling as well as on synthetic examples. For the assessment of the contribution and statistical significance of modern climate change components, statistical criteria and methods have been used. The next step is connected with a generalization of the results of detected climate changes over the area and spatial modeling. For determination of homogeneous region with the same
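
    The three-scale separation described above can be pictured with successive smoothing; the sketch below is only a schematic stand-in (window lengths and the synthetic record are our assumptions, not the authors' calibrated decomposition).

      import numpy as np

      def moving_average(x, w):
          return np.convolve(x, np.ones(w) / w, mode="same")

      years = np.arange(1900, 2000)
      runoff = (100 + 0.05 * (years - 1900)              # slow trend
                + 5 * np.sin(2 * np.pi * years / 11.0)   # decadal cycle
                + np.random.default_rng(1).normal(0, 3, years.size))

      centennial = moving_average(runoff, 30)            # climate-change signal
      decadal = moving_average(runoff - centennial, 10)  # climate variability
      intra = runoff - centennial - decadal              # short-term residual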

  2. Method for the Analysis of Temporal Change of Physical Structure in the Instrumentation and Control Life-Cycle

    Energy Technology Data Exchange (ETDEWEB)

    Goering, Markus [Vattenfall Europe Nuclear Energy GmbH, Hamburg, (Germany); Fay, Alexander [Helmut Schmidt Univ., Hamburg (Germany)

    2013-10-15

    The design of computer-based instrumentation and control (I and C) systems is determined by the allocation of I and C functions to I and C systems and components. Due to the characteristics of computer-based technology, component failures can negatively affect several I and C functions, so that the reliability proof of the I and C systems requires the accomplishment of I and C system design analyses throughout the I and C life-cycle. On one hand, this paper proposes the restructuring of the sequential IEC 61513 I and C life-cycle according to the V-model, so as to adequately integrate the concept of verification and validation. On the other hand, based on a meta model for the modeling of I and C systems, this paper introduces a method for the modeling and analysis of the effects with respect to the superposition of failure combinations and event sequences on the I and C system design, i.e. the temporal change of physical structure is analyzed. In the first step, the method is concerned with the modeling of the I and C systems. In the second step, the method considers the analysis of temporal change of physical structure, which integrates the concepts of the diversity and defense-in-depth analysis, fault tree analysis, event tree analysis, and failure mode and effects analysis.
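
    As a generic illustration of one ingredient named above (fault tree analysis, not the authors' meta-model), the top-event probability of a fault tree with independent basic events can be evaluated through AND/OR gates:

      from functools import reduce

      def gate_and(probs):
          # all inputs must fail
          return reduce(lambda a, b: a * b, probs, 1.0)

      def gate_or(probs):
          # at least one input fails
          return 1.0 - reduce(lambda a, b: a * (1.0 - b), probs, 1.0)

      # e.g., an I and C function fails if (sensor A AND sensor B) fail
      # OR the processing unit fails (probabilities are invented)
      p_top = gate_or([gate_and([1e-3, 1e-3]), 1e-5])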

  3. Application of instrumental neutron activation analysis of uranium in burn-up measurements using gamma-ray spectrometric method

    Energy Technology Data Exchange (ETDEWEB)

    Chao, H E; Lu, W D

    1975-12-01

    In uranium burnup measurements, the amount of uranium in the irradiated sample needs to be determined, and the application of instrumental neutron activation analysis for this purpose is investigated. The method uses the gamma-ray activities of Np-239 and of some short-lived fission products with half-lives no longer than a few days to determine the quantities of U-238 and U-235, respectively. The advantages of the method include: (1) the amounts of both U-235 and U-238 in the sample can be determined simultaneously with good accuracy; (2) the same sample may be used to determine both the number of fissions and the amount of uranium remaining, simultaneously or one after another, so the exact amount of the sample need not be known; (3) since the amount of sample needed for the determination is small, about 10 μg, it is easily handled even for high-level burnup samples. The error of the method is about 3 percent for a single measurement. The burnup values measured for an irradiated natural uranium sample from three aliquots using several fission products are in good agreement. The effective cross section for U-235 deduced from the burnup and the integrated flux from a cobalt monitor is 589 ± 19 barn, in agreement with the literature value of 577 ± 1 barn.
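
    The closing deduction is consistent with the standard burnup relation; as a hedged reconstruction in our notation (the record does not spell the formula out), with F the number of fissions measured from the fission products, N_235 the number of U-235 atoms, and Phi the cobalt-monitor integrated flux:

      \sigma_{\mathrm{eff}} = \frac{F}{N_{235}\,\Phi}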

  4. Method for the Analysis of Temporal Change of Physical Structure in the Instrumentation and Control Life-Cycle

    International Nuclear Information System (INIS)

    Goering, Markus; Fay, Alexander

    2013-01-01

    The design of computer-based instrumentation and control (I and C) systems is determined by the allocation of I and C functions to I and C systems and components. Due to the characteristics of computer-based technology, component failures can negatively affect several I and C functions, so that the reliability proof of the I and C systems requires the accomplishment of I and C system design analyses throughout the I and C life-cycle. On one hand, this paper proposes the restructuring of the sequential IEC 61513 I and C life-cycle according to the V-model, so as to adequately integrate the concept of verification and validation. On the other hand, based on a meta model for the modeling of I and C systems, this paper introduces a method for the modeling and analysis of the effects with respect to the superposition of failure combinations and event sequences on the I and C system design, i.e. the temporal change of physical structure is analyzed. In the first step, the method is concerned with the modeling of the I and C systems. In the second step, the method considers the analysis of temporal change of physical structure, which integrates the concepts of the diversity and defense-in-depth analysis, fault tree analysis, event tree analysis, and failure mode and effects analysis.

  5. Validation of an analytical method for determining halothane in urine as an instrument for evaluating occupational exposure

    International Nuclear Information System (INIS)

    Gonzalez Chamorro, Rita Maria; Jaime Novas, Arelis; Diaz Padron, Heliodora

    2010-01-01

    Occupational exposure to harmful substances may produce significant changes in the normal physiology of the organism when adequate safety measures are not taken in time at a workplace where the risk is present. Among the chemical risks that may affect workers' health are the inhalable anesthetic agents. With the objective of taking the first steps towards introducing an epidemiological surveillance system for this personnel, an analytical method for determining this anesthetic in urine was validated under the instrumental conditions available in our laboratory. To carry out this validation the following parameters were taken into account: specificity, linearity, precision, accuracy, detection limit and quantification limit; the uncertainty of the method was also calculated. In the validation procedure it was found that the technique is specific and precise; the detection limit was 0.118 μg/L and the quantification limit 0.354 μg/L. The global uncertainty was 0.243, and the expanded uncertainty 0.486. The validated method, together with the subsequent introduction of biological exposure limits, will serve as an auxiliary diagnostic tool allowing periodic monitoring of personnel exposure.
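
    Two of the reported figures can be sketched assuming the common calibration-curve approach (LOD = 3.3 s/slope, LOQ = 10 s/slope per the usual ICH convention); the laboratory's exact protocol is not given in the record, and the data points below are invented.

      import numpy as np

      conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])         # spiked halothane, ug/L
      signal = np.array([0.02, 0.55, 1.04, 2.11, 3.98])  # instrument response

      slope, intercept = np.polyfit(conc, signal, 1)
      resid = signal - (slope * conc + intercept)
      s_y = np.sqrt(np.sum(resid ** 2) / (len(conc) - 2))  # residual std. dev.

      lod = 3.3 * s_y / slope    # detection limit
      loq = 10.0 * s_y / slope   # quantification limit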

  6. Variable separation solutions for the Nizhnik-Novikov-Veselov equation via the extended tanh-function method

    International Nuclear Information System (INIS)

    Zhang Jiefang; Dai Chaoqing; Zong Fengde

    2007-01-01

    In this paper, with the variable separation approach and based on the general reduction theory, we successfully generalize the extended tanh-function method to obtain new types of variable separation solutions for the Nizhnik-Novikov-Veselov (NNV) equation. Among the solutions, two are new types of variable separation solutions, while the last is similar to the solution given by the Darboux transformation in Hu et al 2003 Chin. Phys. Lett. 20 1413.
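
    In general form, the extended tanh-function method seeks traveling-wave solutions as a finite series in tanh and its inverse powers; the ansatz below is the method's generic template in our notation, not the paper's exact separated-variable reduction:

      u(\xi) = a_0 + \sum_{i=1}^{m} \left( a_i \tanh^{i}\xi + b_i \tanh^{-i}\xi \right),
      \qquad \xi = kx + ly - ct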

  7. SIVA/DIVA- INITIAL VALUE ORDINARY DIFFERENTIAL EQUATION SOLUTION VIA A VARIABLE ORDER ADAMS METHOD

    Science.gov (United States)

    Krogh, F. T.

    1994-01-01

    The SIVA/DIVA package is a collection of subroutines for the solution of ordinary differential equations. There are versions for single precision and double precision arithmetic. These solutions are applicable to stiff or nonstiff differential equations of first or second order. SIVA/DIVA requires fewer evaluations of derivatives than other variable order Adams predictor-corrector methods. There is an option for the direct integration of second order equations which can make integration of trajectory problems significantly more efficient. Other capabilities of SIVA/DIVA include: monitoring a user supplied function which can be separate from the derivative; dynamically controlling the step size; displaying or not displaying output at initial, final, and step size change points; saving the estimated local error; and reverse communication where subroutines return to the user for output or computation of derivatives instead of automatically performing calculations. The user must supply SIVA/DIVA with: 1) the number of equations; 2) initial values for the dependent and independent variables, integration stepsize, error tolerance, etc.; and 3) the driver program and operational parameters necessary for subroutine execution. SIVA/DIVA contains an extensive diagnostic message library should errors occur during execution. SIVA/DIVA is written in FORTRAN 77 for batch execution and is machine independent. It has a central memory requirement of approximately 120K of 8 bit bytes. This program was developed in 1983 and last updated in 1987.
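
    SIVA/DIVA varies order and step size automatically; as a fixed second-order sketch of the underlying predictor-corrector idea (in Python rather than the package's FORTRAN 77), one Adams-Bashforth/Adams-Moulton step looks like this:

      import numpy as np

      def abm2_step(f, t, y, f_prev, h):
          """One 2nd-order Adams-Bashforth/Adams-Moulton predictor-corrector
          step; f_prev is the derivative at the previous mesh point."""
          f_n = f(t, y)
          y_pred = y + h * (1.5 * f_n - 0.5 * f_prev)      # AB2 predictor
          y_corr = y + 0.5 * h * (f_n + f(t + h, y_pred))  # AM2 corrector
          return y_corr, f_n

      # Example: y' = -y; bootstrap the two-step method with the exact value
      f = lambda t, y: -y
      h = 0.1
      y1 = np.exp(-h)                       # value at t = h
      y2, _ = abm2_step(f, h, y1, f(0.0, 1.0), h)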

  8. Optimization method to determine mass transfer variables in a PWR crud deposition risk assessment tool

    International Nuclear Information System (INIS)

    Do, Chuong; Hussey, Dennis; Wells, Daniel M.; Epperson, Kenny

    2016-01-01

    A numerical optimization method was implemented to determine several mass transfer coefficients in a crud-induced power shift risk assessment code. The approach was to utilize a multilevel strategy that targets different model parameters: it first changes the major-order variables, the mass transfer inputs, and then calibrates the minor-order variables, the crud source terms, according to available plant data. In this manner, the mass transfer inputs are effectively made 'dependent' on the crud source terms. Two optimization studies were performed using DAKOTA, a design and analysis toolkit, the difference between the runs being the number of BOA model runs allowed for adjusting the crud source terms, and therefore the uncertainty associated with calibration. The result of the first case showed that the current best-estimate values for the mass transfer coefficients, which were derived from first-principles analysis, can be considered an optimized set. When the run limit of BOA was increased for the second case, an improvement in the prediction was obtained, with the results deviating slightly from the best-estimate values. (author)
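
    The multilevel strategy can be pictured as a nested optimization; the sketch below uses SciPy with a stand-in objective (simulate_boa is a hypothetical placeholder, not the BOA code, and the plant data are invented).

      import numpy as np
      from scipy.optimize import minimize

      plant_data = np.array([1.1, 0.9, 1.2])   # stand-in for plant observations

      def simulate_boa(mass_transfer, crud_source):
          # placeholder physics: scaled response plus a uniform source term
          return mass_transfer[0] * np.array([1.0, 0.7, 1.3]) + crud_source

      def calibrate_sources(mass_transfer):
          """Minor-order level: tune the crud source term for fixed inputs."""
          obj = lambda s: np.sum(
              (simulate_boa(mass_transfer, s[0]) - plant_data) ** 2)
          return minimize(obj, x0=[0.0]).fun

      # Major-order level: search over the mass-transfer coefficient(s)
      best = minimize(calibrate_sources, x0=[1.0], method="Nelder-Mead")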

  9. Sources of variability in the determination by evaporation method of gross alpha activity in water samples

    Energy Technology Data Exchange (ETDEWEB)

    Baeza, A.; Corbacho, J.A. [LARUEX, Caceres (Spain). Environmental Radioactivity Lab.

    2013-07-01

    Determining the gross alpha activity concentration of water samples is one way to screen for waters whose radionuclide content is so high that its consumption could imply surpassing the Total Indicative Dose as defined in European Directive 98/83/EC. One of the most commonly used methods to prepare the sources to measure gross alpha activity in water samples is desiccation. Its main advantages are the simplicity of the procedure, the low cost of source preparation, and the possibility of simultaneously determining the gross beta activity. The preparation of the source, the construction of the calibration curves, and the measurement procedure itself involve, however, various factors that may introduce sufficient variability into the results to significantly affect the screening process. We here identify the main sources of this variability, and propose specific procedures to follow in the desiccation process that will reduce the uncertainties, and ensure that the result is indeed representative of the sum of the activities of the alpha emitters present in the sample. (orig.)
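
    The screening quantity itself follows a simple counting relation; the symbols below (net count rate, counting efficiency from the calibration curve, evaporated volume) are our assumed notation for the standard formula, and the numbers are invented.

      def gross_alpha_activity(counts_gross, counts_bg, live_time_s,
                               efficiency, volume_L):
          """Activity concentration (Bq/L) from a desiccated-source count."""
          net_rate = (counts_gross - counts_bg) / live_time_s
          return net_rate / (efficiency * volume_L)

      a_bq_per_l = gross_alpha_activity(420, 60, 60000, 0.25, 0.5)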

  10. LandScape: a simple method to aggregate p-values and other stochastic variables without a priori grouping

    DEFF Research Database (Denmark)

    Wiuf, Carsten; Pallesen, Jonatan; Foldager, Leslie

    2016-01-01

    ... and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values without relying on a priori criteria are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer... variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method...
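
    As a generic illustration only (not the LandScape algorithm itself, whose details the truncated record does not give), one classical building block for aggregating a run of p-values is Fisher's method:

      import numpy as np
      from scipy import stats

      def fisher_aggregate(pvals):
          """Combine p-values: -2*sum(log p) is chi-squared with 2k d.o.f."""
          stat = -2.0 * np.sum(np.log(pvals))
          return stats.chi2.sf(stat, df=2 * len(pvals))

      p_run = np.array([0.04, 0.01, 0.20, 0.03])
      p_agg = fisher_aggregate(p_run)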

  11. Instrumental methods in electrochemistry

    CERN Document Server

    Pletcher, D; Peat, R

    2010-01-01

    Using 372 references and 211 illustrations, this book underlines the fundamentals of electrochemistry essential to the understanding of laboratory experiments. It treats not only the fundamental concepts of electrode reactions, but also covers the methodology and practical application of the many versatile electrochemical techniques available.

  12. Variable Camber Continuous Aerodynamic Control Surfaces and Methods for Active Wing Shaping Control

    Science.gov (United States)

    Nguyen, Nhan T. (Inventor)

    2016-01-01

    An aerodynamic control apparatus for an air vehicle improves various aerodynamic performance metrics by employing multiple spanwise flap segments that jointly form a continuous or a piecewise continuous trailing edge to minimize drag induced by lift or vortices. At least one of the multiple spanwise flap segments includes a variable camber flap subsystem having multiple chordwise flap segments that may be independently actuated. Some embodiments also employ a continuous leading edge slat system that includes multiple spanwise slat segments, each of which has one or more chordwise slat segments. A method and an apparatus for implementing active control of a wing shape are also described; they include determining the desired lift distribution, from which the improved aerodynamic deflection of the wings is obtained. Flap deflections are then determined, and control signals are generated to actively control the wing shape to approximate the desired deflection.
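
    The final step (deflections from a desired lift distribution) can be sketched as a least-squares problem; the influence matrix below is a made-up placeholder, not data from the patent.

      import numpy as np

      # spanwise lift increment per unit deflection of each of three
      # flap segments (values invented for illustration)
      A = np.array([[0.9, 0.1, 0.0],
                    [0.2, 0.8, 0.2],
                    [0.0, 0.1, 0.9]])
      target_lift = np.array([1.0, 1.4, 0.9])   # desired spanwise distribution

      deflections, *_ = np.linalg.lstsq(A, target_lift, rcond=None)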

  13. [Heart rate variability as a method of assessing the autonomic nervous system in polycystic ovary syndrome].

    Science.gov (United States)

    de Sá, Joceline Cássia Ferezini; Costa, Eduardo Caldas; da Silva, Ester; Azevedo, George Dantas

    2013-09-01

    Polycystic ovary syndrome (PCOS) is an endocrine disorder associated with several cardiometabolic risk factors, such as central obesity, insulin resistance, type 2 diabetes, metabolic syndrome, and hypertension. These factors are associated with adrenergic overactivity, which is an important prognostic factor for the development of cardiovascular disorders. Given the common cardiometabolic disturbances occurring in PCOS, studies over the last years have investigated the cardiac autonomic control of these patients, mainly based on heart rate variability (HRV). Thus, in this review, we discuss the recent findings of studies that investigated the HRV of women with PCOS, as well as noninvasive methods for analyzing autonomic control based on the basic indexes of this methodology.

  14. Development of method for experimental determination of wheel–rail contact forces and contact point position by using instrumented wheelset

    International Nuclear Information System (INIS)

    Bižić, Milan B; Petrović, Dragan Z; Tomić, Miloš C; Djinović, Zoran V

    2017-01-01

    This paper presents the development of a unique method for experimental determination of wheel–rail contact forces and contact point position by using an instrumented wheelset (IWS). Solutions to key problems in the development of the IWS are proposed, such as the determination of the optimal locations, layout, number and way of connecting strain gauges, as well as the development of an inverse identification algorithm (IIA). The basis for the solution of these problems is the wheel model and the results of FEM calculations, while the IIA is based on the method of blind source separation using independent component analysis. In the first phase, the developed method was tested on a wheel model and high accuracy was obtained (parameters obtained with the IIA deviate from the parameters actually applied in the model by less than 2%). In the second phase, experimental tests on the real object, the IWS, were carried out. The signal-to-noise ratio was identified as the main parameter influencing the measurement accuracy. The obtained results have shown that the developed method enables measurement of the vertical and lateral wheel–rail contact forces Q and Y and their ratio Y/Q with estimated errors of less than 10%, while the estimated measurement error of the contact point position is less than 15%. At flange contact and higher values of the ratio Y/Q or of the Y force, the measurement errors are reduced, which is extremely important for the reliability and quality of experimental tests of safety against derailment of railway vehicles according to the standards UIC 518 and EN 14363. The obtained results have shown that the proposed method can be successfully applied in solving the problem of high-accuracy measurement of wheel–rail contact forces and contact point position using an IWS. (paper)
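
    The blind-source-separation step can be pictured with a standard ICA routine; the sketch below recovers two independent force-like components from synthetically mixed strain-gauge signals (the mixing matrix and sources are invented, not the paper's FEM-derived model).

      import numpy as np
      from sklearn.decomposition import FastICA

      t = np.linspace(0, 1, 2000)
      q_like = np.sin(2 * np.pi * 3 * t)             # vertical-force-like source
      y_like = np.sign(np.sin(2 * np.pi * 5 * t))    # lateral-force-like source
      S = np.c_[q_like, y_like]
      X = S @ np.array([[1.0, 0.4],
                        [0.6, 1.0]])                 # mixed bridge signals

      ica = FastICA(n_components=2, random_state=0)
      recovered = ica.fit_transform(X)               # estimated components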

  15. A finite difference method for space fractional differential equations with variable diffusivity coefficient

    KAUST Repository

    Mustapha, K.

    2017-06-03

    Anomalous diffusion is a phenomenon that cannot be modeled accurately by second-order diffusion equations, but is better described by fractional diffusion models. The nonlocal nature of the fractional diffusion operators makes the mathematical analysis of these models, and the establishment of suitable numerical schemes, substantially more difficult. This paper proposes and analyzes the first finite difference method for solving variable-coefficient fractional differential equations, with two-sided fractional derivatives, in one-dimensional space. The proposed scheme combines first-order forward and backward Euler methods for approximating the left-sided fractional derivative, while the right-sided fractional derivative is approximated by two consecutive applications of the first-order backward Euler method. The finite difference scheme reduces to the standard second-order central difference scheme in the absence of fractional derivatives. The existence and uniqueness of the solution for the proposed scheme are proved, and truncation errors of order h are demonstrated, where h denotes the maximum space step size. The numerical tests illustrate the global O(h) accuracy of the scheme, except for nonsmooth cases which, as expected, have deteriorated convergence rates.
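
    A standard first-order building block for such schemes (not the paper's exact discretization) is the Grünwald-Letnikov approximation of a left-sided fractional derivative; a compact sketch:

      import numpy as np

      def gl_weights(alpha, n):
          """Grünwald-Letnikov weights g_k = (-1)^k * binom(alpha, k),
          via the stable recurrence g_k = g_{k-1} * (k - 1 - alpha) / k."""
          w = np.empty(n)
          w[0] = 1.0
          for k in range(1, n):
              w[k] = w[k - 1] * (k - 1 - alpha) / k
          return w

      def left_frac_derivative(u, alpha, h):
          """First-order approximation of the left-sided derivative of order
          alpha (0 < alpha < 1) of samples u on a uniform grid of step h."""
          w = gl_weights(alpha, len(u))
          return np.array([np.dot(w[:i + 1], u[i::-1])
                           for i in range(len(u))]) / h ** alpha

      x = np.linspace(0.0, 1.0, 101)
      d_half = left_frac_derivative(x ** 2, alpha=0.5, h=x[1] - x[0])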

  16. Topology Optimization Design of 3D Continuum Structure with Reserved Hole Based on Variable Density Method

    Directory of Open Access Journals (Sweden)

    Bai Shiye

    2016-05-01

    Full Text Available An objective function defined by the minimum compliance of topology optimization for a 3D continuum structure was established to search for the optimal material distribution under a predetermined volume constraint. Based on the improved SIMP (solid isotropic microstructures with penalization) model and a new sensitivity filtering technique, the basic iteration equations of 3D finite element analysis were deduced and solved by the optimality criterion method. All the above procedures were written in the MATLAB programming language, and topology optimization design examples of 3D continuum structures with reserved holes were examined repeatedly by observing various indexes, including compliance, maximum displacement, and density index. The influence of mesh, penalty factors, and filter radius on the topology results was analyzed. Computational results showed that compliance, maximum displacement, and the density index were sensitive to the mesh density. When the filter radius was larger than 1.0, the topology no longer exhibited the checkerboard pattern, suggesting that the presented sensitivity filtering method is valid. The penalty factor should be an integer, because iteration steps increased greatly when it was a non-integer. The above modified variable density method could provide technical routes for the topology optimization design of more complex 3D continuum structures in the future.
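
    Two ingredients named above can be sketched compactly for a 1D strip of elements (the paper works in 3D; this reduced Python illustration follows the classic SIMP conventions rather than the authors' exact code):

      import numpy as np

      def simp_stiffness(density, E0=1.0, Emin=1e-9, p=3.0):
          """SIMP interpolation: penalized Young's modulus per element."""
          return Emin + density ** p * (E0 - Emin)

      def sensitivity_filter(x, dc, rmin=1.5):
          """Weighted average of element sensitivities within radius rmin
          (linear-decay weights), which suppresses checkerboard patterns."""
          n = len(x)
          dc_f = np.zeros(n)
          for i in range(n):
              j = np.arange(max(0, i - int(rmin)), min(n, i + int(rmin) + 1))
              w = rmin - np.abs(j - i)
              dc_f[i] = np.sum(w * x[j] * dc[j]) / (x[i] * np.sum(w))
          return dc_f

      x = np.full(60, 0.4)              # element densities
      dc = -np.linspace(1.0, 2.0, 60)   # compliance sensitivities
      dc_filtered = sensitivity_filter(x, dc)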

  17. Development and validation of a new fallout transport method using variable spectral winds

    International Nuclear Information System (INIS)

    Hopkins, A.T.

    1984-01-01

    A new method was developed to incorporate variable winds into fallout transport calculations. The method uses spectral coefficients derived by the National Meteorological Center. Wind vector components are computed with the coefficients along the trajectories of falling particles. Spectral winds are used in the two-step method to compute dose rate on the ground, downwind of a nuclear cloud. First, the hotline is located by computing trajectories of particles from an initial, stabilized cloud, through spectral winds to the ground. The connection of particle landing points is the hotline. Second, dose rate on and around the hotline is computed by analytically smearing the falling cloud's activity along the ground. The feasibility of using spectral winds for fallout particle transport was validated by computing Mount St. Helens ashfall locations and comparing calculations to fallout data. In addition, an ashfall equation was derived for computing volcanic ash mass/area on the ground. Ashfall data and the ashfall equation were used to back-calculate an aggregated particle size distribution for the Mount St. Helens eruption cloud
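
    Step one of the two-step method can be sketched as a trajectory integration; the wind function below is a placeholder for winds reconstructed from spectral coefficients, and all values are invented.

      import numpy as np

      def wind(z):
          # placeholder for spectrally reconstructed winds at altitude z (m/s)
          return np.array([10.0 + 0.002 * z, 2.0])

      def landing_point(z0, settling_velocity, dt=10.0):
          """Integrate a particle's horizontal drift while it settles."""
          pos = np.zeros(2)            # east, north displacement (m)
          z = z0
          while z > 0.0:
              pos += wind(z) * dt      # advect by the local wind
              z -= settling_velocity * dt
          return pos

      hotline_point = landing_point(z0=12000.0, settling_velocity=1.0)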

  18. Nonlinear Methods to Assess Changes in Heart Rate Variability in Type 2 Diabetic Patients

    Energy Technology Data Exchange (ETDEWEB)

    Bhaskar, Roy, E-mail: imbhaskarall@gmail.com [Indian Institute of Technology (India); University of Connecticut, Farmington, CT (United States); Ghatak, Sobhendu [Indian Institute of Technology (India)]

    2013-10-15

    Heart rate variability (HRV) is an important indicator of autonomic modulation of cardiovascular function. Diabetes can alter cardiac autonomic modulation by damaging afferent inputs, thereby increasing the risk of cardiovascular disease. We applied nonlinear analytical methods to identify parameters associated with HRV that are indicative of changes in autonomic modulation of heart function in diabetic patients. We analyzed differences in HRV patterns between diabetic and age-matched healthy control subjects using nonlinear methods. Lagged Poincaré plot, autocorrelation, and detrended fluctuation analysis were applied to analyze HRV in electrocardiography (ECG) recordings. Lagged Poincaré plot analysis revealed significant changes in some parameters, suggestive of decreased parasympathetic modulation. The detrended fluctuation exponent derived from long-term fitting was higher than the short-term one in the diabetic population, which was also consistent with decreased parasympathetic input. The autocorrelation function of the deviation of inter-beat intervals exhibited a highly correlated pattern in the diabetic group compared with the control group. The HRV pattern significantly differs between diabetic patients and healthy subjects. All three statistical methods employed in the study may prove useful to detect the onset and extent of autonomic neuropathy in diabetic patients.
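
    A compact sketch of one of the three tools used above, detrended fluctuation analysis (DFA), applied to an RR-interval series (the series here is synthetic; scale choices are illustrative):

      import numpy as np

      def dfa(x, scales=(4, 8, 16, 32, 64)):
          """Return the DFA scaling exponent alpha of series x."""
          y = np.cumsum(x - np.mean(x))          # integrated series
          F = []
          for s in scales:
              rms = []
              for i in range(len(y) // s):
                  seg = y[i * s:(i + 1) * s]
                  t = np.arange(s)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)
                  rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
              F.append(np.mean(rms))
          # scaling exponent from the log-log slope of F(s)
          return np.polyfit(np.log(scales), np.log(F), 1)[0]

      rr = 0.8 + 0.05 * np.random.default_rng(3).standard_normal(3000)
      alpha = dfa(rr)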

  19. Nonlinear Methods to Assess Changes in Heart Rate Variability in Type 2 Diabetic Patients

    International Nuclear Information System (INIS)

    Bhaskar, Roy; Ghatak, Sobhendu

    2013-01-01

    Heart rate variability (HRV) is an important indicator of autonomic modulation of cardiovascular function. Diabetes can alter cardiac autonomic modulation by damaging afferent inputs, thereby increasing the risk of cardiovascular disease. We applied nonlinear analytical methods to identify parameters associated with HRV that are indicative of changes in autonomic modulation of heart function in diabetic patients. We analyzed differences in HRV patterns between diabetic and age-matched healthy control subjects using nonlinear methods. Lagged Poincaré plot, autocorrelation, and detrended fluctuation analysis were applied to analyze HRV in electrocardiography (ECG) recordings. Lagged Poincaré plot analysis revealed significant changes in some parameters, suggestive of decreased parasympathetic modulation. The detrended fluctuation exponent derived from long-term fitting was higher than the short-term one in the diabetic population, which was also consistent with decreased parasympathetic input. The autocorrelation function of the deviation of inter-beat intervals exhibited a highly correlated pattern in the diabetic group compared with the control group. The HRV pattern significantly differs between diabetic patients and healthy subjects. All three statistical methods employed in the study may prove useful to detect the onset and extent of autonomic neuropathy in diabetic patients.

  20. A finite difference method for space fractional differential equations with variable diffusivity coefficient

    KAUST Repository

    Mustapha, K.; Furati, K.; Knio, Omar; Maitre, O. Le

    2017-01-01

    Anomalous diffusion is a phenomenon that cannot be modeled accurately by second-order diffusion equations, but is better described by fractional diffusion models. The nonlocal nature of the fractional diffusion operators makes the mathematical analysis of these models, and the establishment of suitable numerical schemes, substantially more difficult. This paper proposes and analyzes the first finite difference method for solving variable-coefficient fractional differential equations, with two-sided fractional derivatives, in one-dimensional space. The proposed scheme combines first-order forward and backward Euler methods for approximating the left-sided fractional derivative, while the right-sided fractional derivative is approximated by two consecutive applications of the first-order backward Euler method. The finite difference scheme reduces to the standard second-order central difference scheme in the absence of fractional derivatives. The existence and uniqueness of the solution for the proposed scheme are proved, and truncation errors of order h are demonstrated, where h denotes the maximum space step size. The numerical tests illustrate the global O(h) accuracy of the scheme, except for nonsmooth cases which, as expected, have deteriorated convergence rates.