WorldWideScience

Sample records for attribution measuring uncertainty

  1. Greenhouse Gas Source Attribution: Measurements Modeling and Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhen [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States); LaFranchi, Brian W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ivey, Mark D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Schrader, Paul E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Michelsen, Hope A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bambha, Ray P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-09-01

In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2, along with air pollutants traditionally studied using CMAQ, at relatively high spatial and temporal resolution, with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches to atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion. Specifically, we use an Eulerian chemical transport model, CMAQ, and a Lagrangian particle dispersion model, FLEXPART-WRF. These two models share the same WRF
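The Bayesian source-inversion idea described above can be illustrated with a minimal sketch. This is not the project's actual system: the "footprint" vector, noise level, and sampler settings below are all hypothetical, and a single scalar source strength stands in for a full emission field. The sketch shows the core mechanics of Markov Chain Monte Carlo sampling of a posterior over a source given a linear transport model.

```python
import math
import random

random.seed(42)

# Toy linear "transport" model (hypothetical): predicted concentrations are
# a known footprint vector scaled by an unknown source strength s.
footprint = [0.8, 1.2, 0.5, 1.0]
true_s = 2.0
obs = [true_s * f for f in footprint]  # synthetic, noise-free observations
sigma = 0.1                            # assumed observation noise std

def log_post(s):
    # Gaussian likelihood with a flat prior on s >= 0
    if s < 0:
        return -math.inf
    sse = sum((o - s * f) ** 2 for o, f in zip(obs, footprint))
    return -0.5 * sse / sigma ** 2

# Metropolis random-walk sampler over the source strength
samples = []
s = 1.0
lp = log_post(s)
for _ in range(20000):
    prop = s + random.gauss(0.0, 0.2)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        s, lp = prop, lp_prop
    samples.append(s)

burned = samples[5000:]                   # discard burn-in
mean_s = sum(burned) / len(burned)        # posterior mean source strength
```

In a realistic setting the forward model evaluation inside `log_post` is expensive, which is exactly what the Polynomial Chaos Expansion surrogate mentioned in the abstract is meant to accelerate.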

  2. Measurement uncertainty.

    Science.gov (United States)

    Bartley, David; Lidén, Göran

    2008-08-01

The reporting of measurement uncertainty has recently undergone a major harmonization whereby characteristics of a measurement method obtained during its establishment and application are combined componentwise. For example, the sometimes-pesky systematic error is included. A bias component of uncertainty can often be easily established as the uncertainty in the bias. However, beyond simply arriving at a value for uncertainty, meaning can, where needed, sometimes be given to this uncertainty in terms of prediction confidence in uncertainty-based intervals covering what is to be measured. To this end, a link between the concepts of accuracy and uncertainty is established through a simple yet accurate approximation to a random variable known as the non-central Student's t-distribution. "Without a measureless and perpetual uncertainty, the drama of human life would be destroyed." (Winston Churchill)
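The non-central Student's t-distribution mentioned above is the standard tool for intervals that cover a stated fraction of a population with stated confidence. As a hedged illustration (not the paper's own approximation), the sketch below computes a classical one-sided tolerance factor k, such that with 95% confidence at least 95% of a normal population lies below xbar + k*s; the sample size n = 10 is arbitrary.

```python
import math
from scipy.stats import nct, norm

def tolerance_factor(n, p=0.95, conf=0.95):
    """One-sided normal tolerance factor k via the non-central t quantile."""
    delta = norm.ppf(p) * math.sqrt(n)          # non-centrality parameter
    return nct.ppf(conf, df=n - 1, nc=delta) / math.sqrt(n)

k = tolerance_factor(10)   # ~2.91 for n = 10, 95% coverage, 95% confidence
```

Note how k is much larger than the naive normal quantile 1.645: the extra width is exactly the price of estimating both the mean and the standard deviation from a small sample.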

  3. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
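The Monte Carlo principle mentioned above (the approach standardized in GUM Supplement 1) can be sketched briefly. The measurand, input values, and uncertainties below are invented for illustration: the volume of a cylinder is computed from a normally distributed radius and a rectangularly distributed height, and the output spread is read off directly from the simulated sample.

```python
import math
import random

random.seed(0)

# Monte Carlo propagation: V = pi * r^2 * h with assumed input distributions
N = 200_000
vols = []
for _ in range(N):
    r = random.gauss(10.0, 0.05)     # radius: normal, u(r) = 0.05
    h = random.uniform(49.9, 50.1)   # height: rectangular, half-width 0.1
    vols.append(math.pi * r * r * h)

mean_v = sum(vols) / N
u_v = math.sqrt(sum((v - mean_v) ** 2 for v in vols) / (N - 1))
```

For this nearly linear model the Monte Carlo standard uncertainty agrees with the analytic law-of-propagation result (about 158 in these units); the method's real value is that it needs no linearization and yields a full output distribution from which any coverage interval can be taken.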

  4. Measurement uncertainty relations

    Energy Technology Data Exchange (ETDEWEB)

    Busch, Paul, E-mail: paul.busch@york.ac.uk [Department of Mathematics, University of York, York (United Kingdom); Lahti, Pekka, E-mail: pekka.lahti@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Werner, Reinhard F., E-mail: reinhard.werner@itp.uni-hannover.de [Institut für Theoretische Physik, Leibniz Universität, Hannover (Germany)

    2014-04-15

    Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.
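The two families of relations contrasted in the abstract can be written side by side. In a hedged sketch using standard conventions (sigma denotes the spread of an observable's distribution in a state rho; epsilon denotes the Busch-Lahti-Werner error measures for an approximate joint measurement, based on distances between distributions), the quadratic-mean case for conjugate position and momentum reads:

```latex
% Preparation uncertainty: spreads of Q and P in one state \rho
\sigma_\rho(Q)\,\sigma_\rho(P) \;\ge\; \frac{\hbar}{2}

% Measurement uncertainty: errors of an approximate joint measurement
\varepsilon(Q)\,\varepsilon(P) \;\ge\; \frac{\hbar}{2}
```

The abstract's result that the optimal constants coincide for preparation and measurement uncertainty generalizes this quadratic case to means of order alpha.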

  5. Traceability and Measurement Uncertainty

    DEFF Research Database (Denmark)

    Tosello, Guido; De Chiffre, Leonardo

    2004-01-01

    respects necessary scientific precision and problem-solving approach of the field of engineering studies. Competences should be presented in a way that is methodologically and didactically optimised for employees with a mostly work-based vocational qualification and should at the same time be appealing...... and motivating to this important group. The developed e-learning system consists on 12 different chapters dealing with the following topics: 1. Basics 2. Traceability and measurement uncertainty 3. Coordinate metrology 4. Form measurement 5. Surface testing 6. Optical measurement and testing 7. Measuring rooms 8....... Machine tool testing 9. The role of manufacturing metrology for QM 10. Inspection planning 11. Quality management of measurements incl. Documentation 12. Advanced manufacturing measurement technology The present report (which represents the section 2 - Traceability and Measurement Uncertainty – of the e...

  6. On Uncertainties in Successive Measurements

    CERN Document Server

    Distler, Jacques

    2012-01-01

    When you measure an observable, A, in Quantum Mechanics, the state of the system changes. This, in turn, affects the quantum-mechanical uncertainty in some non-commuting observable, B. The standard Uncertainty Relation puts a lower bound on the uncertainty of B in the initial state. What is relevant for a subsequent measurement of B, however, is the uncertainty of B in the post-measurement state. We make some remarks on the latter problem, both in the case where A has a pure point spectrum and in the case where A has a continuous spectrum.

  7. Incentive salience attribution under reward uncertainty: A Pavlovian model.

    Science.gov (United States)

    Anselme, Patrick

    2015-02-01

    There is a vast literature on the behavioural effects of partial reinforcement in Pavlovian conditioning. Compared with animals receiving continuous reinforcement, partially rewarded animals typically show (a) a slower development of the conditioned response (CR) early in training and (b) a higher asymptotic level of the CR later in training. This phenomenon is known as the partial reinforcement acquisition effect (PRAE). Learning models of Pavlovian conditioning fail to account for it. In accordance with the incentive salience hypothesis, it is here argued that incentive motivation (or 'wanting') plays a more direct role in controlling behaviour than does learning, and reward uncertainty is shown to have an excitatory effect on incentive motivation. The psychological origin of that effect is discussed and a computational model integrating this new interpretation is developed. Many features of CRs under partial reinforcement emerge from this model.

  8. Evaluation of measurement uncertainty of glucose in clinical chemistry.

    Science.gov (United States)

    Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y

    2007-04-01

The International Vocabulary of Basic and General Terms in Metrology (VIM) defines uncertainty of measurement as a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition, accredited institutions are expected to report a measurement uncertainty value alongside each measured parameter; this value indicates the reliability of the measurement. The Guide to the Expression of Uncertainty in Measurement (GUM) contains directions for evaluating uncertainty, and the Eurachem/CITAC Guide CG4 was published by the Eurachem/CITAC Working Group in 2000. Both offer a mathematical model with which uncertainty can be calculated. There are two types of uncertainty evaluation in measurement: Type A is the evaluation of uncertainty through statistical analysis, and Type B is the evaluation of uncertainty through other means, for example, a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) a rectangular distribution, which gives limits without specifying a level of confidence (u(x) = a/√3), e.g. for a certificate value; (2) a triangular distribution, for values clustered near the central point (u(x) = a/√6); (3) a normal distribution, in which an uncertainty is given in the form of a standard deviation s, a standard deviation of the mean s/√n, or a coefficient of variation CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) a confidence interval.
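The distribution conventions listed above reduce to a handful of one-line conversions from a stated half-width or replicate statistics to a standard uncertainty. A minimal sketch, following the Eurachem/CITAC conventions (with a the half-width of the stated interval):

```python
import math

def u_rectangular(a):
    """Limits +/- a with no stated confidence level: u = a / sqrt(3)."""
    return a / math.sqrt(3)

def u_triangular(a):
    """Values clustered near the centre of +/- a: u = a / sqrt(6)."""
    return a / math.sqrt(6)

def u_normal_from_expanded(U, k=2):
    """Certificate quoting expanded uncertainty U = k * u: u = U / k."""
    return U / k

def u_type_a(s, n):
    """Type A: standard deviation of the mean of n repeated readings."""
    return s / math.sqrt(n)
```

For the same half-width, the triangular assumption yields a smaller standard uncertainty than the rectangular one, reflecting the stronger assumption that values concentrate near the centre.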

  9. Facility Measurement Uncertainty Analysis at NASA GRC

    Science.gov (United States)

    Stephens, Julia; Hubbard, Erin

    2016-01-01

    This presentation provides and overview of the measurement uncertainty analysis currently being implemented in various facilities at NASA GRC. This presentation includes examples pertinent to the turbine engine community (mass flow and fan efficiency calculation uncertainties.

  10. Where does the uncertainty come from? Attributing Uncertainty in Conceptual Hydrologic Modelling

    Science.gov (United States)

    Abu Shoaib, S.; Marshall, L. A.; Sharma, A.

    2015-12-01

Defining an appropriate forecasting model is a key phase in water resources planning and design. Quantification of uncertainty is an important step in the development and application of hydrologic models. In this study, we examine the dependency of hydrologic model uncertainty on the observed model inputs, the defined model structure, parameter identifiability, and the specified likelihood. We present here a new uncertainty metric, the Quantile Flow Deviation or QFD, to evaluate the relative uncertainty due to each of these sources under a range of catchment conditions. Through the metric, we may identify the potential spectrum of uncertainty and variability in model simulations. The QFD assesses uncertainty by estimating the deviation in flows at a given quantile across a range of scenarios. By using a quantile based metric, the change in uncertainty across individual percentiles can be assessed, thereby allowing uncertainty to be expressed as a function of time. The QFD method can be disaggregated to examine any part of the modelling process including the selection of certain model subroutines or forcing data. Case study results (including catchments in Australia and the USA) suggest that model structure selection is vital irrespective of the flow percentile of interest or the catchment being studied. Examining the QFD across various quantiles additionally demonstrates that lower yielding catchments may have greater variation due to selected model structures. By incorporating multiple model structures, it is possible to assess (i) the relative importance of various sources of uncertainty, (ii) how these vary with the change in catchment location or hydrologic regime; and (iii) the impact of the length of available observations in uncertainty quantification.
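The abstract does not give the QFD formula, so the sketch below is one plausible reading, not the authors' definition: for a given quantile, compute the flow at that quantile in each scenario of an ensemble (different model structures, inputs, or parameter sets) and take the spread across scenarios as the deviation. The ensemble here is synthetic lognormal data, purely for illustration.

```python
import numpy as np

np.random.seed(1)

# Hypothetical ensemble: rows = modelling scenarios, columns = daily flows.
ensemble = np.random.lognormal(mean=2.0, sigma=0.6, size=(8, 365))

def quantile_flow_deviation(flows, q):
    """Spread across scenarios of the flow at quantile q (one reading of QFD)."""
    per_scenario = np.quantile(flows, q, axis=1)    # quantile per scenario
    return per_scenario.max() - per_scenario.min()  # deviation across scenarios

qfd_low = quantile_flow_deviation(ensemble, 0.1)    # low-flow uncertainty
qfd_high = quantile_flow_deviation(ensemble, 0.9)   # high-flow uncertainty
```

Evaluating the deviation quantile by quantile is what lets uncertainty be attributed separately to low-flow and high-flow behaviour, as the abstract describes.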

  11. Errors and Uncertainty in Physics Measurement.

    Science.gov (United States)

    Blasiak, Wladyslaw

    1983-01-01

    Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…

  12. Impact of uncertainty in attributing modeled North American terrestrial carbon fluxes to anthropogenic forcings

    Science.gov (United States)

    Ricciuto, D. M.

    2015-12-01

Although much progress has been made in the past decade in constraining the net North American terrestrial carbon flux, considerable uncertainty remains in the sink magnitude and trend. Terrestrial carbon cycle models are increasing in spatial resolution, complexity and predictive skill, allowing for increased process-level understanding and attribution of net carbon fluxes to specific causes. Here we examine the various sources of uncertainty, including driver uncertainty, model parameter uncertainty, and structural uncertainty; the contribution of each type of uncertainty to the net sink; and the attribution of this sink to anthropogenic causes: increasing CO2 concentrations, nitrogen deposition, land use change, and changing climate. To examine driver and parameter uncertainty, model simulations are performed using the Community Land Model version 4.5 (CLM4.5) with literature-based parameter ranges and three different reanalysis meteorological forcing datasets. We also examine structural uncertainty through analysis of the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP). Identifying major sources of uncertainty can help to guide future observations, experiments, and model development activities.

  13. Exploring the uncertainty in attributing sediment contributions in fingerprinting studies due to uncertainty in determining element concentrations in source areas.

    Science.gov (United States)

    Gomez, Jose Alfonso; Owens, Phillip N.; Koiter, Alex J.; Lobb, David

    2016-04-01

One of the major sources of uncertainty in attributing sediment sources in fingerprinting studies is the uncertainty in determining the concentrations of the elements used in the mixing model due to the variability of the concentrations of these elements in the source materials (e.g., Kraushaar et al., 2015). The uncertainty in determining the "true" concentration of a given element in each one of the source areas depends on several factors, among them the spatial variability of that element, the sampling procedure and sampling density. Researchers have limited control over these factors, and usually sampling density tends to be sparse, limited by time and the resources available. Monte Carlo analysis has been used regularly in fingerprinting studies to explore the probable solutions within the measured variability of the elements in the source areas, providing an appraisal of the probability of the different solutions (e.g., Collins et al., 2012). This problem can be considered analogous to the propagation of uncertainty in hydrologic models due to uncertainty in the determination of the values of the model parameters, and there are many examples of Monte Carlo analysis of this uncertainty (e.g., Freeze, 1980; Gómez et al., 2001). Some of these model analyses rely on the simulation of "virtual" situations that were calibrated from parameter values found in the literature, with the purpose of providing insight about the response of the model to different configurations of input parameters. This approach - evaluating the answer for a "virtual" problem whose solution could be known in advance - might be useful in evaluating the propagation of uncertainty in mixing models in sediment fingerprinting studies. In this communication, we present the preliminary results of an on-going study evaluating the effect of variability of element concentrations in source materials, sampling density, and the number of elements included in the mixing models. For this study a virtual ...

  14. Uncertainty under quantum measures and quantum memory

    Science.gov (United States)

    Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing

    2017-04-01

    The uncertainty principle restricts potential information one gains about physical properties of the measured particle. However, if the particle is prepared in entanglement with a quantum memory, the corresponding entropic uncertainty relation will vary. Based on the knowledge of correlations between the measured particle and quantum memory, we have investigated the entropic uncertainty relations for two and multiple measurements and generalized the lower bounds on the sum of Shannon entropies without quantum side information to those that allow quantum memory. In particular, we have obtained generalization of Kaniewski-Tomamichel-Wehner's bound for effective measures and majorization bounds for noneffective measures to allow quantum side information. Furthermore, we have derived several strong bounds for the entropic uncertainty relations in the presence of quantum memory for two and multiple measurements. Finally, potential applications of our results to entanglement witnesses are discussed via the entropic uncertainty relation in the absence of quantum memory.
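As background for the quantum-memory relations generalized above (stated here for context, not as the paper's new bound), the well-known Berta et al. entropic uncertainty relation for two measurements X and Z on a particle A entangled with a memory B reads:

```latex
S(X|B) + S(Z|B) \;\ge\; \log_2\frac{1}{c} + S(A|B),
\qquad c = \max_{j,k}\,\bigl|\langle x_j | z_k\rangle\bigr|^2
```

Here S(·|B) is the conditional von Neumann entropy and c measures the overlap of the two measurement bases. When A and B are entangled, S(A|B) can be negative, tightening the bound below the memoryless limit log2(1/c); this is also what connects such relations to the entanglement-witness application mentioned in the abstract.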

  15. Symbolic computation for evaluation of measurement uncertainty

    OpenAIRE

    Wei, P.; Yang, QP; Salleh; Jones, BE

    2007-01-01

In recent years, with the rapid development of symbolic computation, the integration of symbolic and numeric methods has been increasingly applied. This paper proposes the use of symbolic computation for the evaluation of measurement uncertainty. The general method and procedure are discussed, and the approach's great potential and powerful features for measurement uncertainty evaluation have been demonstrated through examples.
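The idea can be sketched concretely: the GUM law of propagation needs the partial derivatives (sensitivity coefficients) of the measurement model, and a symbolic engine can produce them exactly instead of by hand or by finite differences. The example below is an illustration of this general approach (not the paper's own procedure), using SymPy on the model V = pi r^2 h with invented input values.

```python
import sympy as sp

# Measurement model and its symbolic GUM propagation
r, h, u_r, u_h = sp.symbols('r h u_r u_h', positive=True)
V = sp.pi * r**2 * h

# Combined standard uncertainty: quadrature sum of (dV/dx * u_x) terms,
# with sensitivity coefficients obtained by symbolic differentiation.
u_V = sp.sqrt(sum((sp.diff(V, x) * u) ** 2 for x, u in [(r, u_r), (h, u_h)]))
u_V = sp.simplify(u_V)

# Numerical evaluation at a specific (hypothetical) operating point
val = float(u_V.subs({r: 10, h: 50, u_r: 0.05, u_h: 0.0577}))
```

Keeping `u_V` symbolic until the final substitution is the point of the method: the same expression can be re-evaluated at any operating point, and the dominant sensitivity term can be read off by inspection.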

  16. Measurement Errors and Uncertainties Theory and Practice

    CERN Document Server

    Rabinovich, Semyon G

    2006-01-01

    Measurement Errors and Uncertainties addresses the most important problems that physicists and engineers encounter when estimating errors and uncertainty. Building from the fundamentals of measurement theory, the author develops the theory of accuracy of measurements and offers a wealth of practical recommendations and examples of applications. This new edition covers a wide range of subjects, including: - Basic concepts of metrology - Measuring instruments characterization, standardization and calibration -Estimation of errors and uncertainty of single and multiple measurements - Modern probability-based methods of estimating measurement uncertainty With this new edition, the author completes the development of the new theory of indirect measurements. This theory provides more accurate and efficient methods for processing indirect measurement data. It eliminates the need to calculate the correlation coefficient - a stumbling block in measurement data processing - and offers for the first time a way to obtain...

  17. Uncertainty of measurement: an immunology laboratory perspective.

    Science.gov (United States)

    Beck, Sarah C; Lock, Robert J

    2015-01-01

'Measurement uncertainty of measured quantity values' (ISO 15189) requires that the laboratory shall determine the measurement uncertainty for procedures used to report measured quantity values on patients' samples. Where we have numeric data, measurement uncertainty can be expressed as the standard deviation or as the coefficient of variation. However, in immunology many of the assays are reported either as semi-quantitative (i.e. an antibody titre) or qualitative (positive or negative) results. In the latter context, measuring uncertainty is considerably more difficult. There are, however, strategies which can allow us to minimise uncertainty. A number of parameters can contribute to making measurements uncertain. These include bias, precision, standard uncertainty (expressed as standard deviation or coefficient of variation), sensitivity, specificity, repeatability, reproducibility and verification. Closely linked to these are traceability and standardisation. In this article we explore the challenges presented to immunology with regard to measurement uncertainty. Many of these challenges apply equally to other disciplines working with qualitative or semi-quantitative data.

  18. Quantifying relative uncertainties in the detection and attribution of human-induced climate change on winter streamflow

    Science.gov (United States)

    Ahn, Kuk-Hyun; Merwade, Venkatesh; Ojha, C. S. P.; Palmer, Richard N.

    2016-11-01

Despite the recent popularity of investigating human-induced climate change in regional areas, the contributors to the relative uncertainties in the process remain poorly understood. To remedy this, this study presents a statistical framework to quantify relative uncertainties in a detection and attribution study. Primary uncertainty contributors are categorized into three types: climate data, hydrologic, and detection uncertainties. While an ensemble of climate models is used to define climate data uncertainty, hydrologic uncertainty is defined using a Bayesian approach. Before relative uncertainties in the detection and attribution study are quantified, an optimal fingerprint-based detection and attribution analysis is employed to investigate changes in winter streamflow in the Connecticut River Basin, which is located in the Eastern United States. Results indicate that winter streamflow over a period of 64 years (1950-2013) lies outside the range expected from natural variability of climate alone with a 90% confidence interval in the climate models. Investigation of relative uncertainties shows that the uncertainty linked to the climate data is greater than the uncertainty induced by hydrologic modeling. Detection uncertainty, defined as the uncertainty related to the time evolution of the anthropogenic climate change in the historical data (signal) above the natural internal climate variability (noise), shows that uncertainties in natural internal climate variability (piControl) scenarios may be the source of a significant degree of uncertainty in the regional detection and attribution study.
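The optimal-fingerprint method used above amounts to a generalized least squares regression of the observed record onto model-simulated response patterns, weighted by the covariance of internal variability. The sketch below is a toy one-signal version with invented numbers, not the study's analysis: a linear warming signal is recovered from synthetic "observations", and detection corresponds to the scaling factor being significantly greater than zero.

```python
import numpy as np

np.random.seed(0)

# Toy fingerprint regression y = X * beta + noise
n = 64                                        # 64-year record, as in the study
X = np.linspace(0.0, 1.0, n).reshape(-1, 1)   # model-simulated signal pattern
C = np.diag(np.full(n, 0.04))                 # internal-variability covariance
true_beta = 1.2
y = true_beta * X[:, 0] + np.random.multivariate_normal(np.zeros(n), C)

# Generalized least squares estimate of the scaling factor and its variance
Cinv = np.linalg.inv(C)
beta_hat = np.linalg.solve(X.T @ Cinv @ X, X.T @ Cinv @ y)[0]
var_beta = np.linalg.inv(X.T @ Cinv @ X)[0, 0]
```

In a real study C is estimated from control-run (piControl) simulations, which is exactly why uncertainty in natural internal variability propagates into the detection result, as the abstract notes.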

  19. Uncertainty of dose measurement in radiation processing

    DEFF Research Database (Denmark)

    Miller, A.

    1996-01-01

The major standard organizations of the world have addressed the issue of reporting uncertainties in measurement reports and certificates. There is, however, still some ambiguity in the minds of many people who try to implement the recommendations in real life. This paper is a contribution to the running debate and presents the author's view, which is based upon experience in radiation processing dosimetry. The origin of all uncertainty components must be identified and can be classified according to Type A and Type B, but it is equally important to separate the uncertainty components into those that contribute to the observable uncertainty of repeated measurements and those that do not. Examples of the use of these principles are presented in the paper.
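The Type A / Type B classification mentioned above combines mechanically once each component is expressed as a standard uncertainty. A minimal sketch with invented readings and certificate values: a Type A component from repeated measurements and a Type B component from a calibration certificate are combined in quadrature.

```python
import math
import statistics

# Type A: statistical analysis of repeated readings (hypothetical data)
readings = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03]
u_a = statistics.stdev(readings) / math.sqrt(len(readings))  # std. dev. of mean

# Type B: calibration certificate quoting expanded uncertainty U with k = 2
U_cert, k_cert = 0.02, 2
u_b = U_cert / k_cert

# Combined standard uncertainty (quadrature sum)
u_c = math.hypot(u_a, u_b)
```

The separation the author argues for is orthogonal to this arithmetic: the certificate component u_b does not show up in the scatter of repeated measurements, so repeatability alone understates the combined uncertainty.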

  20. Not Normal: the uncertainties of scientific measurements

    Science.gov (United States)

    Bailey, David C.

    2017-01-01

    Judging the significance and reproducibility of quantitative research requires a good understanding of relevant uncertainties, but it is often unclear how well these have been evaluated and what they imply. Reported scientific uncertainties were studied by analysing 41 000 measurements of 3200 quantities from medicine, nuclear and particle physics, and interlaboratory comparisons ranging from chemistry to toxicology. Outliers are common, with 5σ disagreements up to five orders of magnitude more frequent than naively expected. Uncertainty-normalized differences between multiple measurements of the same quantity are consistent with heavy-tailed Student's t-distributions that are often almost Cauchy, far from a Gaussian Normal bell curve. Medical research uncertainties are generally as well evaluated as those in physics, but physics uncertainty improves more rapidly, making feasible simple significance criteria such as the 5σ discovery convention in particle physics. Contributions to measurement uncertainty from mistakes and unknown problems are not completely unpredictable. Such errors appear to have power-law distributions consistent with how designed complex systems fail, and how unknown systematic errors are constrained by researchers. This better understanding may help improve analysis and meta-analysis of data, and help scientists and the public have more realistic expectations of what scientific results imply.

  1. Conclusions on measurement uncertainty in microbiology.

    Science.gov (United States)

    Forster, Lynne I

    2009-01-01

Since the first issue of ISO/IEC 17025 in 1999, testing laboratories wishing to comply with all of its requirements have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate, with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were ≥20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were <20, uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.
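The Poisson confidence limits referred to for low colony counts have a standard closed form (the exact Garwood limits via the chi-squared distribution). The sketch below illustrates that approach in general terms; it is not necessarily the published table the study compared against.

```python
from scipy.stats import chi2

def poisson_ci(k, conf=0.95):
    """Exact (Garwood) confidence limits for a Poisson count k."""
    alpha = 1.0 - conf
    lower = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * k + 2) / 2
    return lower, upper

lo20, hi20 = poisson_ci(20)     # roughly (12.2, 30.9)
lo100, hi100 = poisson_ci(100)  # relatively much tighter
```

The relative width shrinks roughly as 1/sqrt(k), which is why counts at or above about 20 give stable uncertainty estimates while lower counts are dominated by counting statistics.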

  2. Measuring the uncertainty of tapping torque

    DEFF Research Database (Denmark)

    Belluco, Walter; De Chiffre, Leonardo

An uncertainty budget was carried out for torque measurements performed at the Institut for Procesteknik for the evaluation of cutting fluids. Thirty test blanks were machined with one tool and one fluid, torque diagrams were recorded, and the repeatability of single torque measurements was estimated ...

  3. Teaching Measurement and Uncertainty the GUM Way

    Science.gov (United States)

    Buffler, Andy; Allie, Saalih; Lubben, Fred

    2008-01-01

    This paper describes a course aimed at developing understanding of measurement and uncertainty in the introductory physics laboratory. The course materials, in the form of a student workbook, are based on the probabilistic framework for measurement as recommended by the International Organization for Standardization in their publication "Guide to…

  4. Improving the uncertainty of photomask linewidth measurements

    Science.gov (United States)

    Pedulla, J. M.; Potzick, James; Silver, Richard M.

    2004-05-01

    The National Institute of Standards and Technology (NIST) is currently developing a photomask linewidth standard (SRM 2059) with a lower expected uncertainty of calibration than the previous NIST standards (SRMs 473, 475, 476). In calibrating these standards, optical simulation modeling has been used to predict the microscope image intensity profiles, which are then compared to the experimental profiles to determine the certified linewidths. Consequently, the total uncertainty in the linewidth calibration is a result of uncertainty components from the optical simulation modeling and uncertainty due to experimental errors or approximations (e.g., tool imaging errors and material characterization errors). Errors of approximation in the simulation model and uncertainty in the parameters used in the model can contribute a large component to the total linewidth uncertainty. We have studied the effects of model parameter variation on measurement uncertainty using several different optical simulation programs that utilize different mathematical techniques. We have also evaluated the effects of chrome edge runout and varying indices of refraction on the linewidth images. There are several experimental parameters that are not ordinarily included in the modeling simulation. For example, the modeling programs assume a uniform illuminating field (e.g., Koehler illumination), ideal optics and perfect optical alignment. In practice, determining whether Koehler illumination has been achieved is difficult, and the optical components and their alignments are never ideal. We will present some techniques for evaluating Koehler illumination and methods to compensate for scattered (flare) light. Any such experimental elements, that are assumed accurate in the modeling, may actually present significant components to the uncertainty and need to be quantitatively estimated. The present state of metrology does not permit the absolute calibration of linewidth standards to the level of

  5. On the Measurement of Randomness (Uncertainty): A More Informative Entropy

    Directory of Open Access Journals (Sweden)

    Tarald O. Kvålseth

    2016-04-01

    As a measure of randomness or uncertainty, the Boltzmann–Shannon entropy H has become one of the most widely used summary measures of a variety of attributes (characteristics) in different disciplines. This paper points out an often overlooked limitation of H: comparisons between differences in H-values are not valid. An alternative entropy H_K is introduced as a preferred member of a new family of entropies for which difference comparisons are proved to be valid by satisfying a given value-validity condition. The H_K is shown to have the appropriate properties for a randomness (uncertainty) measure, including a close linear relationship to a measurement criterion based on the Euclidean distance between probability distributions. This last point is demonstrated by means of computer-generated random distributions. The results are also compared with those of another member of the entropy family. A statistical inference procedure for the entropy H_K is formulated.
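For reference, the Boltzmann–Shannon entropy H whose difference comparisons the paper argues are invalid is the standard quantity below; the paper's alternative H_K is not reproduced here. The distributions are invented examples.

```python
import math

def shannon_entropy(p):
    """Boltzmann-Shannon entropy H of a discrete distribution, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

H_uniform = shannon_entropy([0.25] * 4)          # maximal for 4 outcomes: 2 bits
H_skewed = shannon_entropy([0.7, 0.1, 0.1, 0.1]) # lower: less uncertainty
```

H correctly ranks the uniform distribution as more uncertain than the skewed one; the paper's point is the subtler claim that a difference such as H_uniform - H_skewed does not carry valid quantitative meaning, motivating the value-validity condition behind H_K.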

  6. Uncertainty Measures of Regional Flood Frequency Estimators

    DEFF Research Database (Denmark)

    Rosbjerg, Dan; Madsen, Henrik

    1995-01-01

    Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...

  7. Uncertainties in the attribution of greenhouse gas warming and implications for climate prediction

    Science.gov (United States)

    Jones, Gareth S.; Stott, Peter A.; Mitchell, John F. B.

    2016-06-01

    Using optimal detection techniques with climate model simulations, most of the observed increase of near-surface temperatures over the second half of the twentieth century is attributed to anthropogenic influences. However, the partitioning of the anthropogenic influence to individual factors, such as greenhouse gases and aerosols, is much less robust. Differences in how forcing factors are applied, in their radiative influence and in models' climate sensitivities, substantially influence the response patterns. We find that standard optimal detection methodologies cannot fully reconcile this response diversity. By selecting a set of experiments to enable the diagnosing of greenhouse gases and the combined influence of other anthropogenic and natural factors, we find robust detections of well-mixed greenhouse gases across a large ensemble of models. Of the observed warming over the twentieth century of 0.65 K/century we find, using a multimodel mean not incorporating pattern uncertainty, a well-mixed greenhouse gas warming of 0.87 to 1.22 K/century. This is partially offset by cooling from other anthropogenic and natural influences of -0.54 to -0.22 K/century. Although better constrained than recent studies, the attributable trends across climate models are still wide, with implications for observationally constrained estimates of transient climate response. Some of the uncertainties could be reduced in future by having more model data to better quantify the simulated estimates of the signals and natural variability, by designing model experiments more effectively, and by better quantifying the climate models' radiative influences. Most importantly, how model pattern uncertainties are incorporated into the optimal detection methodology should be improved.

  8. Evaluation of measurement uncertainties in EUV scatterometry

    Science.gov (United States)

    Gross, H.; Scholze, F.; Rathsfeld, A.; Bär, M.

    2009-06-01

    Scatterometry, the analysis of light diffracted from a periodic structure, is a versatile metrology tool for characterizing periodic surface structures, regarding the critical dimension (CD) and other properties of the surface profile. For extreme ultraviolet (EUV) masks, only EUV radiation provides direct information on the mask performance comparable to the operating regime in an EUV lithography tool. With respect to the small feature dimensions on EUV masks, the short wavelength of EUV is also advantageous since it provides a large number of diffraction orders from the periodic structures irradiated. We present measurements on a prototype EUV mask with large fields of periodic line-space structures using an EUV reflectometer at the Berlin storage ring BESSY II and discuss the corresponding reconstruction results with respect to their measurement uncertainties. As a non-imaging, indirect optical method, scatterometry requires the solution of the inverse problem, i.e., the determination of the geometry parameters describing the surface profile from the measured light diffraction patterns. In the time-harmonic case the numerical simulation of the diffraction process for periodic 2D structures can be realized by the finite element solution of the two-dimensional Helmholtz equation. Restricting the solutions to a class of surface profiles and fixing the set of measurements, the inverse problem can be formulated as a nonlinear operator equation in Euclidean space. The operator maps the profile parameters to special efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation, i.e., we minimize the deviation of the calculated efficiencies from the measured ones by variation of the geometry parameters. The uncertainties of the reconstructed geometry parameters depend on the uncertainties of the input data and can be estimated by statistical methods like Monte Carlo or the covariance method applied to the
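The Gauss-Newton iteration described in this abstract can be sketched generically. This is not the authors' FEM-based code; the two-parameter exponential forward model below is an invented stand-in for the diffraction solver, used only to show the iteration itself:

```python
import numpy as np

def gauss_newton(f, jac, p0, y_meas, n_iter=20):
    """Minimize ||f(p) - y_meas||^2 by Gauss-Newton iteration.
    f maps parameters to predicted efficiencies; jac is its Jacobian."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = f(p) - y_meas                              # residual vector
        J = jac(p)                                     # Jacobian of forward model
        p = p - np.linalg.lstsq(J, r, rcond=None)[0]   # least-squares step
    return p

# Toy forward model standing in for the FEM diffraction solver: the
# "efficiencies" are a smooth nonlinear function of two profile parameters.
def f(p):
    x = np.linspace(0.0, 1.0, 5)
    return p[0] * np.exp(-p[1] * x)

def jac(p):
    x = np.linspace(0.0, 1.0, 5)
    e = np.exp(-p[1] * x)
    return np.column_stack([e, -p[0] * x * e])

y = f(np.array([2.0, 1.5]))                 # synthetic "measured" data
print(gauss_newton(f, jac, [1.0, 1.0], y))  # recovers ≈ [2.0, 1.5]
```

In the paper the residual is between measured and simulated diffraction efficiencies, and the parameter uncertainties are then propagated by Monte Carlo or covariance analysis around the converged solution.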

  9. Inconclusive quantum measurements and decisions under uncertainty

    CERN Document Server

    Yukalov, V I

    2016-01-01

    We give a mathematical definition for the notion of inconclusive quantum measurements. In physics, such measurements occur at intermediate stages of a complex measurement procedure, with the final measurement result being operationally testable. Since the mathematical structure of Quantum Decision Theory has been developed in analogy with the theory of quantum measurements, the inconclusive quantum measurements correspond, in Quantum Decision Theory, to intermediate stages of decision making in the process of taking decisions under uncertainty. The general form of the quantum probability for a composite event is the sum of a utility factor, describing a rational evaluation of the considered prospect, and of an attraction factor, characterizing irrational, subconscious attitudes of the decision maker. Despite the involved irrationality, the probability of prospects can be evaluated. This is equivalent to the possibility of calculating quantum probabilities without specifying hidden variables. We formulate a ge...

  10. Uncertainties in the attribution of greenhouse gas warming and implications for climate prediction

    CERN Document Server

    Jones, Gareth S; Mitchell, John F B

    2016-01-01

    Using optimal detection techniques with climate model simulations, most of the observed increase of near surface temperatures over the second half of the twentieth century is attributed to anthropogenic influences. However, the partitioning of the anthropogenic influence to individual factors, such as greenhouse gases and aerosols, is much less robust. Differences in how forcing factors are applied, in their radiative influence and in models' climate sensitivities, substantially influence the response patterns. We find standard optimal detection methodologies cannot fully reconcile this response diversity. By selecting a set of experiments to enable the diagnosing of greenhouse gases and the combined influence of other anthropogenic and natural factors, we find robust detections of well mixed greenhouse gases across a large ensemble of models. Of the observed warming over the 20th century of 0.65K/century we find, using a multi model mean not incorporating pattern uncertainty, a well mixed greenhouse gas warm...

  11. Schoolteacher Trainees' Difficulties about the Concepts of Attribute and Measurement

    Science.gov (United States)

    Passelaigue, Dominique; Munier, Valérie

    2015-01-01

    "Attribute" and "measurement" are two fundamental concepts in mathematics and physics. Teaching these concepts is essential even in elementary school, but numerous studies have pointed out pupils' difficulties with them. These studies emphasized that pupils must learn about attributes before being taught how to measure these…

  12. Optimizing step gauge measurements and uncertainties estimation

    Science.gov (United States)

    Hennebelle, F.; Coorevits, T.; Vincent, R.

    2017-02-01

    According to the standard ISO 10360-2 (2001 Geometrical product specifications (GPS)—acceptance and reverification tests for coordinate measuring machines (CMM)—part 2: CMMs used for measuring size (ISO 10360-2:2001)), we verify the coordinate measuring machine (CMM) performance against the manufacturer specification. There are many types of gauges used for the calibration and verification of CMMs. The step gauges with parallel faces (KOBA, MITUTOYO) are well-known gauges for performing this test. Often with these gauges, only unidirectional measurements are considered, which avoids having to deal with a residual error that affects the tip radius compensation. However, the ISO 10360-2 standard imposes the use of bidirectional measurements. Thus, the bidirectional measurements must be corrected for the constant residual probe offset. In this paper, we optimize the step gauge measurement, and a method is given to mathematically avoid the problem of the constant tip-radius offset. This method involves measuring the step gauge once and then measuring it a second time with a shift of one slot in order to obtain a new set of equations. Uncertainties are also presented.

  13. Measurement Uncertainties in Science and Technology

    CERN Document Server

    Grabe, Michael

    2005-01-01

    At the turn of the 19th century, Carl Friedrich Gauß founded error calculus by predicting the then unknown position of the planet Ceres. Ever since, error calculus has occupied a place at the heart of science. In this book, Grabe illustrates the breakdown of traditional error calculus in the face of modern measurement techniques. Revising Gauß’ error calculus ab initio, he treats random and unknown systematic errors on an equal footing from the outset. Furthermore, Grabe also proposes what may be called well defined measuring conditions, a prerequisite for defining confidence intervals that are consistent with basic statistical concepts. The resulting measurement uncertainties are as robust and reliable as required by modern-day science, engineering and technology.

  14. Multi-attribute mate choice decisions and uncertainty in the decision process: a generalized sequential search strategy.

    Science.gov (United States)

    Wiegmann, Daniel D; Weinersmith, Kelly L; Seubert, Steven M

    2010-04-01

    The behavior of females in search of a mate determines the likelihood that high quality males are encountered and adaptive search strategies rely on the effective use of available information on the quality of prospective mates. The sequential search strategy was formulated, like most models of search behavior, on the assumption that females obtain perfect information on the quality of encountered males. In this paper, we modify the strategy to allow for uncertainty of male quality and we determine how the magnitude of this uncertainty and the ability of females to inspect multiple male attributes to reduce uncertainty influence mate choice decisions. In general, searchers are sensitive to search costs and higher costs lower acceptance criteria under all versions of the model. The choosiness of searchers increases with the variability of the quality of prospective mates under conditions of the original model, but under conditions of uncertainty the choosiness of searchers may increase or decrease with the variability of inspected male attributes. The behavioral response depends on the functional relationship between observed male attributes and the fitness return to searchers and on costs associated with the search process. Higher uncertainty often induces searchers to pay more for information and under conditions of uncertainty the fitness return to searchers is never higher than under conditions of the original model. Further studies of the performance of alternative search strategies under conditions of uncertainty may consequently be necessary to identify search strategies likely to be used under natural conditions.
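A simulation of the kind of sequential search under imperfect information described above can be sketched as follows. The quality distribution, observation noise, acceptance threshold and search cost below are illustrative assumptions, not the paper's model:

```python
import random

def sequential_search(threshold, cost, noise_sd, seed=0):
    """Simulate one search: prospective mates are inspected sequentially,
    quality is observed with Gaussian noise (the imperfect cue), and the
    first male whose *observed* quality exceeds the reservation threshold
    is accepted. Returns the net payoff: true quality minus search costs."""
    rng = random.Random(seed)
    total_cost = 0.0
    while True:
        true_quality = rng.gauss(0.0, 1.0)                   # encountered male
        observed = true_quality + rng.gauss(0.0, noise_sd)   # noisy attribute
        total_cost += cost
        if observed >= threshold:
            return true_quality - total_cost

# With a fixed acceptance criterion, higher observation noise lowers the
# average realized payoff (accepted males regress toward the mean).
payoffs_low = [sequential_search(1.0, 0.01, 0.1, seed=i) for i in range(2000)]
payoffs_high = [sequential_search(1.0, 0.01, 1.0, seed=i) for i in range(2000)]
print(sum(payoffs_low) / len(payoffs_low), sum(payoffs_high) / len(payoffs_high))
```

Running this shows how uncertainty about male quality degrades the return to a fixed criterion, which is the trade-off the paper analyzes formally when deriving optimal acceptance criteria under uncertainty.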

  15. Uncertainty Analysis Technique for OMEGA Dante Measurements

    Energy Technology Data Exchange (ETDEWEB)

    May, M J; Widmann, K; Sorce, C; Park, H; Schneider, M

    2010-05-07

    The Dante is an 18-channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at X-ray energies between 50 eV and 10 keV. It is one of the main diagnostics installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
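The Monte Carlo parameter-variation idea can be illustrated with a toy unfold. The real Dante unfold algorithm and calibration data are not reproduced here; the channel voltages, responsivities and 5% calibration uncertainty below are invented placeholders:

```python
import random
import statistics

def unfold(voltages, responsivities):
    """Toy stand-in for the Dante unfold: recover a total flux as the sum
    of per-channel voltage / responsivity (the real algorithm is spectral)."""
    return sum(v / r for v, r in zip(voltages, responsivities))

rng = random.Random(42)
voltages = [1.0, 0.8, 0.5]   # measured channel voltages (arbitrary units)
resp = [2.0, 1.6, 1.0]       # calibrated channel responsivities (arbitrary)
cal_rel_sigma = 0.05         # assumed 1-sigma relative calibration uncertainty

# Monte Carlo parameter variation: draw perturbed calibrations from the
# one-sigma Gaussian error functions and re-run the unfold each time.
fluxes = []
for _ in range(10000):
    perturbed = [r * rng.gauss(1.0, cal_rel_sigma) for r in resp]
    fluxes.append(unfold(voltages, perturbed))

print(statistics.mean(fluxes), statistics.stdev(fluxes))  # flux and error bar
```

The spread of the resulting flux ensemble plays the role of the error bar, exactly as the abstract describes for the thousand perturbed voltage sets.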

  16. Improving Attribute-Importance Measurement : a Reference-Point Approach

    NARCIS (Netherlands)

    Ittersum, van K.; Pennings, J.M.E.; Wansink, B.; Trijp, van J.C.M.

    2004-01-01

    Despite the importance of identifying the hierarchy of product attributes that drive judgment and choice, the many available methods remain limited regarding their convergent validity and test-retest reliability. To increase the validity and reliability of attribute-importance measurement, we focus

  17. The Validity of Attribute-Importance Measurement: A Review

    NARCIS (Netherlands)

    Ittersum, van K.; Pennings, J.M.E.; Wansink, B.; Trijp, van J.C.M.

    2007-01-01

    A critical review of the literature demonstrates a lack of validity among the ten most common methods for measuring the importance of attributes in behavioral sciences. The authors argue that one of the key determinants of this lack of validity is the multi-dimensionality of attribute importance. Bu

  18. Uncertainties in pipeline water percentage measurement

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Bentley N.

    2005-07-01

    Measurement of the quantity, density, average temperature and water percentage in petroleum pipelines has been an issue of prime importance. The methods of measurement have been investigated and have seen continued improvement over the years. Questions are being asked as to the reliability of the measurement of water in the oil through sampling systems originally designed and tested for a narrow range of densities. Today most facilities sampling systems handle vastly increased ranges of density and types of crude oils. Issues of pipeline integrity, product loss and production balances are placing further demands on the issues of accurate measurement. Water percentage is one area that has not received the attention necessary to understand the many factors involved in making a reliable measurement. A previous paper1 discussed the issues of uncertainty of the measurement from a statistical perspective. This paper will outline many of the issues of where the errors lie in the manual and automatic methods in use today. A routine to use the data collected by the analyzers in the on line system for validation of the measurements will be described. (author) (tk)

  19. Inconclusive quantum measurements and decisions under uncertainty

    Directory of Open Access Journals (Sweden)

    Vyacheslav I. Yukalov

    2016-04-01

    We give a mathematical definition for the notion of inconclusive quantum measurements. In physics, such measurements occur at intermediate stages of a complex measurement procedure, with the final measurement result being operationally testable. Since the mathematical structure of Quantum Decision Theory has been developed in analogy with the theory of quantum measurements, the inconclusive quantum measurements correspond, in Quantum Decision Theory, to intermediate stages of decision making in the process of taking decisions under uncertainty. The general form of the quantum probability for a composite event is the sum of a utility factor, describing a rational evaluation of the considered prospect, and of an attraction factor, characterizing irrational, subconscious attitudes of the decision maker. Despite the involved irrationality, the probability of prospects can be evaluated. This is equivalent to the possibility of calculating quantum probabilities without specifying hidden variables. We formulate a general way of evaluation, based on the use of non-informative priors. As an example, we suggest the explanation of the decoy effect. Our quantitative predictions are in very good agreement with experimental data.

  20. Inconclusive quantum measurements and decisions under uncertainty

    Science.gov (United States)

    Yukalov, Vyacheslav; Sornette, Didier

    2016-04-01

    We give a mathematical definition for the notion of inconclusive quantum measurements. In physics, such measurements occur at intermediate stages of a complex measurement procedure, with the final measurement result being operationally testable. Since the mathematical structure of Quantum Decision Theory has been developed in analogy with the theory of quantum measurements, the inconclusive quantum measurements correspond, in Quantum Decision Theory, to intermediate stages of decision making in the process of taking decisions under uncertainty. The general form of the quantum probability for a composite event is the sum of a utility factor, describing a rational evaluation of the considered prospect, and of an attraction factor, characterizing irrational, subconscious attitudes of the decision maker. Despite the involved irrationality, the probability of prospects can be evaluated. This is equivalent to the possibility of calculating quantum probabilities without specifying hidden variables. We formulate a general way of evaluation, based on the use of non-informative priors. As an example, we suggest the explanation of the decoy effect. Our quantitative predictions are in very good agreement with experimental data.

  1. ATTRIBUTES AND THRESHOLDS IN MEASUREMENTS FOR TRANSPARENCY INITIATIVES

    Energy Technology Data Exchange (ETDEWEB)

    M. W. JOHNSON

    2000-09-01

    The collection of programs broadly termed Transparency Initiatives frequently involves physics measurements that are applied to items with sensitive or classified properties. The inability or reluctance to perform quantitative measurements, in the safeguards tradition, on such items, and then to expose the results to international examination, has impelled development of an attributes approach to measurements, following the philosophy "if it looks like a duck, walks like a duck and quacks like a duck, call it a duck." This approach avoids certain of the classification issues that would otherwise be associated with such measurements. Use of the attributes approach, however, continues to pose problems of interpretation, in light of the need to establish numerical thresholds whereby data obtained from the measurements can be evaluated to determine whether the attribute is present. In this paper we examine the foundations of the attributes approach and the steps used to determine appropriate attributes and thresholds, using examples from contemporary threat-reduction initiatives where possible. Implications for the detector technologies used in the measurements will be discussed, as will the characteristics of so-called information barriers intended to prevent inadvertent release of sensitive information during attributes measurements.

  2. Estimating discharge measurement uncertainty using the interpolated variance estimator

    Science.gov (United States)

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.

  3. Measures of uncertainty in power split systems

    OpenAIRE

    Özdemir, Serhan

    2007-01-01

    This paper discusses the overlooked uncertainty inherent in every transmission. The uncertainty aspect has been often, for the sake of clarity, ignored. Instead, mechanical transmissions have been characterized traditionally by their transmission efficacies. It is known that transmission localities are sources of power loss, depending on many factors, hence sources of uncertainty. Thus each transmission of power should not only be designated by a constant of efficiency but also by an expressi...

  4. Review of Prior U.S. Attribute Measurement Systems

    Energy Technology Data Exchange (ETDEWEB)

    White, G K

    2012-07-06

    Attribute Measurement Systems have been developed and demonstrated several times in the United States over the last decade or so: under the Trilateral Initiative (1996-2002), the FMTTD (Fissile Material Transparency Technology Demonstration, 2000), and NG-AMS (Next Generation Attribute Measurement System, 2005-2008). Each Attribute Measurement System has contributed to the growing body of knowledge regarding the use of such systems in warhead dismantlement and other arms control scenarios. The Trilateral Initiative, besides developing prototype hardware and software, introduced the topic to the international community. The 'trilateral' parties included the United States, the Russian Federation, and the International Atomic Energy Agency (IAEA). With the participation of a Russian delegation, the FMTTD demonstrated that measurements behind an information barrier are feasible while meeting host-party security requirements. The NG-AMS system explored the consequences of maximizing the use of Commercial Off-The-Shelf (COTS) equipment, which made construction easier but authentication harder. The 3rd Generation Attribute Measurement System (3G-AMS) will further the scope of previous systems by including additional attributes and more rigor in authentication.

  5. Using a Meniscus to Teach Uncertainty in Measurement

    Science.gov (United States)

    Backman, Philip

    2008-01-01

    I have found that students easily understand that a measurement cannot be exact, but they often seem to lack an understanding of why it is important to know "something" about the magnitude of the uncertainty. This tends to promote an attitude that almost any uncertainty value will do. Such indifference may exist because once an uncertainty is…

  6. Attribute measure recognition approach and its applications to emitter recognition

    Institute of Scientific and Technical Information of China (English)

    GUAN Xin; HE You; YI Xiao

    2005-01-01

    This paper studies the emitter recognition problem. A new recognition method based on attribute measure for emitter recognition is put forward, and the steps of the method are presented. The approach to determining the weight coefficients is also discussed. Moreover, considering the temporal redundancy of emitter information detected by a multi-sensor system, this new recognition method is generalized to multi-sensor systems, and a method based on the combination of attribute measure and D-S evidence theory is proposed. The implementation of D-S reasoning is always restricted by the basic probability assignment function; a way of constructing the basic probability assignment function based on attribute measure in a multi-sensor recognition system is presented. Examples of recognizing the emitter purpose and system are selected to demonstrate the proposed method. Experimental results show that the performance of this new method is accurate and effective.

  7. Adaptive framework for uncertainty analysis in electromagnetic field measurements.

    Science.gov (United States)

    Prieto, Javier; Alonso, Alonso A; de la Rosa, Ramón; Carrera, Albano

    2015-04-01

    Misinterpretation of uncertainty in the measurement of the electromagnetic field (EMF) strength may lead to an underestimation of exposure risk or an overestimation of required measurements. The Guide to the Expression of Uncertainty in Measurement (GUM) has internationally been adopted as a de facto standard for uncertainty assessment. However, analyses under such an approach commonly assume unrealistic static models or neglect relevant prior information, resulting in non-robust uncertainties. This study proposes a principled and systematic framework for uncertainty analysis that fuses information from current measurements and prior knowledge. Such a framework dynamically adapts to data by exploiting a likelihood function based on kernel mixtures and incorporates flexible choices of prior information by applying importance sampling. The validity of the proposed techniques is assessed from measurements performed with a broadband radiation meter and an isotropic field probe. The developed framework significantly outperforms the GUM approach, achieving a 28% reduction in measurement uncertainty.

  8. Measurement of Uncertainty for Vaporous Ethanol Concentration Analyzed by Intoxilyzer® 8000 Instruments.

    Science.gov (United States)

    Hwang, Rong-Jen; Rogers, Craig; Beltran, Jada; Razatos, Gerasimos; Avery, Jason

    2016-06-01

    Reporting a measurement uncertainty helps to determine the limitations of the method of analysis and aids in laboratory accreditation. This laboratory has conducted a study to estimate a reasonable uncertainty for the mass concentration of vaporous ethanol, in g/210 L, measured by the Intoxilyzer® 8000 breath analyzer. The uncertainty sources used were: gas chromatograph (GC) calibration adjustment, GC analytical, certified reference material, Intoxilyzer® 8000 calibration adjustment and Intoxilyzer® 8000 analytical. Standard uncertainties attributed to these sources were calculated and separated into proportional and constant standard uncertainties. The combined proportional and constant standard uncertainties were further combined into an expanded uncertainty, expressed both as a percentage and as a unit. To prevent any under-reporting of the expanded uncertainty, 0.10 g/210 L was chosen as the defining point for expressing the expanded uncertainty. For the Intoxilyzer® 8000, for all vaporous ethanol results at or above 0.10 g/210 L the expanded uncertainty will be reported as ±3.6% at a confidence level of 95% (k = 2); for vaporous ethanol results below 0.10 g/210 L, the expanded uncertainty will be reported as ±0.0036 g/210 L at a confidence level of 95% (k = 2).
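The reporting rule described above (proportional at or above 0.10 g/210 L, constant below) can be written directly. The figures are the ones quoted in the abstract, used here only for illustration:

```python
def expanded_uncertainty(result_g_per_210L,
                         rel_expanded=0.036,   # ±3.6% at k = 2, as reported
                         breakpoint=0.10):     # defining point, g/210 L
    """Expanded uncertainty (k = 2) per the scheme described above:
    proportional (a percentage of the result) at or above the breakpoint,
    constant (the breakpoint's absolute uncertainty) below it."""
    if result_g_per_210L >= breakpoint:
        return result_g_per_210L * rel_expanded   # in g/210 L
    return breakpoint * rel_expanded              # fixed 0.0036 g/210 L

print(expanded_uncertainty(0.15))  # ≈ 0.0054 g/210 L (±3.6% of 0.15)
print(expanded_uncertainty(0.08))  # ≈ 0.0036 g/210 L (constant floor)
```

Using the constant floor below the breakpoint is what prevents the reported uncertainty from shrinking unrealistically for low results, as the abstract explains.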

  9. The uncertainties of magnetic properties measurements of electrical sheet steel

    CERN Document Server

    Ahlers, H

    2000-01-01

    In this work, uncertainties in measurements of magnetic properties of Epstein and single-sheet samples have been determined according to the 'Guide to the Expression of Uncertainty in Measurement' [International Organization for Standardization (1993)]. They were calculated for the results at predicted values of the parameters, taking into account the non-linear dependences. The measurement results and the uncertainties are calculated simultaneously by a computer program.

  10. Uncertainty budget for optical coordinate measurements of circle diameter

    DEFF Research Database (Denmark)

    Morace, Renate Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2004-01-01

    An uncertainty analysis for circle diameter measurements using a coordinate measuring machine (CMM) equipped with an optical probe is presented in this paper. A mathematical model for data evaluation and uncertainty assessment was formulated in accordance with the Guide to the Expression of Uncertainty in Measurement (GUM). Various input quantities such as CCD camera resolution, influence of the illuminating system, CMM errors, etc. were considered in the model function and experimentally investigated.

  11. Uncertainty Measures in Ordered Information System Based on Approximation Operators

    Directory of Open Access Journals (Sweden)

    Bingjiao Fan

    2014-01-01

    This paper focuses on constructing uncertainty measures by the pure rough set approach in ordered information systems. Four types of definitions of lower and upper approximations and corresponding uncertainty measurement concepts, including accuracy, roughness, approximation quality, approximation accuracy, dependency degree, and importance degree, are investigated. Theoretical analysis indicates that all four types can be used to evaluate the uncertainty in ordered information systems; in particular, we find that the essence of the first type and the third type is the same. To interpret and help understand the approach, experiments on real-life data sets have been conducted to test the four types of uncertainty measures. The results show that these uncertainty measures can indeed measure the uncertainty in ordered information systems.
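Two of the uncertainty measures listed above, accuracy and roughness, reduce to simple cardinality ratios of the lower and upper approximations. A minimal sketch with an invented approximation pair:

```python
def accuracy(lower, upper):
    """Pawlak accuracy of approximation: |lower| / |upper|."""
    return len(lower) / len(upper)

def roughness(lower, upper):
    """Roughness = 1 - accuracy; larger means more uncertainty."""
    return 1.0 - accuracy(lower, upper)

lower = {1, 2}          # objects certainly in the target set
upper = {1, 2, 3, 4}    # objects possibly in the target set
print(accuracy(lower, upper), roughness(lower, upper))  # 0.5 0.5
```

In the ordered-information-system setting of the paper, the lower and upper approximations come from dominance-based approximation operators rather than classical equivalence classes, but the resulting measures have this same form.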

  12. International Target Values for Measurement Uncertainties in Nuclear Material Accountancy

    Institute of Scientific and Technical Information of China (English)

    LIU; Hong-bin; GAO; Qiang

    2012-01-01

    The IAEA published a revised version of the International Target Values (ITVs 2010) for Measurement Uncertainties in Safeguarding Nuclear Materials in 2010. The report proposes international target values for the measurement uncertainties of the routine measurement methods used in nuclear material accountancy.

  13. Evaluation of an attributive measurement system in the automotive industry

    Science.gov (United States)

    Simion, C.

    2016-08-01

    Measurement System Analysis (MSA) is a critical component of any quality improvement process. MSA is defined as an experimental and mathematical method of determining how much the variation within the measurement process contributes to overall process variability, and it falls into two categories: attribute and variable. Most problematic measurement system issues come from measuring attribute data, which are usually the result of human judgment (visual inspection). Because attributive measurement systems are often used in some manufacturing processes, their assessment is important to obtain confidence in the inspection process, to see where the problems are in order to eliminate them, and to guide process improvement. It was the aim of this paper to address such an issue, presenting a case study made in a local company from the Sibiu region supplying products for the automotive industry, specifically the bag (a technical textile component, i.e. the fabric) for the airbag module. Because defects are inherent in every manufacturing process, and because in the field of airbag systems a minor defect can influence performance and lives depend on the safety feature, a stringent visual inspection of defects in the bag material is required. The purpose of this attribute MSA was: to determine if all inspectors use the same criteria to distinguish "pass" from "fail" product (i.e. the fabric); to assess company inspection standards against the customer's requirements; to determine how well inspectors conform to themselves; to identify how inspectors conform to a "known master," which includes how often operators ship defective product and how often operators dispose of acceptable product; and to discover areas where training is required, procedures must be developed and standards are not available. The results were analyzed using MINITAB software with its module called Attribute Agreement Analysis. The conclusion was that the inspection process must

  14. Assessment of dose measurement uncertainty using RisøScan

    DEFF Research Database (Denmark)

    Helt-Hansen, J.; Miller, A.

    2006-01-01

    The dose measurement uncertainty of the dosimeter system RisoScan, office scanner and Riso B3 dosimeters has been assessed by comparison with spectrophotometer measurements of the same dosimeters. The reproducibility and the combined uncertainty were found to be approximately 2% and 4%, respectively.

  15. Measurement uncertainty of liquid chromatographic analyses visualized by Ishikawa diagrams.

    Science.gov (United States)

    Meyer, Veronika R

    2003-09-01

    Ishikawa, or cause-and-effect, diagrams help to visualize the parameters that influence a chromatographic analysis. They therefore facilitate setting up the uncertainty budget of the analysis, which can then be expressed in mathematical form. If the uncertainty is calculated as the Gaussian sum of all uncertainty parameters, it is necessary to quantitate them all, a task that is usually not practical. The other possible approach is to use the intermediate precision as a basis for the uncertainty calculation. In this case, it is at least necessary to consider the uncertainty of the purity of the reference material in addition to the precision data. The Ishikawa diagram is then very simple, and so is the uncertainty calculation. This simplicity, however, comes at the cost of losing information about the parameters that influence the measurement uncertainty.

  16. Relating confidence to measured information uncertainty in qualitative reasoning

    Energy Technology Data Exchange (ETDEWEB)

    Chavez, Gregory M [Los Alamos National Laboratory; Zerkle, David K [Los Alamos National Laboratory; Key, Brian P [Los Alamos National Laboratory; Shevitz, Daniel W [Los Alamos National Laboratory

    2010-10-07

    Qualitative reasoning makes use of qualitative assessments provided by subject matter experts to model factors such as security risk. Confidence in a result is important and useful when comparing competing results. Quantifying the confidence in an evidential reasoning result must be consistent and based on the available information. A novel method is proposed to relate confidence to the available information uncertainty in the result using fuzzy sets. Information uncertainty can be quantified through measures of non-specificity and conflict. Fuzzy values for confidence are established from information uncertainty values that lie between the measured minimum and maximum information uncertainty values.

  17. SLEAS: Supervised Learning using Entropy as Attribute Selection Measure

    Directory of Open Access Journals (Sweden)

    Kishor Kumar Reddy C

    2014-10-01

    Full Text Available There is emerging interest in scaling up the broadly used decision tree learning algorithms to huge datasets. Even though abundant diverse methodologies have been proposed, a fast tree-growing algorithm without a substantial decrease in accuracy or a substantial increase in space complexity is still needed. This paper aims at improving the performance of the SLIQ (Supervised Learning in Quest) decision tree algorithm for classification in data mining. In the present research, we adopted entropy as the attribute selection measure, which overcomes the problems faced with the Gini index. The classification accuracy of the proposed supervised learning using entropy as attribute selection measure (SLEAS) algorithm is compared with the existing SLIQ algorithm using twelve datasets taken from the UCI Machine Learning Repository, and the results show that SLEAS outperforms SLIQ. Further, the error rate is also computed, and the results clearly show that the SLEAS algorithm gives a lower error rate than the SLIQ decision tree.
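
    The entropy-versus-Gini choice at the heart of SLEAS can be illustrated with a minimal sketch; this is not the published algorithm, and the toy labels and split below are hypothetical:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, partitions):
    """Entropy reduction achieved by splitting `parent` into `partitions`."""
    n = len(parent)
    weighted = sum(len(p) / n * entropy(p) for p in partitions)
    return entropy(parent) - weighted

# Toy example: 8 records split on a hypothetical attribute value
parent = ["pass"] * 4 + ["fail"] * 4
split = [["pass", "pass", "pass", "fail"], ["pass", "fail", "fail", "fail"]]
print(round(entropy(parent), 3))                  # → 1.0 (balanced binary class)
print(round(information_gain(parent, split), 3))  # → 0.189
```

    A tree-growing algorithm evaluates such a gain for every candidate attribute and split point and keeps the best one; SLIQ performs the analogous ranking with the Gini index instead.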

  18. Dimensional measurements with submicrometer uncertainty in production environment

    DEFF Research Database (Denmark)

    De Chiffre, L.; Gudnason, M. M.; Madruga, D.

    2015-01-01

    The work concerns a laboratory investigation of a method to achieve dimensional measurements with submicrometer uncertainty under conditions that are typical of a production environment. The method involves the concurrent determination of dimensions and material properties from measurements carried out...

  19. Estimating the measurement uncertainty in forensic blood alcohol analysis.

    Science.gov (United States)

    Gullberg, Rod G

    2012-04-01

    For many reasons, forensic toxicologists are being asked to determine and report their measurement uncertainty in blood alcohol analysis. While understood conceptually, the elements and computations involved in determining measurement uncertainty are generally foreign to most forensic toxicologists. Several established and well-documented methods are available to determine and report the uncertainty in blood alcohol measurement. A straightforward bottom-up approach is presented that includes: (1) specifying the measurand, (2) identifying the major components of uncertainty, (3) quantifying the components, (4) statistically combining the components and (5) reporting the results. A hypothetical example is presented that employs reasonable estimates for forensic blood alcohol analysis assuming headspace gas chromatography. These computations are easily employed in spreadsheet programs as well. Determining and reporting measurement uncertainty is an important element in establishing fitness-for-purpose. Indeed, the demand for such computations and information from the forensic toxicologist will continue to increase.
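
    The combination and expansion steps (4) and (5) can be sketched in a few lines; the uncertainty components and their values below are illustrative placeholders, not figures from the paper:

```python
import math

# Hypothetical relative standard uncertainties (as fractions) for a
# headspace-GC blood alcohol result; the component names and values
# are illustrative assumptions only.
components = {
    "calibrator_reference": 0.004,
    "method_repeatability": 0.010,
    "calibration_curve": 0.006,
    "sampling_and_handling": 0.008,
}

mean_result = 0.082  # g/100 mL, hypothetical measurand value

# Step 4: combine independent components as a root-sum-of-squares
u_rel = math.sqrt(sum(u ** 2 for u in components.values()))
u_combined = u_rel * mean_result

# Step 5: expand with coverage factor k = 2 (~95 % coverage)
k = 2
U = k * u_combined
print(f"Result: {mean_result:.3f} ± {U:.3f} g/100 mL (k=2)")
```

    The same quadrature sum and coverage-factor multiplication are trivially reproduced in a spreadsheet, as the abstract notes.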

  20. Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Habte, A.; Sengupta, M.; Reda, I.; Andreas, A.; Konings, J.

    2014-11-01

    Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty in Measurement (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.

  1. Attributing runoff changes to climate variability and human activities: uncertainty analysis using four monthly water balance models

    Energy Technology Data Exchange (ETDEWEB)

    Li, Shuai; Xiong, Lihua; Li, Hong-Yi; Leung, L. Ruby; Demissie, Yonas

    2015-05-26

    Hydrological simulations to delineate the impacts of climate variability and human activities are subject to uncertainties in both the parameters and the structure of the hydrological models. To analyze the impact of these uncertainties on model performance and to yield more reliable simulation results, a global calibration and multimodel combination method that integrates the Shuffled Complex Evolution Metropolis (SCEM) and Bayesian Model Averaging (BMA) of four monthly water balance models was proposed. The method was applied to the Weihe River Basin (WRB), the largest tributary of the Yellow River, to determine the contribution of climate variability and human activities to runoff changes. The change point, which was used to determine the baseline period (1956-1990) and the human-impacted period (1991-2009), was derived using both the cumulative curve and Pettitt’s test. Results show that the combination method based on SCEM provides more skillful deterministic predictions than the best calibrated individual model, resulting in the smallest uncertainty interval of runoff changes attributed to climate variability and human activities. This combination methodology provides a practical and flexible tool for the attribution of runoff changes to climate variability and human activities by hydrological models.
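
    The BMA combination step can be illustrated with a minimal numerical sketch; the model predictions, predictive variances and weights below are invented for illustration and are not taken from the Weihe River study:

```python
# Hypothetical monthly runoff predictions (mm) from four water balance
# models, with BMA weights and per-model predictive variances.
predictions = [52.0, 48.0, 55.0, 50.0]
variances   = [9.0, 16.0, 12.0, 10.0]
weights     = [0.35, 0.15, 0.20, 0.30]   # posterior model weights, sum to 1

# BMA predictive mean is the weight-averaged model prediction
bma_mean = sum(w * m for w, m in zip(weights, predictions))

# BMA predictive variance = within-model variance + between-model spread,
# which is what narrows (or widens) the attribution uncertainty interval
bma_var = sum(w * (v + (m - bma_mean) ** 2)
              for w, m, v in zip(weights, predictions, variances))

print(f"BMA runoff: {bma_mean:.1f} mm, std {bma_var ** 0.5:.1f} mm")
```

    The between-model term makes the combined uncertainty honest about structural disagreement: if the four models diverge, the BMA variance grows even when each individual model is confident.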

  2. OPEN PUBLIC SPACE ATTRIBUTES AND CATEGORIES – COMPLEXITY AND MEASURABILITY

    Directory of Open Access Journals (Sweden)

    Ljiljana Čavić

    2014-12-01

    Full Text Available Within the field of architectural and urban research, this work addresses the complexity of contemporary public space, in both a conceptual and a concrete sense. It aims at systematizing spatial attributes and their categories and at discussing spatial complexity and measurability, all in order to reach a more comprehensive understanding, description and analysis of public space. Our aim is to improve the everyday usage of open public space, and we acknowledge users as its crucial factor. There are numerous investigations of the complex urban and architectural reality of public space that recognise the importance of users. However, we did not find any that holistically accounts for what users find essential in public space. Based on the incompleteness of existing approaches to open public space and the importance of users for its success, this paper proposes a user-orientated approach. Through an initial survey directed at users, we collected the most important aspects of public spaces as contemporary humans see them. The gathered data are analysed and coded into spatial attributes, whose role in the complexity and measurability of open public space is then discussed. The work results in an inventory of attributes that users find salient in public spaces. It does not discuss their qualitative values or their contribution to generating spatial realities. It aims to define them clearly so that any further logical argumentation on open space concerning users may be solidly constructed. Finally, through a categorisation of the attributes, it proposes the disciplinary levels necessary for the analysis of complex urban-architectural reality.

  3. Estimation of measurement uncertainty arising from manual sampling of fuels.

    Science.gov (United States)

    Theodorou, Dimitrios; Liapis, Nikolaos; Zannikos, Fanourios

    2013-02-15

    Sampling is an important part of any measurement process and is therefore recognized as an important contributor to the measurement uncertainty. A reliable estimation of the uncertainty arising from sampling of fuels leads to better control of the risks associated with decisions concerning whether product specifications are met or not. The present work describes and compares the results of three empirical statistical methodologies (classical ANOVA, robust ANOVA and range statistics) using data from a balanced experimental design, which includes duplicate samples analyzed in duplicate from 104 sampling targets (petroleum retail stations). These methodologies are used for the estimation of the uncertainty arising from the manual sampling of fuel (automotive diesel) and the subsequent sulfur mass content determination. The results of the three methodologies differ statistically, with the expanded uncertainty of sampling being in the range of 0.34-0.40 mg kg(-1) and the relative expanded uncertainty lying in the range of 4.8-5.1%, depending on the methodology used. The estimate of robust ANOVA (sampling expanded uncertainty of 0.34 mg kg(-1), or 4.8% in relative terms) is considered more reliable, because of the presence of outliers within the 104 datasets used for the calculations. Robust ANOVA, in contrast to classical ANOVA and range statistics, accommodates outlying values, lessening their effects on the produced estimates. The results of this work also show that, in the case of manual sampling of fuels, the main contributor to the whole measurement uncertainty is the analytical measurement uncertainty, with the sampling uncertainty accounting for only 29% of the total measurement uncertainty.
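
    The classical-ANOVA variant of such a duplicate design (two samples per target, each analyzed twice) can be sketched with synthetic data; the numbers are invented, not the fuel data, and robust ANOVA would additionally down-weight outliers:

```python
import statistics

# Duplicate design: per target, two samples, each analyzed in duplicate.
# data[target] = [[s1a, s1b], [s2a, s2b]]  (sulfur, mg/kg; synthetic)
data = [
    [[8.1, 8.3], [8.6, 8.4]],
    [[7.9, 8.0], [7.7, 8.1]],
    [[8.4, 8.2], [8.8, 8.5]],
]

# Analytical variance: pooled variance of the analysis duplicates
anal_vars = [statistics.variance(dup) for tgt in data for dup in tgt]
var_analysis = statistics.fmean(anal_vars)

# Between-sample variance from the sample means within each target
samp_vars = [statistics.variance([statistics.fmean(d) for d in tgt])
             for tgt in data]
var_between = statistics.fmean(samp_vars)

# Sampling variance: subtract the analytical contribution carried into
# each sample mean (a mean of 2 analyses carries var_analysis / 2)
var_sampling = max(var_between - var_analysis / 2, 0.0)

u_sampling = var_sampling ** 0.5
U_sampling = 2 * u_sampling  # expanded, coverage factor k = 2
print(f"u(sampling) = {u_sampling:.3f} mg/kg, U = {U_sampling:.3f} mg/kg")
```

    Comparing `var_sampling` against `var_analysis` reproduces the paper's final question of which component dominates the total measurement uncertainty.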

  4. Instrumental measurement of beer taste attributes using an electronic tongue

    Energy Technology Data Exchange (ETDEWEB)

    Rudnitskaya, Alisa, E-mail: alisa.rudnitskaya@gmail.com [Chemistry Department, University of Aveiro, Aveiro (Portugal); Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation); Polshin, Evgeny [Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation); BIOSYST/MeBioS, Catholic University of Leuven, W. De Croylaan 42, B-3001 Leuven (Belgium); Kirsanov, Dmitry [Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation); Lammertyn, Jeroen; Nicolai, Bart [BIOSYST/MeBioS, Catholic University of Leuven, W. De Croylaan 42, B-3001 Leuven (Belgium); Saison, Daan; Delvaux, Freddy R.; Delvaux, Filip [Centre for Malting and Brewing Sciences, Katholieke Universiteit Leuven, Heverelee (Belgium); Legin, Andrey [Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation)

    2009-07-30

    The present study deals with the evaluation of the electronic tongue multisensor system as an analytical tool for the rapid assessment of the taste and flavour of beer. Fifty samples of Belgian and Dutch beers of different types (lager beers, ales, wheat beers, etc.), which were characterized with respect to their sensory properties, were measured using the electronic tongue (ET) based on potentiometric chemical sensors developed in the Laboratory of Chemical Sensors of St. Petersburg University. The analysis of the sensory data and the calculation of the compromise average scores were made using STATIS. The beer samples were discriminated using both sensory panel and ET data based on PCA, and the two data sets were compared using Canonical Correlation Analysis. The ET data were related to the sensory beer attributes using Partial Least Squares regression for each attribute separately. Validation was done on a test set comprising one-third of all samples. The ET was capable of predicting 20 sensory attributes of beer with good precision, including bitter, sweet, sour, fruity, caramel, artificial, burnt, intensity and body.

  5. The NIST Simple Guide for Evaluating and Expressing Measurement Uncertainty

    Science.gov (United States)

    Possolo, Antonio

    2016-11-01

    NIST has recently published guidance on the evaluation and expression of the uncertainty of NIST measurement results [1, 2], supplementing but not replacing B. N. Taylor and C. E. Kuyatt's (1994) Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results (NIST Technical Note 1297) [3], which tracks closely the Guide to the expression of uncertainty in measurement (GUM) [4], originally published in 1995 by the Joint Committee for Guides in Metrology of the International Bureau of Weights and Measures (BIPM). The scope of this Simple Guide, however, is much broader than the scope of both NIST Technical Note 1297 and the GUM, because it attempts to address several of the uncertainty evaluation challenges that have arisen at NIST since the 1990s, for example to include molecular biology, greenhouse gases and climate science measurements, and forensic science. The Simple Guide also expands the scope of those two other guidance documents by recognizing observation equations (that is, statistical models) as bona fide measurement models. These models are indispensable to reduce data from interlaboratory studies, to combine measurement results for the same measurand obtained by different methods, and to characterize the uncertainty of calibration and analysis functions used in the measurement of force, temperature, or composition of gas mixtures. This presentation reviews the salient aspects of the Simple Guide, illustrates the use of models and methods for uncertainty evaluation not contemplated in the GUM, and also demonstrates the NIST Uncertainty Machine [5] and the NIST Consensus Builder, which are web-based applications accessible worldwide that facilitate evaluations of measurement uncertainty and the characterization of consensus values in interlaboratory studies.

  6. Measurement uncertainty of isotopologue fractions in fluxomics determined via mass spectrometry.

    Science.gov (United States)

    Guerrasio, R; Haberhauer-Troyer, C; Steiger, M; Sauer, M; Mattanovich, D; Koellensperger, G; Hann, S

    2013-06-01

    Metabolic flux analysis implies mass isotopomer distribution analysis and determination of mass isotopologue fractions (IFs) of proteinogenic amino acids of cell cultures. In this work, for the first time, this type of analysis is comprehensively investigated in terms of measurement uncertainty by calculating and comparing budgets for different mass spectrometric techniques. The calculations addressed amino acids of Pichia pastoris grown on 10% uniformly (13)C labeled glucose. Typically, such experiments revealed an enrichment of (13)C by at least one order of magnitude in all proteinogenic amino acids. Liquid chromatography-time-of-flight mass spectrometry (LC-TOFMS), liquid chromatography-tandem mass spectrometry (LC-MS/MS) and gas chromatography-mass spectrometry (GC-MS) analyses were performed. The samples were diluted to fit the linear dynamic range of the mass spectrometers used (10 μM amino acid concentration). The total combined uncertainties of IFs as well as the major uncertainty contributions affecting the IFs were determined for phenylalanine, which was selected as exemplary model compound. A bottom-up uncertainty propagation was performed according to Quantifying Uncertainty in Analytical Measurement and using the Monte Carlo method by considering all factors leading to an IF, i.e., the process of measurement and the addition of (13)C-glucose. Excellent relative expanded uncertainties (k = 1) of 0.32, 0.75, and 0.96% were obtained for an IF value of 0.7 by LC-MS/MS, GC-MS, and LC-TOFMS, respectively. The major source of uncertainty, with a relative contribution of 20-80% of the total uncertainty, was attributed to the signal intensity (absolute counts) uncertainty calculated according to Poisson counting statistics, regardless which of the mass spectrometry platforms was used. Uncertainty due to measurement repeatability was of importance in LC-MS/MS, showing a relative contribution up to 47% of the total uncertainty, whereas for GC-MS and LC
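
    The Monte Carlo propagation of Poisson counting uncertainty into an isotopologue fraction can be sketched as follows; the ion counts are hypothetical, chosen so that IF ≈ 0.7 as in the paper's phenylalanine example, and only the counting-statistics component is modelled:

```python
import random
import statistics

random.seed(42)

# Hypothetical ion counts for two isotopologues of one fragment;
# Poisson counting statistics gives u(N) = sqrt(N).
counts = {"M+0": 30000, "M+1": 70000}

def isotopologue_fraction(n0, n1):
    return n1 / (n0 + n1)

# Monte Carlo propagation (GUM Supplement 1 style): perturb each count
# by its Poisson standard uncertainty and recompute the fraction.
draws = []
for _ in range(20000):
    n0 = random.gauss(counts["M+0"], counts["M+0"] ** 0.5)
    n1 = random.gauss(counts["M+1"], counts["M+1"] ** 0.5)
    draws.append(isotopologue_fraction(n0, n1))

IF = statistics.fmean(draws)
u_IF = statistics.stdev(draws)
print(f"IF = {IF:.4f}, u(IF) = {u_IF:.5f} ({100 * u_IF / IF:.2f} % relative)")
```

    A full budget would add further inputs (labelled-glucose addition, repeatability) as extra perturbed variables inside the same loop.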

  7. Vector network analyzer (VNA) measurements and uncertainty assessment

    CERN Document Server

    Shoaib, Nosherwan

    2017-01-01

    This book describes vector network analyzer measurements and uncertainty assessments, particularly in waveguide test-set environments, in order to establish their compatibility to the International System of Units (SI) for accurate and reliable characterization of communication networks. It proposes a fully analytical approach to measurement uncertainty evaluation, while also highlighting the interaction and the linear propagation of different uncertainty sources to compute the final uncertainties associated with the measurements. The book subsequently discusses the dimensional characterization of waveguide standards and the quality of the vector network analyzer (VNA) calibration techniques. The book concludes with an in-depth description of the novel verification artefacts used to assess the performance of the VNAs. It offers a comprehensive reference guide for beginners to experts, in both academia and industry, whose work involves the field of network analysis, instrumentation and measurements.

  8. THE UNCERTAINTIES OF ENVIRONMENT'S PARAMETERS MEASUREMENTS AS TOOLS OF MEASUREMENT QUALITY IMPROVEMENT

    Directory of Open Access Journals (Sweden)

    Miroslav Badida

    2008-06-01

    Full Text Available Identification of the uncertainties of noise measurements alongside the declared measured values is unconditionally necessary and required by legislation. The uncertainty of a measurement expresses all errors that accrue during the measuring. By stating uncertainties, the measurer documents that the true value lies, with a certain probability, within the interval bounded by the measurement uncertainty. The paper deals with the methodology of uncertainty calculation for noise measurements in living and working environments, the metal processing industry and the building materials industry.

  9. Triangular and Trapezoidal Fuzzy State Estimation with Uncertainty on Measurements

    Directory of Open Access Journals (Sweden)

    Mohammad Sadeghi Sarcheshmah

    2012-01-01

    Full Text Available In this paper, a new method for uncertainty analysis in fuzzy state estimation is proposed. The uncertainty is expressed in the measurements. Uncertainties in measurements are modelled with different fuzzy membership functions (triangular and trapezoidal). To find the fuzzy distribution of any state variable, the problem is formulated as a constrained linear programming (LP) optimization. The viability of the proposed method is verified by comparing its results with those obtained from the weighted least squares (WLS) and the fuzzy state estimation (FSE) in the 6-bus system and in the IEEE 14- and 30-bus systems.
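
    Representing a measurement as a triangular fuzzy number can be sketched as below; only the membership function and its alpha-cuts are shown, not the paper's LP formulation of the state estimator, and the voltage figures are hypothetical:

```python
def triangular(x, a, m, b):
    """Membership of x in the triangular fuzzy number (a, m, b)."""
    if x <= a or x >= b:
        return 0.0
    return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

def alpha_cut(a, m, b, alpha):
    """Interval of values with membership >= alpha (0 < alpha <= 1)."""
    lo = a + alpha * (m - a)
    hi = b - alpha * (b - m)
    return lo, hi

# A hypothetical voltage measurement of 1.02 pu with a ±0.03 pu spread
a, m, b = 0.99, 1.02, 1.05
print(triangular(1.03, a, m, b))   # membership of the value 1.03
print(alpha_cut(a, m, b, 0.5))     # interval at alpha = 0.5
```

    Solving the constrained LP at each alpha level for the lower and upper bounds of a state variable is what yields its fuzzy distribution in the proposed method.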

  10. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
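
    The cross-validation idea can be sketched on synthetic data: fit a simple baseline model on most of the data, collect residuals on the held-out folds, and take their spread as the out-of-sample prediction uncertainty. The linear temperature model and the data below are illustrative assumptions, not the paper's 17-building method:

```python
import random
import statistics

random.seed(0)

# Synthetic interval data: energy use roughly linear in outdoor temperature
temps = [random.uniform(0, 30) for _ in range(200)]
energy = [50 + 2.5 * t + random.gauss(0, 5) for t in temps]

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

# 5-fold cross-validation: residuals on held-out folds estimate the
# uncertainty of baseline predictions on unseen periods.
k = 5
residuals = []
for fold in range(k):
    test = set(range(fold, len(temps), k))
    xs = [t for i, t in enumerate(temps) if i not in test]
    ys = [e for i, e in enumerate(energy) if i not in test]
    a, b = fit_line(xs, ys)
    residuals += [energy[i] - (a + b * temps[i]) for i in test]

u_pred = statistics.stdev(residuals)
print(f"Cross-validated baseline uncertainty: ±{u_pred:.1f} (1 sigma)")
```

    Comparing `u_pred` against the claimed savings of a retrofit is what turns a point estimate of savings into an actionable risk statement.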

  11. Measurement uncertainty of lactase-containing tablets analyzed with FTIR.

    Science.gov (United States)

    Paakkunainen, Maaret; Kohonen, Jarno; Reinikainen, Satu-Pia

    2014-01-01

    Uncertainty is one of the most critical aspects in the determination of measurement reliability. In order to ensure accurate measurements, results need to be traceable and uncertainty measurable. In this study, the homogeneity of FTIR samples is determined with a combination of variographic and multivariate approaches. An approach for the estimation of uncertainty within an individual sample, as well as within repeated samples, is introduced. FTIR samples containing two commercial pharmaceutical lactase products (LactaNON and Lactrase) are used as an example of the procedure. The results showed that the approach is suitable for the purpose. The sample pellets were quite homogeneous, since the total uncertainty of each pellet varied between 1.5% and 2.5%. The heterogeneity within a tablet strip was found to be dominant, as 15-20 tablets have to be analyzed in order to achieve a <5.0% expanded uncertainty level. The uncertainty arising from the FTIR instrument was <1.0%. The uncertainty estimates are computed directly from the FTIR spectra without any concentration information on the analyte.

  12. Nab: Measurement Principles, Apparatus and Uncertainties

    CERN Document Server

    Pocanic, D; Alonzi, L P; Baessler, S; Balascuta, S; Bowman, J D; Bychkov, M A; Byrne, J; Calarco, J R; Cianciolo, V; Crawford, C; Frlez, E; Gericke, M T; Greene, G L; Grzywacz, R K; Gudkov, V; Hersman, F W; Klein, A; Martín, J; Page, S A; Palladino, A; Penttila, S I; Rykaczewski, K P; Wilburn, W S; Young, A R; Young, G R

    2008-01-01

    The Nab collaboration will perform a precise measurement of 'a', the electron-neutrino correlation parameter, and 'b', the Fierz interference term in neutron beta decay, in the Fundamental Neutron Physics Beamline at the SNS, using a novel electric/magnetic field spectrometer and detector design. The experiment is aiming at the 10^{-3} accuracy level in (Delta a)/a, and will provide an independent measurement of lambda = G_A/G_V, the ratio of axial-vector to vector coupling constants of the nucleon. Nab also plans to perform the first ever measurement of 'b' in neutron decay, which will provide an independent limit on the tensor weak coupling.

  13. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Campos, E [Argonne National Laboratory; Sisterson, DL [Argonne National Laboratory

    2015-10-01

    The Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily-accessible, well-articulated estimate of ARM measurement uncertainty is needed.

  14. ANALYSIS OF UNCERTAINTY MEASUREMENT IN ATOMIC ABSORPTION SPECTROPHOTOMETER

    Directory of Open Access Journals (Sweden)

    NEHA S.MAHAJAN

    2012-05-01

    Full Text Available A spectrophotometer is a photometer that can measure intensity as a function of the light source wavelength. The important features of spectrophotometers are the spectral bandwidth and the linear range of absorbance or reflectance measurement. Atomic absorption spectroscopy (AAS) is a very common technique for detecting the chemical composition of elements in metals and their alloys. It is very reliable and simple to use. The quality (accuracy) of a result depends on the uncertainty of the measured value of the test; if the uncertainty of measurement is large, there may be doubt about the final result. The final result of an atomic absorption spectrophotometer is affected by a number of parameters, which must be taken into account when calculating the final result. This paper deals with the methodology of evaluating the measurement uncertainty of chemical composition using AAS. The study is useful for assessing the quality of measurement equipment and of the testing process.

  15. Optical radiation measurements for photovoltaic applications: instrumentation uncertainty and performance

    Science.gov (United States)

    Myers, Daryl R.; Reda, Ibrahim; Wilcox, Stephen; Andreas, Afshin

    2004-11-01

    Evaluating the performance of photovoltaic (PV) devices in the laboratory and in the field requires accurate knowledge of the optical radiation stimulating the devices. We briefly describe the radiometric instrumentation used for characterizing broadband and spectral irradiance for PV applications. Spectral radiometric measurement systems are used to characterize solar simulators (continuous and pulsed, or flash, sources) and natural sunlight. Broadband radiometers (pyranometers and pyrheliometers) are used to assess solar resources for renewable applications and to develop and validate broadband solar radiation models for estimating system performance. We describe the sources and magnitudes of uncertainty associated with calibrations and measurements using these instruments. The basic calibration and measurement uncertainty associated with this instrumentation are based on the guidelines described in the International Standards Organization (ISO) and Bureau International des Poids et Mesures (BIPM) Guide to the Expression of Uncertainty in Measurement. The additional contributions to uncertainty arising from the uncertainty in characterization functions and correction schemes are discussed and illustrated. Finally, empirical comparisons of several solar radiometer instrumentation sets illustrate that the best measurement accuracy for broadband radiation is on the order of 3%, and the spectrally dependent uncertainty for spectroradiometer systems ranges from 4% in the visible to 8-10% in the ultraviolet and infrared.

  16. Measurement uncertainty in pharmaceutical analysis and its application

    Institute of Scientific and Technical Information of China (English)

    Marcus Augusto Lyrio Traple; Alessandro Morais Saviano; Fabiane Lacerda Francisco; Felipe Rebello Lourençon

    2014-01-01

    The measurement uncertainty provides complete information about an analytical result. This is very important because several decisions of compliance or non-compliance are based on analytical results in pharmaceutical industries. The aim of this work was to evaluate and discuss the estimation of uncertainty in pharmaceutical analysis. The uncertainty is a useful tool in the assessment of compliance or non-compliance of in-process and final pharmaceutical products as well as in the assessment of pharmaceutical equivalence and stability study of drug products.

  17. Rough Operations and Uncertainty Measures on MV-Algebras

    OpenAIRE

    Maosen Xie

    2014-01-01

    We define a lower approximate operation and an upper approximate operation based on a partition on MV-algebras and discuss their properties. We then introduce a belief measure and a plausibility measure on MV-algebras and investigate the relationship between rough operations and uncertainty measures.

  18. Chapter 12: Uncertainty in measured water quality data

    Science.gov (United States)

    Water quality assessment, management, and regulation continue to rely on measured water quality data, in spite of advanced modeling capabilities. However, very little information is available on one very important component of the measured data - the inherent measurement uncertainty. Although all ...

  19. Teaching Scientific Measurement and Uncertainty in Elementary School

    Science.gov (United States)

    Munier, Valérie; Merle, Hélène; Brehelin, Danie

    2013-01-01

    The concept of measurement is fundamental in science. In order to be meaningful, the value of a measurement must be given with a certain level of uncertainty. In this paper we try to identify and develop the reasoning of young French pupils about measurement variability. In France, official instructions for elementary school thus argue for having…

  20. VSHOT measurement uncertainty and sensitivity study

    Energy Technology Data Exchange (ETDEWEB)

    Jones, S.A.; Gruetzner, J.K.; Houser, R.M.; Edgar, R.M. [Sandia National Labs., Albuquerque, NM (United States); Wendelin, T.J. [National Renewable Energy Lab., Golden, CO (United States)

    1997-08-01

    The Video Scanning Hartmann Optical Tester (VSHOT) is a slope-measuring tool for large, imprecise reflectors. It is a laser ray-trace device developed to measure the optical quality of point-focus solar concentrating mirrors. A unique tool was needed because of the diverse geometry and very large size of solar concentrators, plus their large optical errors. To study the accuracy of the VSHOT as well as its sensitivity to changes in test setup variables, a series of experiments was performed with a very precise, astronomical-grade mirror. The slope errors of the reference mirror were much smaller than the resolution of the VSHOT, so that any measured slope errors were caused by the instrument itself rather than the mirror. The VSHOT exceeded its accuracy goals by achieving about ±0.5% (68% confidence) error in the determination of focal length and ±0.1 mrad (68% confidence) error in the determination of RMS slope error. Displacement of the test mirror from the optical axis was the largest source of measured errors.

  1. Uncertainty Analysis in Humidity Measurements by the Psychrometer Method

    Science.gov (United States)

    Chen, Jiunyuan; Chen, Chiachung

    2017-01-01

The most common and inexpensive indirect technique for measuring relative humidity is the psychrometer, based on a dry and a wet temperature sensor. In this study, the measurement uncertainty of relative humidity obtained by this indirect method was evaluated using several empirical equations for calculating relative humidity. Among the six equations tested, the Penman equation had the best predictive ability for the dry bulb temperature range of 15–50 °C. At a fixed dry bulb temperature, an increase in the wet bulb depression increased the error. A new equation for the psychrometer constant was established by regression analysis; it can be computed with a calculator. The average predictive error of relative humidity with this new equation was <0.1%. The effect of the accuracy of the dry and wet bulb temperatures on the measurement uncertainty of relative humidity was evaluated, and numeric values of the measurement uncertainty were determined for various conditions. The uncertainty of the wet bulb temperature was the main contributor to the RH measurement uncertainty. PMID:28216599
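The psychrometric calculation underlying this record can be sketched as follows. Note that the Magnus saturation-vapour-pressure approximation and the generic psychrometer constant used here are stand-ins for illustration, not the new equation fitted in the paper:

```python
import math

def sat_vapour_pressure(t_c):
    """Magnus approximation for saturation vapour pressure (hPa) over water."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(t_dry, t_wet, pressure_hpa=1013.25, gamma=0.000662):
    """Estimate RH (%) from dry and wet bulb temperatures (deg C).

    gamma is a generic psychrometer constant (1/K); the paper fits its
    own equation for this constant, which is not reproduced here.
    """
    e_sat_wet = sat_vapour_pressure(t_wet)
    # Psychrometric equation: actual vapour pressure from wet bulb depression
    e_act = e_sat_wet - gamma * pressure_hpa * (t_dry - t_wet)
    return 100.0 * e_act / sat_vapour_pressure(t_dry)

rh = relative_humidity(25.0, 20.0)   # dry 25 deg C, wet 20 deg C
```

The wet bulb depression enters linearly, which is why its uncertainty dominates the RH uncertainty as the abstract reports.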

  2. [Estimation of uncertainty of measurement in clinical biochemistry].

    Science.gov (United States)

    Enea, Maria; Hristodorescu, Cristina; Schiriac, Corina; Morariu, Dana; Mutiu, Tr; Dumitriu, Irina; Gurzu, B

    2009-01-01

The uncertainty of measurement (UM), or measurement uncertainty, is defined as the parameter associated with the result of a measurement. Repeated measurements of the same analyte usually yield slightly different results, sometimes a little higher, sometimes a little lower, because the result of a measurement depends not only on the analyte itself but also on a number of error factors that cast doubt on the estimate. The uncertainty of measurement is the quantitative, mathematical expression of this doubt: a range of measured values that is likely to enclose the true value of the measurand. Calculation of UM for all types of laboratories is governed by the ISO Guide to the Expression of Uncertainty in Measurement (abbreviated GUM) and SR ENV 13005:2003 (both recognized by European Accreditation). Even though the GUM rules for UM estimation are very strict, reporting the result together with the UM increases the confidence of customers (patients or physicians). In this study the authors present the possibilities of assessing UM in laboratories in our country by using data obtained during method validation and during internal and external quality control.

  3. Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.

    Science.gov (United States)

    Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller

    2015-01-01

An approach is proposed for estimating the uncertainty in measurement that considers the individual sources related to the different steps of the method under evaluation, as well as the uncertainties estimated from the validation data, for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS). The method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance, and has been accredited under the ISO/IEC 17025 norm since 2010. The estimate of the uncertainty in measurement was based on six sources of uncertainty for mercury determination in seafood by TDA AAS, following the validation process: linear least-squares regression, repeatability, intermediate precision, correction factor of the analytical curve, sample mass, and standard reference solution. Those that most influenced the uncertainty in measurement were sample mass, repeatability, intermediate precision and the calibration curve. The resulting estimate of the uncertainty in measurement was 13.39%, which complies with European Regulation EC 836/2011. This figure is a very realistic estimate of routine conditions, since it fairly encompasses the dispersion between the value attributed to the sample and the value measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on the calibration curve, recovery and precision), together with the variation in sample mass, can offer a proper estimate of the uncertainty in measurement.
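The usual way to combine independent validation-derived components like those listed is root-sum-of-squares followed by a coverage factor. A minimal sketch; the six component names follow the abstract, but the numeric values are purely illustrative:

```python
import math

# Illustrative relative standard uncertainties (%), one per source
# named in the abstract; the paper's actual values are not reproduced.
components = {
    "linear_least_squares_regression": 3.0,
    "repeatability": 2.5,
    "intermediate_precision": 2.0,
    "curve_correction_factor": 1.5,
    "sample_mass": 1.0,
    "standard_reference_solution": 0.5,
}

# Root-sum-of-squares combination of uncorrelated relative components
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
U_expanded = 2 * u_combined  # coverage factor k = 2 (~95 % confidence)
```

Reporting the expanded value with k = 2 assumes an approximately normal combined distribution.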

  4. Comparison of two different methods for the uncertainty estimation of circle diameter measurements using an optical coordinate measuring machine

    DEFF Research Database (Denmark)

    Morace, Renata Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2005-01-01

    This paper deals with the uncertainty estimation of measurements performed on optical coordinate measuring machines (CMMs). Two different methods were used to assess the uncertainty of circle diameter measurements using an optical CMM: the sensitivity analysis developing an uncertainty budget...

  5. Ten methods for calculating the uncertainty of measurement.

    Science.gov (United States)

    Wallace, Jack

    2010-12-01

While forensic laboratories are coming under increasing pressure to provide meaningful estimates of measurement uncertainty, there has been little discussion of this topic in the literature. This article summarizes ten bases for estimating this parameter: (1) proficiency tests; (2) readability limits; (3) independent reference materials; (4) operational limits applied during calibration; (5) expert judgment; (6) precision control samples without and (7) with contributions from extramural sources of error; (8) error budgets; (9) historical performance; and (10) ruggedness tests. Based on the assumptions underlying each approach, the forensic community will need to apply a variety of discipline-specific approaches to arrive at satisfactory estimates of measurement uncertainty.
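Method (6), precision control samples, is commonly implemented as a pooled within-batch standard deviation. A minimal sketch with hypothetical quality-control data:

```python
import statistics

def pooled_sd(batches):
    """Pool within-batch standard deviations, weighting by degrees
    of freedom, to estimate precision from control-sample data."""
    num = sum((len(b) - 1) * statistics.variance(b) for b in batches)
    den = sum(len(b) - 1 for b in batches)
    return (num / den) ** 0.5

# Hypothetical control-sample results for one material, separate runs
batches = [
    [10.1, 10.3, 9.9, 10.2],
    [10.0, 10.4, 10.1],
    [9.8, 10.2, 10.0, 10.1],
]
u = pooled_sd(batches)  # standard uncertainty from precision data
U = 2 * u               # expanded uncertainty, coverage factor k = 2
```

Variant (7) would additionally fold in between-laboratory or other extramural components before expansion.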

  6. Measuring uncertainty by extracting fuzzy rules using rough sets and extracting fuzzy rules under uncertainty and measuring definability using rough sets

    Science.gov (United States)

    Worm, Jeffrey A.; Culas, Donald E.

    1991-01-01

    Computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. This paper examines the concepts of statistical analysis, the Dempster-Shafer theory, rough set theory, and fuzzy set theory to solve this problem. The fundamentals of these theories are combined to provide the possible optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules a corresponding measure of how much we believe these rules is constructed. From this, the idea of how much a fuzzy diagnosis is definable in terms of its fuzzy attributes is studied.

  7. Attributes for Measuring Equity and Excellence in District Operation.

    Science.gov (United States)

    DeMoulin, Donald F.; Guyton, John W.

In the quest for excellence, school districts have a variety of indicators or attributes available by which to gauge their progress. This model, used by the Equity and Excellence Research school districts in Mississippi, monitors achievement in relation to educational excellence. Team members established a list of attributes and various means of…

  8. Measurement uncertainty evaluation of conicity error inspected on CMM

    Science.gov (United States)

    Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang

    2016-01-01

The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately directly influences assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), an error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated using quasi-random sequences and two kinds of affinities are calculated. Then, antibody clones are generated and self-adaptively mutated so as to maintain diversity; similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. Cone parts were machined on a CK6140 lathe and measured on a Miracle NC 454 coordinate measuring machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software and the evaluation accuracy improves significantly.
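An adaptive Monte Carlo loop in the spirit of GUM Supplement 1 can be sketched as follows; the toy product model and the input distributions below are illustrative stand-ins, not the paper's conicity-error model:

```python
import random
import statistics

def model(x, y):
    """Toy measurement model; stands in for the conicity-error model."""
    return x * y

def adaptive_mc(n_block=10_000, tol=2e-3, max_blocks=40, seed=1):
    """Adaptive Monte Carlo (GUM Supplement 1 style, simplified):
    add blocks of trials until the estimate of the mean stabilizes
    within a chosen numerical tolerance."""
    rng = random.Random(seed)
    block_means, block_sds = [], []
    while len(block_means) < max_blocks:
        ys = [model(rng.gauss(2.0, 0.1), rng.gauss(3.0, 0.2))
              for _ in range(n_block)]
        block_means.append(statistics.fmean(ys))
        block_sds.append(statistics.stdev(ys))
        if len(block_means) >= 2:
            # Standard error of the running mean across blocks
            s = statistics.stdev(block_means) / len(block_means) ** 0.5
            if 2 * s < tol:   # numerical tolerance reached: stop adding trials
                break
    return statistics.fmean(block_means), statistics.fmean(block_sds)

mean, u = adaptive_mc()   # estimate and its standard uncertainty
```

The key idea mirrored from the abstract is that the trial count is not fixed in advance but grown until the numerical quality of the result is under control.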

  9. Uncertainty in Terahertz Time-Domain Spectroscopy Measurement of Liquids

    Science.gov (United States)

    Yang, Fei; Liu, Liping; Song, Maojiang; Han, Feng; Shen, Li; Hu, Pengfei; Zhang, Fang

    2017-02-01

    Terahertz time-domain spectroscopy (THz-TDS) is a significant technique for characterizing materials as it allows fast and broadband measurement of optical constants in the THz regime. The measurement precision of the constants is highly influenced by the complicated measurement procedure and data processing. Taking THz transmission measurement of liquids into account, the sources of error existing in THz-TDS process are identified. The contributions of each source to the uncertainty of optical constants in THz-TDS process are formulated, with particular emphasis on the effect of multilayer reflections and plane wave assumption. As a consequence, an analytical model is proposed for uncertainty evaluation in a THz-TDS measurement of liquids. An actual experiment with a Di 2-Ethyl Hexyl Phthalate (DEHP) sample is carried out to show that the proposed model could be a basis to evaluate the measurement precision of optical constants of liquids.
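Under the plane-wave, single-pass simplifications whose error contributions the paper analyses, the standard extraction of optical constants from a THz transmission measurement looks roughly like this (a sketch that ignores the multilayer-reflection terms):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def optical_constants(freq_hz, phase_diff_rad, amp_ratio, thickness_m):
    """Extract refractive index n and extinction coefficient kappa from
    a THz-TDS transmission measurement, assuming a plane wave, a single
    pass, and no multiple (etalon) reflections -- exactly the
    simplifications whose uncertainty the paper quantifies."""
    omega = 2 * math.pi * freq_hz
    # Real index from the sample-vs-reference phase difference
    n = 1.0 + phase_diff_rad * C / (omega * thickness_m)
    # Fresnel transmission factor for the two air/sample interfaces
    fresnel = 4 * n / (n + 1) ** 2
    # Extinction coefficient from the residual amplitude attenuation
    kappa = -(C / (omega * thickness_m)) * math.log(amp_ratio / fresnel)
    return n, kappa
```

For a lossless sample the measured amplitude ratio equals the Fresnel factor, so kappa comes out zero.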

  10. Measurement uncertainty analysis on laser tracker combined with articulated CMM

    Science.gov (United States)

    Zhao, Hui-ning; Yu, Lian-dong; Du, Yun; Zhang, Hai-yan

    2013-10-01

Combined measurement technology plays an increasingly important role in digitalized assembly. This paper introduces a combined measurement system consisting of a laser tracker and a FACMM, with applications in the inspection of the position of the inner parts of a large-scale device. When these measurement instruments are combined, the resulting coordinate data set contains uncertainties that are a function of the base data sets and of complex interactions between the measurement sets. Taking into account the characteristics of the laser tracker and the Flexible Articulated Coordinate Measuring Machine (FACMM), the Monte Carlo simulation method is employed in the uncertainty evaluation of combined measurement systems. A case study is given to demonstrate the practical applications of this research.
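Monte Carlo propagation through a frame registration, as described above, can be sketched in 2-D as follows; all standard deviations and registration parameters here are invented for illustration:

```python
import math
import random

def transform(point, angle, tx, ty):
    """2-D rigid-body transform from the FACMM frame to the tracker frame."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y + tx, s * x + c * y + ty)

def combined_uncertainty(point, n=20_000, seed=7):
    """Monte Carlo propagation: perturb both the local measurement and
    the frame-registration parameters, then take empirical statistics."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        p = (point[0] + rng.gauss(0, 0.01), point[1] + rng.gauss(0, 0.01))
        q = transform(p,
                      rng.gauss(0.3, 1e-4),    # registration angle (rad)
                      rng.gauss(100.0, 0.02),  # translation x (mm)
                      rng.gauss(50.0, 0.02))   # translation y (mm)
        xs.append(q[0])
        ys.append(q[1])
    mx, my = sum(xs) / n, sum(ys) / n
    sx = (sum((v - mx) ** 2 for v in xs) / (n - 1)) ** 0.5
    sy = (sum((v - my) ** 2 for v in ys) / (n - 1)) ** 0.5
    return (mx, my), (sx, sy)

(centre, sigma) = combined_uncertainty((200.0, 0.0))
```

The resulting sigmas capture the interaction between the local measurement error and the registration error, which grows with the lever arm from the frame origin.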

  11. Evaluating the uncertainty of input quantities in measurement models

    Science.gov (United States)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in

  12. A Method to Estimate Uncertainty in Radiometric Measurement Using the Guide to the Expression of Uncertainty in Measurement (GUM) Method; NREL (National Renewable Energy Laboratory)

    Energy Technology Data Exchange (ETDEWEB)

    Habte, A.; Sengupta, M.; Reda, I.

    2015-03-01

Radiometric data with known and traceable uncertainty are essential for climate change studies to better understand cloud-radiation interactions and the earth radiation budget. Further, adopting a known and traceable method of estimating uncertainty with respect to SI ensures that the uncertainties quoted for radiometric measurements can be compared based on documented methods of derivation. Statements about the overall measurement uncertainty can only be made on an individual basis, taking all relevant factors into account. This poster provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements from radiometers. The approach follows the Guide to the Expression of Uncertainty in Measurement (GUM).
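The GUM law of propagation that this method follows can be sketched generically with numerical sensitivity coefficients; the responsivity model and the numbers below are illustrative assumptions, not NREL calibration data:

```python
def combined_uncertainty(f, x, u):
    """GUM law of propagation for uncorrelated inputs:
    u_c = sqrt(sum_i (df/dx_i)^2 * u_i^2),
    with sensitivity coefficients taken by central differences."""
    uc2 = 0.0
    for i in range(len(x)):
        h = abs(x[i]) * 1e-7 + 1e-12     # central-difference step
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        ci = (f(xp) - f(xm)) / (2 * h)   # sensitivity coefficient c_i
        uc2 += (ci * u[i]) ** 2
    return uc2 ** 0.5

# Illustrative responsivity model R = V / E (thermopile voltage over
# irradiance); the values and uncertainties are invented.
resp = lambda v: v[0] / v[1]
u_c = combined_uncertainty(resp, [8.5e-3, 1000.0], [2e-6, 4.0])
U = 2 * u_c   # expanded uncertainty with coverage factor k = 2
```

Quoting U with its coverage factor is what makes the radiometric uncertainty statement comparable across laboratories.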

  13. Reliability and Validity of a Measure of Preschool Teachers' Attributions for Disruptive Behavior

    Science.gov (United States)

    Carter, Lauren M.; Williford, Amanda P.; LoCasale-Crouch, Jennifer

    2014-01-01

    Research Findings: This study examined the quality of teacher attributions for child disruptive behavior using a new measure, the Preschool Teaching Attributions measure. A sample of 153 early childhood teachers and 432 children participated. All teachers completed the behavior attributions measure, as well as measures regarding demographics,…

  14. Uncertainty of calorimeter measurements at NREL's high flux solar furnace

    Science.gov (United States)

    Bingham, C. E.

    1991-12-01

The uncertainties of the calorimeter and concentration measurements at the High Flux Solar Furnace (HFSF) at the National Renewable Energy Laboratory (NREL) are discussed. Two calorimeter types have been used to date. One is an array of seven commercially available circular foil calorimeters (gardon or heat flux gages) for primary concentrator peak flux (up to 250 W/sq cm). The second is a cold-water calorimeter designed and built by the University of Chicago to measure the average exit power of the reflective compound parabolic secondary concentrator used at the HFSF (over 3.3 kW across a 1.6 sq cm exit aperture, corresponding to a flux of about 2 kW/sq cm). This paper discusses the uncertainties of the calorimeter and pyrheliometer measurements and the resulting concentration calculations. The measurement uncertainty analysis is performed according to the ASME/ANSI standard PTC 19.1 (1985). Random and bias errors for each portion of the measurement are analyzed. The results show that as either the power or the flux is reduced, the uncertainties increase. Another calorimeter is being designed for a new, refractive secondary which will use a refractive material to produce a higher average flux (5 kW/sq cm) than the reflective secondary. The new calorimeter will use a time derivative of the fluid temperature as a key measurement of the average power out of the secondary. A description of this calorimeter and test procedure is also presented, along with a pre-test estimate of major sources of uncertainty.

  15. Uncertainty of calorimeter measurements at NREL's high flux solar furnace

    Energy Technology Data Exchange (ETDEWEB)

    Bingham, C.E.

    1991-12-01

The uncertainties of the calorimeter and concentration measurements at the High Flux Solar Furnace (HFSF) at the National Renewable Energy Laboratory (NREL) are discussed. Two calorimeter types have been used to date. One is an array of seven commercially available circular foil calorimeters (gardon or heat flux gages) for primary concentrator peak flux (up to 250 W/cm{sup 2}). The second is a cold-water calorimeter designed and built by the University of Chicago to measure the average exit power of the reflective compound parabolic secondary concentrator used at the HFSF (over 3.3 kW across a 1.6 cm{sup 2} exit aperture, corresponding to a flux of about 2 kW/cm{sup 2}). This paper discusses the uncertainties of the calorimeter and pyrheliometer measurements and the resulting concentration calculations. The measurement uncertainty analysis is performed according to the ASME/ANSI standard PTC 19.1 (1985). Random and bias errors for each portion of the measurement are analyzed. The results show that as either the power or the flux is reduced, the uncertainties increase. Another calorimeter is being designed for a new, refractive secondary which will use a refractive material to produce a higher average flux (5 kW/cm{sup 2}) than the reflective secondary. The new calorimeter will use a time derivative of the fluid temperature as a key measurement of the average power out of the secondary. A description of this calorimeter and test procedure is also presented, along with a pre-test estimate of major sources of uncertainty. 8 refs., 4 figs., 3 tabs.
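The cold-water calorimeter measurement and a PTC 19.1-style combination of random and bias errors can be sketched as follows; the flow rate, temperature rise, and relative error terms are invented for illustration:

```python
def calorimeter_power(m_dot, cp, dT):
    """Cold-water calorimeter: absorbed power (W) = m_dot * cp * dT,
    from mass flow (kg/s), specific heat (J/kg/K), temperature rise (K)."""
    return m_dot * cp * dT

def ptc_uncertainty(bias_rel, precision_rel, t95=2.0):
    """ASME/ANSI PTC 19.1 style RSS combination of relative bias limits
    and random (precision) indices, the latter scaled by t95."""
    rss = (sum(b ** 2 for b in bias_rel)
           + (t95 ** 2) * sum(s ** 2 for s in precision_rel))
    return rss ** 0.5

P = calorimeter_power(0.05, 4186.0, 15.0)   # roughly 3.1 kW absorbed
# Illustrative relative bias (flow, cp) and precision (flow, dT) terms
U_rel = ptc_uncertainty([0.005, 0.002], [0.004, 0.003])
```

As the abstract notes, a smaller dT at reduced power inflates the relative precision terms, so the combined uncertainty grows as power or flux drops.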

  16. Measurement uncertainty in colour characterization of printed textile materials

    Directory of Open Access Journals (Sweden)

    Neda Milić

    2011-11-01

Full Text Available The uncertainty of spectrophotometric measurement of printed textile materials is one of the major unsolved technical problems in textile colourimetry today. Textile manufacturers often try to maintain colour-difference tolerances which are within the range of, or even less than, the uncertainty of the measurement system controlling them. In this paper, two commercial spectrophotometers with different measuring geometries (GretagMacbeth Eye-One Pro with 45°/0° geometry and ChinSpec HP200 with d/8° geometry) were comparatively investigated in terms of measurement uncertainty in the colour characterization of textile products. Results of the study indicate that, despite the different measuring geometries, the instruments had similar measurement repeatability behaviour (repeatability of readings from different parts of the same sample) for the digitally printed polyester materials used. The material preparation method (whether the materials were triple folded, placed on a black backing or on a white backing) had an important influence on measurement variability. On the other hand, the instruments showed differences concerning inter-model agreement. Although this difference was not confirmed as significant by visual assessment, observers evaluated the measurement readings from the Eye-One Pro spectrophotometer as a more accurate colour appearance characterization of the textile materials.

  17. BOOK REVIEW: Evaluating the Measurement Uncertainty: Fundamentals and practical guidance

    Science.gov (United States)

    Lira, Ignacio

    2003-08-01

    Evaluating the Measurement Uncertainty is a book written for anyone who makes and reports measurements. It attempts to fill the gaps in the ISO Guide to the Expression of Uncertainty in Measurement, or the GUM, and does a pretty thorough job. The GUM was written with the intent of being applicable by all metrologists, from the shop floor to the National Metrology Institute laboratory; however, the GUM has often been criticized for its lack of user-friendliness because it is primarily filled with statements, but with little explanation. Evaluating the Measurement Uncertainty gives lots of explanations. It is well written and makes use of many good figures and numerical examples. Also important, this book is written by a metrologist from a National Metrology Institute, and therefore up-to-date ISO rules, style conventions and definitions are correctly used and supported throughout. The author sticks very closely to the GUM in topical theme and with frequent reference, so readers who have not read GUM cover-to-cover may feel as if they are missing something. The first chapter consists of a reprinted lecture by T J Quinn, Director of the Bureau International des Poids et Mesures (BIPM), on the role of metrology in today's world. It is an interesting and informative essay that clearly outlines the importance of metrology in our modern society, and why accurate measurement capability, and by definition uncertainty evaluation, should be so important. Particularly interesting is the section on the need for accuracy rather than simply reproducibility. Evaluating the Measurement Uncertainty then begins at the beginning, with basic concepts and definitions. The third chapter carefully introduces the concept of standard uncertainty and includes many derivations and discussion of probability density functions. The author also touches on Monte Carlo methods, calibration correction quantities, acceptance intervals or guardbanding, and many other interesting cases. 
The book goes

  18. Therapeutic magnetic microcarriers characterization by measuring magnetophoretic attributes

    Science.gov (United States)

    Vidal Ibacache, Guillermo

Micro/nano robots are considered a promising approach to conducting minimally invasive interventions. We have proposed to embed magnetic nanoparticles in therapeutic or diagnostic agents in order to control them magnetically. A modified clinical Magnetic Resonance Imaging (MRI) scanner is used to provide the driving force that allows these magnetically embedded microcarriers to navigate the human vascular network. This method has been validated in previous research works using specific Magnetic Resonance (MR) gradient sequences. Magnetophoresis is the term used to describe the fact that a magnetic particle changes its trajectory under the influence of a magnetic force while being carried by a fluid flow. This movement depends on the particle's magnetic characteristics, the particle's geometric shape, the fluid flow's attributes and other factors. In our proposed method, magnetic microcarriers can be produced in several different ways, and so their responses to the same magnetic force and fluid flow conditions will differ. The outcome of a therapeutic treatment using our method depends on the adequate selection of the therapeutic and/or diagnostic agents to be used. The selected therapeutic and/or diagnostic magnetic microcarrier also influences the selection of the MR gradient sequence that best fits a given treatment. This master's thesis presents the design of a device intended to assess the magnetophoretic properties of magnetic therapeutic microcarriers and/or diagnostic agents. Such characterization is essential for determining the optimal sequences of magnetic gradients to deflect their trajectory through relatively complex vascular networks in order to reach a pre-defined target. A microfluidic device was fabricated to validate the design. Magnetophoretic velocities are measured and a simple tracking method is proposed.
The preliminary experimental results indicate that, despite some limitations, the proposed technique has the potential to be appropriate

  19. Validation of a New Self-Report Measure of Parental Attributions

    Science.gov (United States)

    Snarr, Jeffery D.; Slep, Amy M. Smith; Grande, Vincent P.

    2009-01-01

    Attributional theory and empirical evidence suggest that a tendency to make stable, global self-causal attributions for undesirable events is associated with negative outcomes. However, existing self-report measures of parental attributions do not account for the possibility that dysfunctional parent-causal attributions for child misbehavior might…

  20. Error-disturbance uncertainty relations in neutron spin measurements

    Science.gov (United States)

    Sponar, Stephan

    2016-05-01

Heisenberg’s uncertainty principle, in a formulation of uncertainties intrinsic to any quantum system, has been rigorously proven and demonstrated in various quantum systems. Nevertheless, Heisenberg’s original formulation of the uncertainty principle was given in terms of a reciprocal relation between the error of a position measurement and the disturbance thereby induced on a subsequent momentum measurement. However, a naive generalization of a Heisenberg-type error-disturbance relation for arbitrary observables is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa’s relation is not optimal. Recently, Branciard has derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance under certain conditions. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin-component, to test EDURs. We demonstrate that Heisenberg’s original EDUR is violated, and that Ozawa’s and Branciard’s EDURs are valid in a wide range of experimental parameters, as well as the tightness of Branciard’s relation.

  1. Uncertainty in measurement of protein circular dichroism spectra

    Science.gov (United States)

    Cox, Maurice G.; Ravi, Jascindra; Rakowska, Paulina D.; Knight, Alex E.

    2014-02-01

    Circular dichroism (CD) spectroscopy of proteins is widely used to measure protein secondary structure, and to detect changes in secondary and higher orders of structure, for applications in research and in the quality control of protein products such as biopharmaceuticals. However, objective comparison of spectra is challenging because of a limited quantitative understanding of the sources of error in the measurement. Statistical methods can be used for comparisons, but do not provide a mechanism for dealing with systematic, as well as random, errors. Here we present a measurement model for CD spectroscopy of proteins, incorporating the principal sources of uncertainty, and use the model in conjunction with experimental data to derive an uncertainty budget. We show how this approach could be used in practice for the objective comparison of spectra, and discuss the benefits and limitations of this strategy.

  2. [Evaluation of uncertainty in measurement of radiated disturbance and analysis of the result].

    Science.gov (United States)

    Wang, Weiming; Jiang, Sui

    2012-03-01

This paper evaluates the uncertainty in the measurement of radiated disturbance by analyzing and calculating the components that influence the uncertainty. The effectiveness of the uncertainty evaluation has been confirmed through proficiency validation.

  3. Measurement of backscatter factor for diagnostic radiology: methodology and uncertainties

    Energy Technology Data Exchange (ETDEWEB)

Rosado, P.H.G.; Nogueira, M.D.S.; Squair, P.L.; Da Silva, T.A. [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN) 30123-970, Minas Gerais (Brazil)]. e-mail: phgr@cdtn.br

    2007-07-01

Full text: Backscatter factors were experimentally determined for the diagnostic X-ray qualities recommended by the International Electrotechnical Commission (IEC) for primary beams (RQR). Harshaw LiF-100H thermoluminescent dosemeters used for determining the backscatter were calibrated against an ionization chamber traceable to the National Metrology Laboratory. A 300 mm x 300 mm x 150 mm PMMA slab phantom was used for deep-dose measurements. For the in-phantom measurements, the dosemeters were placed on the central axis of the X-ray beam at five different depths d in the phantom (5, 10, 15, 25 and 35 mm) along the beam direction. The typical combined standard uncertainty of the backscatter factor value was 6%. The main sources of uncertainty were the calibration procedure, the TLD dosimetry and the use of deep-dose curves. (Author)

  4. Permissible limits for uncertainty of measurement in laboratory medicine.

    Science.gov (United States)

    Haeckel, Rainer; Wosniok, Werner; Gurr, Ebrhard; Peil, Burkhard

    2015-07-01

The international standard ISO 15189 requires that medical laboratories estimate the uncertainty of their quantitative test results obtained from patients' specimens. The standard does not provide details on how, and within which limits, the measurement uncertainty should be determined. The most common concept for establishing permissible uncertainty limits is to relate them to biological variation, defining the rate of false-positive results, or to base the limits on the state of the art. The state of the art is usually derived from data provided by a group of selected medical laboratories. The approach based on biological variation is to be preferred because of its transparency and scientific basis. Hitherto, all recommendations were based on a linear relationship between biological and analytical variation, leading to limits which are sometimes too stringent or too permissive for routine testing in laboratory medicine. In contrast, the present proposal is based on a non-linear relationship between biological and analytical variation, leading to more realistic limits. The proposed algorithms can be applied to all measurands and consider any quantity to be assured. The suggested approach provides the above-mentioned details and is a compromise between the biological variation concept, the GUM uncertainty model and the technical state of the art.

  5. Significant Figures in Measurements with Uncertainty: A Working Criterion

    Science.gov (United States)

    Vilchis, Abraham

    2017-03-01

Generally speaking, students have difficulty reporting measurements and estimates of quantities used in the laboratory, and handling the significant figures associated with them. When required to make calculations involving quantities with different numbers of significant figures, they have difficulty assigning the corresponding digits to the final result. When, in addition, the quantities have uncertainty, the operations entailed pose an even greater challenge. The article advocates some working rules for students (and teachers) in an effort to address this problem.
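One such working rule, round the uncertainty to a fixed number of significant figures and the value to the matching decimal place, can be sketched as follows (this is a common convention, not necessarily the article's exact criterion):

```python
import math

def round_measurement(value, uncertainty, sig_figs=1):
    """Round the uncertainty to `sig_figs` significant figures and
    round the value to the same decimal place."""
    if uncertainty <= 0:
        raise ValueError("uncertainty must be positive")
    # Decimal position of the last significant digit of the uncertainty
    exponent = math.floor(math.log10(uncertainty)) - (sig_figs - 1)
    u_rounded = round(uncertainty, -exponent)
    v_rounded = round(value, -exponent)
    return v_rounded, u_rounded

print(round_measurement(9.80665, 0.0321))   # -> (9.81, 0.03)
```

The same rule works for large numbers: a result of 1234 with uncertainty 56 would be reported as 1230 +/- 60.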

  6. Measurement Of Beer Taste Attributes Using An Electronic Tongue

    Science.gov (United States)

    Polshin, Evgeny; Rudnitskaya, Alisa; Kirsanov, Dmitry; Lammertyn, Jeroen; Nicolaï, Bart; Saison, Daan; Delvaux, Freddy R.; Delvaux, Filip; Legin, Andrey

    2009-05-01

    The present work deals with the results of the application of an electronic tongue system as an analytical tool for the rapid assessment of beer flavour. Fifty samples of Belgian and Dutch beers of different types, characterized with respect to sensory properties and bitterness, were analyzed using an electronic tongue (ET) based on potentiometric chemical sensors. The ET was capable of predicting 10 sensory attributes of beer with good precision, including sweetness, sourness, intensity and body, as well as the most important instrumental parameter, bitterness. These results show good promise for the further development of the ET as a new analytical technique for the rapid assessment of taste attributes, and of bitterness in particular, in the food and brewing industries.

  7. Measurement Uncertainty Analysis of the Strain Gauge Based Stabilographic Platform

    Directory of Open Access Journals (Sweden)

    Walendziuk Wojciech

    2014-08-01

    The present article describes the construction of a stabilographic platform which records a standing patient's deflection from their point of balance. The device consists of a toughened glass slab supported by four force sensors. The force transducers are connected to a measurement system based on a 24-bit ADC, which captures the slight body movements of a patient. The data are transferred to a computer in real time, where the analysis is conducted. The article explains the principle of operation as well as the algorithm for evaluating the measurement uncertainty of the centre of pressure (COP) coordinates (x, y).
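    The COP computation for a four-sensor plate can be sketched as follows; the corner layout, function names and plate dimensions are illustrative assumptions, not details from the record:

    ```python
    # Sketch of centre-of-pressure (COP) computation for a rectangular plate
    # supported at its four corners by force sensors. Origin at the plate
    # centre; tl/tr/bl/br denote top-left, top-right, bottom-left, bottom-right.
    def centre_of_pressure(f_tl, f_tr, f_bl, f_br, width, depth):
        """Return (x, y) coordinates of the COP from the four corner forces."""
        total = f_tl + f_tr + f_bl + f_br
        x = (width / 2.0) * ((f_tr + f_br) - (f_tl + f_bl)) / total
        y = (depth / 2.0) * ((f_tl + f_tr) - (f_bl + f_br)) / total
        return x, y
    ```

    Equal forces on all four sensors place the COP at the plate centre; shifting weight toward the right-hand sensors moves the x coordinate accordingly.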

  8. Extracting fuzzy rules under uncertainty and measuring definability using rough sets

    Science.gov (United States)

    Culas, Donald E.

    1991-01-01

    Although computers have come a long way since their invention, at the hardware level they are basically only able to handle crisp values. Unfortunately, the world we live in consists of problems which fail to fall into this category: uncertainty is all too common. This work looks at a problem involving uncertainty; specifically, it deals with attributes that are fuzzy sets. Under this condition, knowledge is acquired by looking at examples, each of which provides a condition as well as a decision. Based on the examples given, two sets of rules are extracted: certain and possible. Furthermore, measures are constructed of how much these rules are believed in, and finally, the decisions are defined as a function of the terms used in the conditions.

  9. Research on uncertainty in measurement assisted alignment in aircraft assembly

    Institute of Scientific and Technical Information of China (English)

    Chen Zhehan; Du Fuzhou; Tang Xiaoqing

    2013-01-01

    Operations for assembling and joining large aircraft components have been transformed into novel digital and flexible processes by digital measurement-assisted alignment. The positions and orientations (P&O) of the aligned components are critical characteristics that assure the geometrical positions and relationships of those components. Therefore, evaluating the P&O of a component is considered necessary and critical for ensuring accuracy in aircraft assembly. The uncertainty of position and orientation (U-P&O), as part of the P&O evaluation result, needs to be given to ensure the integrity and credibility of the result; furthermore, U-P&O is necessary for error tracing and quality evaluation in measurement-assisted aircraft assembly. However, current research mainly focuses on the integration of measurement into the assembly process and usually ignores the uncertainty of the measured result and its influence on quality evaluation. This paper focuses on the expression, analysis, and application of U-P&O in measurement-assisted alignment. The geometrical and algebraic connotations of U-P&O are presented. An analytical algorithm for evaluating the multi-dimensional U-P&O is then given, and the factors affecting U-P&O and its characteristics are discussed. Finally, U-P&O is used to evaluate alignment in aircraft assembly for quality evaluation and improvement, and cases are introduced to illustrate the methodology.

  10. Uncertainty Estimation of Global Precipitation Measurement through Objective Validation Strategy

    Science.gov (United States)

    KIM, H.; Utsumi, N.; Seto, S.; Oki, T.

    2014-12-01

    Since the Tropical Rainfall Measuring Mission (TRMM) was launched in 1997 as the first satellite mission dedicated to measuring precipitation, the spatiotemporal gaps in precipitation observation have been filled significantly. On February 27th, 2014, the Dual-frequency Precipitation Radar (DPR) was launched as the core observatory of the Global Precipitation Measurement (GPM) mission, an international multi-satellite mission aiming to provide a global three-hourly map of rainfall and snowfall. In addition to Ku-band, a Ka-band radar is newly equipped, and their combination is expected to deliver higher precision than the precipitation measurements of TRMM/PR. In this study, the GPM level-2 orbit products are evaluated against various precipitation observations, including TRMM/PR, in-situ data, and ground radar. In a preliminary validation over intersecting orbits of DPR and TRMM, the Ku-band measurements of the two satellites show very similar spatial patterns and intensities, and the DPR is capable of capturing a broader range of precipitation intensities than TRMM. Furthermore, we suggest a validation strategy based on an 'objective classification' of background atmospheric mechanisms. The Japanese 55-year Reanalysis (JRA-55) and auxiliary datasets (e.g., tropical cyclone best tracks) are used to objectively determine the types of precipitation. The uncertainty of the abovementioned precipitation products is quantified as their relative differences and characterized for the different precipitation mechanisms. It is also discussed how this uncertainty affects the synthesis of TRMM and GPM into a long-term, internally consistent satellite precipitation observation record.

  11. Estimation of measuring uncertainty for optical micro-coordinate measuring machine

    Institute of Scientific and Technical Information of China (English)

    Kang Song(宋康); Zhuangde Jiang(蒋庄德)

    2004-01-01

    Based on the principles for evaluating the measurement uncertainty of a traditional coordinate measuring machine (CMM), the measurement uncertainty of an optical micro-CMM has been analyzed and evaluated. The optical micro-CMM is an integrated measuring system with optical, mechanical, and electronic components, each of which may influence its measurement uncertainty. If the influence of laser speckle is taken into account, the longitudinal measurement uncertainty is 2.0 μm; otherwise it is 0.88 μm. The estimate of the combined uncertainty of the optical micro-CMM was shown to be correct and reliable by measuring standard reference materials and simulating the influence of the laser beam diameter. Based on Heisenberg's uncertainty principle and quantum mechanics, a method for improving the measurement accuracy of the optical micro-CMM by adding a diaphragm at the receiving end of the light path is proposed, and the results are verified by experiments.
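    Combining independent uncertainty components into a synthetic (combined) uncertainty follows the usual root-sum-of-squares rule. A minimal sketch, in which the split of the 2.0 μm figure into a speckle and a non-speckle part is inferred from the two quoted numbers rather than stated in the record:

    ```python
    import math

    def combined_standard_uncertainty(components):
        """GUM-style combination of independent uncertainty components
        (root sum of squares)."""
        return math.sqrt(sum(u * u for u in components))

    # If the non-speckle uncertainty is 0.88 um and the total with speckle is
    # 2.0 um, the implied speckle contribution is sqrt(2.0**2 - 0.88**2).
    u_speckle = math.sqrt(2.0**2 - 0.88**2)   # ~1.80 um (inferred, illustrative)
    u_total = combined_standard_uncertainty([0.88, u_speckle])
    ```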

  12. Quantifying Uncertainty in Brain Network Measures using Bayesian Connectomics

    Directory of Open Access Journals (Sweden)

    Ronald Johannes Janssen

    2014-10-01

    The wiring diagram of the human brain can be described in terms of graph measures that characterize structural regularities. These measures require an estimate of whole-brain structural connectivity, for which one may resort to deterministic or thresholded probabilistic streamlining procedures. While these procedures have provided important insights about the characteristics of human brain networks, they ultimately rely on unwarranted assumptions such as noise-free data or the use of an arbitrary threshold. The resulting structural connectivity estimates, as well as the derived graph measures, therefore fail to fully take into account the inherent uncertainty in the structural estimate. In this paper, we illustrate a straightforward way of obtaining posterior distributions over graph metrics using Bayesian inference. It is shown that this posterior distribution can be used to quantify uncertainty about graph-theoretical measures at the single-subject level, thereby providing a more nuanced view of the graph-theoretical properties of human brain connectivity. We refer to this model-based approach to connectivity analysis as Bayesian connectomics.
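    The idea of a posterior distribution over a graph measure, rather than a single point estimate, can be sketched as follows; the Beta edge posterior used here is a simple stand-in, not the paper's actual generative model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, n_samples = 20, 500

    # Illustrative stand-in for a posterior over connectomes: an independent
    # Beta posterior on each edge probability (assumption for this sketch).
    alpha = rng.uniform(0.5, 4.0, size=(n_nodes, n_nodes))
    beta = rng.uniform(0.5, 4.0, size=(n_nodes, n_nodes))

    densities = np.empty(n_samples)
    for s in range(n_samples):
        p = rng.beta(alpha, beta)                     # sample edge probabilities
        a = rng.random((n_nodes, n_nodes)) < p        # sample a directed graph
        np.fill_diagonal(a, False)
        densities[s] = a.sum() / (n_nodes * (n_nodes - 1))

    # Posterior summary of the graph measure (here: connection density)
    lo, hi = np.percentile(densities, [2.5, 97.5])
    ```

    Any other graph metric could replace density in the loop; the point is that each posterior connectome sample yields one metric value, and the collection quantifies single-subject uncertainty.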

  13. Measurement Uncertainty Investigation in the Multi-probe OTA Setups

    DEFF Research Database (Denmark)

    Fan, Wei; Szini, Istvan Janos; Foegelle, M. D.

    2014-01-01

    Extensive efforts are underway to standardize over the air (OTA) testing of multiple input multiple output (MIMO) capable terminals in COST IC1004, 3GPP RAN4 and CTIA. Due to the ability to reproduce realistic radio propagation environments inside the anechoic chamber and evaluate end user metrics in real world scenarios, the multi-probe based method has attracted huge interest from both industry and academia. This contribution attempts to identify some of the measurement uncertainties of practical multi-probe setups and to provide some guidance for establishing a multi-probe anechoic chamber setup. It presents the results of uncertainty measurements carried out in three practical multi-probe setups; some sources of measurement error, e.g. cable effects and cable termination, are identified based on the measurement results.

  14. Measurement of nuclear activity with Ge detectors and its uncertainty

    CERN Document Server

    Cortes, C A P

    1999-01-01

    The objective of this work is to analyse the influence quantities which affect the activity measurement of isolated gamma-emitting radioactive sources prepared by the gravimetric method, as well as to determine the uncertainty of such a measurement when it is carried out with a gamma spectrometer based on a germanium detector. The work is developed in five chapters. In the first, named Basic principles, a brief description is given of the meaning of the word measurement and its implications, and the concepts used in this work are presented. In the second chapter, the gravimetric method used for the manufacture of the isolated gamma-emitting radioactive sources is described, and it tackles the problem of determining the main influence ... The results are presented in the fifth chapter and are applied to establish the optimum conditions for measuring the activity of an isolated gamma-emitting radioactive source with a germanium-detector spectrometer. (Author)

  15. Heisenberg uncertainty relation and statistical measures in the square well

    Directory of Open Access Journals (Sweden)

    Jaime Sañudo

    2012-07-01

    A non-stationary state of the one-dimensional infinite square well, formed by a combination of the ground state and the first excited state, is considered. The statistical complexity and the Fisher-Shannon entropy in position and momentum are calculated as functions of time for this system. These measures are compared with the Heisenberg uncertainty product, $\Delta x\Delta p$. It is observed that the extreme values of $\Delta x\Delta p$ coincide in time with extreme values of the other two statistical magnitudes.
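    The time dependence of the uncertainty product for this superposition state can be checked numerically; a sketch with $\hbar = m = L = 1$, where the grid size and integration scheme are illustrative choices:

    ```python
    import numpy as np

    # Equal-weight superposition of the ground and first excited states of the
    # infinite square well on [0, L], with hbar = m = L = 1 and E_n = n^2 pi^2 / 2.
    L = 1.0
    x = np.linspace(0.0, L, 2001)
    h = x[1] - x[0]
    phi1 = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)
    phi2 = np.sqrt(2.0 / L) * np.sin(2.0 * np.pi * x / L)
    E1, E2 = np.pi**2 / 2.0, 4.0 * np.pi**2 / 2.0

    def integ(y):
        # trapezoidal rule on the uniform grid (handles complex integrands)
        return np.sum((y[1:] + y[:-1]) * 0.5) * h

    def uncertainty_product(t):
        psi = (phi1 * np.exp(-1j * E1 * t) + phi2 * np.exp(-1j * E2 * t)) / np.sqrt(2.0)
        rho = np.abs(psi) ** 2
        mean_x = integ(rho * x).real
        delta_x = np.sqrt(integ(rho * x**2).real - mean_x**2)
        dpsi = np.gradient(psi, h)
        mean_p = integ(np.conj(psi) * (-1j) * dpsi).real
        mean_p2 = integ(np.abs(dpsi) ** 2).real   # <p^2> = int |psi'|^2, psi vanishes at the walls
        delta_p = np.sqrt(mean_p2 - mean_p**2)
        return delta_x * delta_p
    ```

    Evaluating `uncertainty_product` over a grid of times shows the product oscillating while always staying above the Heisenberg bound of 1/2.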

  16. Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors

    Science.gov (United States)

    Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.

    2016-06-01

    Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado, from June 23rd to July 13th, 2014. The major goals of this experiment were the following: (1) characterize the sampling error for vertical velocity statistics; (2) analyze the sensitivities of different Doppler lidar systems; (3) compare various single- and dual-Doppler retrieval techniques; (4) characterize the error of spatial representativeness for separation distances up to 3 km; and (5) validate turbulence analysis techniques and retrievals from Doppler lidars. This experiment brought together five Doppler lidars, both commercial and research grade, for a three-week comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.

  17. Estimation of measurement uncertainty caused by surface gradient for a white light interferometer.

    Science.gov (United States)

    Liu, Mingyu; Cheung, Chi Fai; Ren, Mingjun; Cheng, Ching-Hsiang

    2015-10-10

    Although the scanning white light interferometer can provide measurement results with subnanometer resolution, the measurement accuracy is far from perfect. The surface roughness and surface gradient have significant influence on the measurement uncertainty since the corresponding height differences within a single CCD pixel cannot be resolved. This paper presents an uncertainty estimation method for estimating the measurement uncertainty due to the surface gradient of the workpiece. The method is developed based on the mathematical expression of an uncertainty estimation model which is derived and verified through a series of experiments. The results show that there is a notable similarity between the predicted uncertainty from the uncertainty estimation model and the experimental measurement uncertainty, which demonstrates the effectiveness of the method. With the establishment of the proposed uncertainty estimation method, the uncertainty associated with the measurement result can be determined conveniently.

  18. Uncertainties in the national inventory of methane emissions from rice cultivation: field measurements and modeling approaches

    Science.gov (United States)

    Zhang, Wen; Sun, Wenjuan; Li, Tingting

    2017-01-01

    Uncertainties in national inventories originate from a variety of sources, including methodological failures, errors, and insufficiency of supporting data. In this study, we analyzed these sources and their contribution to uncertainty in the national inventory of rice paddy methane emissions in China and compared the differences among the approaches used (e.g., direct measurements, simple regressions, and more complicated models). For the 495 field measurements we collected from the scientific literature, the area-weighted 95 % CI (confidence interval) ranged from 13.7 to 1115.4 kg CH4 ha-1, and the histogram distribution of the measurements agreed well with parameterized gamma distributions. For the models, we compared the performance of methods of different complexity (i.e., the CH4MOD model, representing a complicated method, and two less complex statistical regression models taken from the literature) to evaluate the uncertainties associated with model performance as well as the quality and accessibility of the regional datasets. Comparisons revealed that the CH4MOD model may perform worse than the comparatively simple regression models when insufficient input data for the model are available. As simulated by CH4MOD with data on irrigation, organic matter incorporation, and soil properties of rice paddies, the modelled methane fluxes varied from 17.2 to 708.3 kg CH4 ha-1, covering 63 % of the range of the field measurements. When applying the modeling approach to the 10 km × 10 km gridded dataset of the model input variables, the within-grid variations, estimated via the Monte Carlo method, were found to be 81.2-95.5 % of the grid means. Upscaling the grid estimates to the national inventory, the total methane emission from the rice paddies was 6.43 (3.79-9.77) Tg. Model error in CH4MOD contributed 56.6 % of the total uncertainty, with the remaining 43.4 % attributed to errors in and the scarcity of the spatial datasets of the model inputs. Our analysis reveals the
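    The gamma parameterization and Monte Carlo interval described above can be sketched as follows, with synthetic data standing in for the 495 field measurements and a method-of-moments fit as a simple stand-in for the paper's parameterization:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Synthetic stand-in for the field measurements of CH4 flux (kg CH4/ha);
    # the shape and scale here are invented for illustration.
    fluxes = rng.gamma(shape=1.8, scale=150.0, size=495)

    # Method-of-moments fit of a gamma distribution to the measurements
    m, v = fluxes.mean(), fluxes.var(ddof=1)
    shape_hat = m * m / v
    scale_hat = v / m

    # Monte Carlo 95 % interval implied by the fitted distribution
    draws = rng.gamma(shape_hat, scale_hat, size=100_000)
    ci_low, ci_high = np.percentile(draws, [2.5, 97.5])
    ```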

  19. Probability and measurement uncertainty in physics a Bayesian primer

    CERN Document Server

    D'Agostini, Giulio

    1995-01-01

    Bayesian statistics is based on the subjective definition of probability as {\\it ``degree of belief''} and on Bayes' theorem, the basic tool for assigning probabilities to hypotheses combining {\\it a priori} judgements and experimental information. This was the original point of view of Bayes, Bernoulli, Gauss, Laplace, etc. and contrasts with later ``conventional'' (pseudo-)definitions of probabilities, which implicitly presuppose the concept of probability. These notes show that the Bayesian approach is the natural one for data analysis in the most general sense, and for assigning uncertainties to the results of physical measurements - while at the same time resolving philosophical aspects of the problems. The approach, although little known and usually misunderstood among the High Energy Physics community, has become the standard way of reasoning in several fields of research and has recently been adopted by the international metrology organizations in their recommendations for assessing measurement uncert...

  20. Measuring Leader Attributes in the Army Reconnaissance Course

    Science.gov (United States)

    2016-01-01

    accomplished through the use of existing observer-based measurement tools (Aptima’s SPOTLITE tool) and database software customized to the ARC’s use...report on student progression. This discussion yielded a set of requirements from which a series of static mockups were produced. The second...and third workshops focused on review and revision of the mockups generated after the first workshop. The second workshop featured an initial set of

  1. Impact of AMS-02 Measurements on Reducing GCR Model Uncertainties

    Science.gov (United States)

    Slaba, T. C.; O'Neill, P. M.; Golge, S.; Norbury, J. W.

    2015-01-01

    For vehicle design, shield optimization, mission planning, and astronaut risk assessment, the exposure from galactic cosmic rays (GCR) poses a significant and complex problem both in low Earth orbit and in deep space. To address this problem, various computational tools have been developed to quantify the exposure and risk in a wide range of scenarios. Generally, the tool used to describe the ambient GCR environment provides the input to subsequent computational tools and is therefore a critical component of end-to-end procedures. Over the past few years, several researchers have independently and very carefully compared some of the widely used GCR models to more rigorously characterize model differences and quantify uncertainties. All of the GCR models studied rely heavily on calibration to available near-Earth measurements of GCR particle energy spectra, typically over restricted energy regions and short time periods. In this work, we first review recent sensitivity studies quantifying the ions and energies in the ambient GCR environment of greatest importance to exposure quantities behind shielding. Currently available measurements used to calibrate and validate GCR models are also summarized within this context. It is shown that the AMS-02 measurements will fill a critically important gap in the measurement database. The emergence of AMS-02 measurements also provides a unique opportunity to validate existing models against measurements that were not used to calibrate free parameters in the empirical descriptions. Discussion is given regarding rigorous approaches to implement the independent validation efforts, followed by recalibration of empirical parameters.

  2. Uncertainties and re-analysis of glacier mass balance measurements

    Directory of Open Access Journals (Sweden)

    M. Zemp

    2013-03-01

    Glacier-wide mass balance has been measured for more than sixty years and is widely used as an indicator of climate change and to assess the glacier contribution to runoff and sea-level rise. Until now, comprehensive uncertainty assessments have rarely been carried out, and mass balance data have often been applied using rough error estimates or without error considerations. In this study, we propose a framework for re-analyzing glacier mass balance series, including conceptual and statistical toolsets for the assessment of random and systematic errors as well as for validation and calibration (if necessary) of the glaciological against the geodetic balance results. We demonstrate the usefulness and limitations of the proposed scheme drawing on an analysis that comprises over 50 recording periods for a dozen glaciers, and we make recommendations to investigators and users of glacier mass balance data. Re-analysis of glacier mass balance series needs to become a standard procedure for every monitoring programme to improve data quality and provide thorough uncertainty estimates.

  3. Reducing Uncertainty: Implementation of Heisenberg Principle to Measure Company Performance

    Directory of Open Access Journals (Sweden)

    Anna Svirina

    2015-08-01

    The paper addresses the problem of reducing uncertainty in the estimation of future company performance, which results from the wide range of probable efficiencies of an enterprise's intangible assets. To address this problem, the paper suggests using principles of quantum economics, i.e. applying the Heisenberg principle to measure the efficiency and potential of a company's intangible assets. It is proposed that for intangibles it is not possible to estimate both potential and efficiency at a given point in time. To support this thesis, data on resource potential and efficiency from mid-Russian companies were evaluated within a deterministic approach, which did not allow the probability of achieving a certain resource efficiency to be evaluated, and within a quantum approach, which allowed estimation of the central point around which the probable efficiency of resources is concentrated. These approaches were visualized by means of LabView software. It was shown that for estimating the performance of tangible assets a deterministic approach should be used, while for intangible assets the quantum approach allows better prediction of future performance. On the basis of these findings we propose a holistic approach to estimating company resource efficiency in order to reduce uncertainty in modeling company performance.

  4. The contribution of sampling uncertainty to total measurement uncertainty in the enumeration of microorganisms in foods.

    Science.gov (United States)

    Jarvis, Basil; Hedges, Alan J; Corry, Janet E L

    2012-06-01

    Random samples of each of several food products were obtained from defined lots during processing or from retail outlets. The foods included raw milk (sampled on farm and from a bulk-milk tanker), sprouted seeds, raw minced meat, frozen de-shelled raw prawns, neck-flaps from raw chicken carcasses and ready-to-eat sandwiches. Duplicate sub-samples, generally of 100 g, were examined for aerobic colony counts; some were also examined for counts of presumptive Enterobacteriaceae and campylobacters. After log10 transformation, all sets of colony count data were evaluated for conformity with the normal distribution and analysed by standard ANOVA and a robust ANOVA to determine the relative contributions of the variance between and within samples to the overall variance. Sampling variance accounted for >50% of the reproducibility variance for the majority of foods examined; in many cases it exceeded 85%. We also used an iterative procedure of re-sampling without replacement to determine the effects of sample size (i.e. the number of samples) on the precision of the estimate of variance for one of the larger data sets. The variance of the repeatability and reproducibility variances depended on the number of replicate samples tested (n) in a manner that was characteristic of the underlying distribution. The results are discussed in relation to the use of measurement uncertainty in assessing compliance of results with microbiological criteria for foods.
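    For duplicate sub-samples, the split of reproducibility variance into between-sample (sampling) and within-sample (analytical) components can be sketched with a standard one-way ANOVA estimator; the function name and the synthetic check data are assumptions for this sketch:

    ```python
    import numpy as np

    def variance_components(duplicates):
        """One-way ANOVA variance components for duplicate sub-samples.

        `duplicates` is an (n_samples, 2) array of log10 colony counts.
        Returns (between-sample variance, within-sample variance).
        """
        duplicates = np.asarray(duplicates, dtype=float)
        # Within-sample (repeatability) variance from paired differences
        within = np.mean((duplicates[:, 0] - duplicates[:, 1]) ** 2) / 2.0
        # Mean square between samples; E[MSB] = 2*sigma_between^2 + sigma_within^2
        ms_between = 2.0 * duplicates.mean(axis=1).var(ddof=1)
        between = max((ms_between - within) / 2.0, 0.0)
        return between, within
    ```

    The sampling (between-sample) share of the reproducibility variance is then `between / (between + within)`, the quantity the abstract reports as exceeding 50% for most foods.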

  5. Measurements of fusion neutron yields by neutron activation technique: Uncertainty due to the uncertainty on activation cross-sections

    Energy Technology Data Exchange (ETDEWEB)

    Stankunas, Gediminas, E-mail: gediminas.stankunas@lei.lt [Lithuanian Energy Institute, Laboratory of Nuclear Installation Safety, Breslaujos str. 3, LT-44403 Kaunas (Lithuania); EUROfusion Consortium, JET, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); Batistoni, Paola [ENEA, Via E. Fermi, 45, 00044 Frascati, Rome (Italy); EUROfusion Consortium, JET, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); Sjöstrand, Henrik; Conroy, Sean [Department of Physics and Astronomy, Uppsala University, PO Box 516, SE-75120 Uppsala (Sweden); EUROfusion Consortium, JET, Culham Science Centre, Abingdon OX14 3DB (United Kingdom)

    2015-07-11

    The neutron activation technique is routinely used in fusion experiments to measure neutron yields. This paper investigates the uncertainty in these measurements arising from the uncertainties in dosimetry and activation cross-sections. For this purpose, activation cross-sections were taken from the International Reactor Dosimetry and Fusion File (IRDFF-v1.05) in 640-group ENDF-6 format for several reactions of interest for both 2.5 and 14 MeV neutrons. Activation coefficients (reaction rates) have been calculated using the neutron flux spectra at the JET vacuum vessel, for both DD and DT plasmas, calculated by MCNP in the required 640-energy-group format. The related uncertainties for the JET neutron spectra are evaluated as well, using the covariance data available in the library. These uncertainties are in general small, but not negligible when high accuracy is required in the determination of fusion neutron yields.
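    Propagating group-wise cross-section covariance data to a reaction rate can be sketched as follows; the 5-group fluxes, cross sections and correlation structure below are toy values, not the 640-group IRDFF/MCNP data used in the paper:

    ```python
    import numpy as np

    # Toy 5-group problem: reaction rate R = sum_g sigma_g * phi_g
    phi = np.array([1.0e10, 5.0e9, 2.0e9, 1.0e9, 5.0e8])      # group fluxes
    sigma = np.array([0.10, 0.50, 1.20, 2.00, 3.50]) * 1e-24  # cross sections (cm^2)

    rel_unc = np.array([0.02, 0.03, 0.05, 0.04, 0.06])        # relative uncertainties
    corr = np.full((5, 5), 0.3) + 0.7 * np.eye(5)             # assumed correlation matrix
    cov = np.outer(sigma * rel_unc, sigma * rel_unc) * corr   # covariance of sigma

    rate = float(phi @ sigma)
    rate_var = float(phi @ cov @ phi)                         # sandwich rule
    rel_rate_unc = np.sqrt(rate_var) / rate
    ```

    With real covariance libraries the correlation matrix comes from the evaluated file; the sandwich rule `phi @ cov @ phi` is the same.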

  6. Past changes in the vertical distribution of ozone – Part 1: Measurement techniques, uncertainties and availability

    Directory of Open Access Journals (Sweden)

    B. Hassler

    2014-05-01

    Peak stratospheric chlorofluorocarbon (CFC) and other ozone-depleting substance (ODS) concentrations were reached in the mid- to late 1990s. Detection and attribution of the expected recovery of the stratospheric ozone layer in an atmosphere with reduced ODSs, as well as efforts to understand the evolution of stratospheric ozone in the presence of increasing greenhouse gases, are key current research topics. These require a critical examination of the ozone changes with an accurate knowledge of the spatial (geographical and vertical) and temporal ozone response. For such an examination, it is vital that the quality of the measurements used be as high as possible and that measurement uncertainties be well quantified. In preparation for the 2014 United Nations Environment Programme (UNEP)/World Meteorological Organization (WMO) Scientific Assessment of Ozone Depletion, the SPARC/IO3C/IGACO-O3/NDACC (SI2N) Initiative was designed to study and document changes in the global ozone profile distribution. This requires assessing long-term ozone profile data sets with regard to measurement stability and uncertainty characteristics. The ultimate goal is to establish their suitability for estimating long-term ozone trends in order to contribute to ozone recovery studies. Some of the data sets have been improved as part of this initiative, with updated versions now available. This summary presents an overview of the stratospheric ozone profile measurement data sets (ground- and satellite-based) available for ozone recovery studies. We document measurement techniques, spatial and temporal coverage, vertical resolution, native units and measurement uncertainties. In addition, the latest data versions are briefly described (including data version updates), and multiple retrievals are detailed when available for a given satellite instrument. Archive location information for each data set is also given.

  7. Uncertainty in measurement: a review of monte carlo simulation using microsoft excel for the calculation of uncertainties through functional relationships, including uncertainties in empirically derived constants.

    Science.gov (United States)

    Farrance, Ian; Frenkel, Robert

    2014-02-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship.
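    The spreadsheet MCS procedure described above translates directly into a few lines of code; a sketch in Python rather than Excel, with an illustrative measurand and invented input uncertainties (including an empirically derived "constant" that carries its own uncertainty):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 200_000

    # Propagate normally distributed input uncertainties through a functional
    # relationship, here the illustrative measurand y = a * x / b.
    a = rng.normal(2.50, 0.05, n)   # empirical "constant" and its uncertainty
    x = rng.normal(10.0, 0.20, n)   # measured quantity (e.g. from IQC data)
    b = rng.normal(5.00, 0.10, n)   # second measured quantity

    y = a * x / b
    y_mean, y_sd = y.mean(), y.std(ddof=1)

    # First-order GUM comparison: relative variances add for products/quotients
    gum_rel = np.sqrt((0.05 / 2.5)**2 + (0.20 / 10.0)**2 + (0.10 / 5.0)**2)
    ```

    The empirical histogram of `y` plays the role of the output probability distribution; its standard deviation agrees closely with the first-order GUM result, without any partial derivatives.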

  8. Capital and time: uncertainty and qualitative measures of inequality.

    Science.gov (United States)

    Bear, Laura

    2014-12-01

    This review compares Piketty's and Marx's approaches to capital and time in order to argue for the importance of qualitative measures of inequality. These measures emphasize experiences of uncertainty and insecurity that vary across classes and through history. They explore how the social rhythms of capital profoundly affect the ability to plan a life course. Quantitative measures such as those used by Piketty, which focus on the amount of capital that accrues through time, cannot capture such important phenomena, especially because their calculations rest on absolute amounts of capital recorded in formal state statistics. Their limits are particularly revealed if we consider issues of informal labour, social reproduction, and changing institutional forms of public debt. If we are to build the interdisciplinary rapprochement between social science and economics that Piketty calls for, it must be through asserting the value of qualitative measures of insecurity and its effects on decision making. These are important to track both at the macro level of institutions and at the micro level of human lives. It is, therefore, by emphasizing the existing strengths of both anthropology and history that we can meet Piketty's important challenge to make our scholarship relevant to current political and social debates.

  9. Uncertainty of measurement or of mean value for the reliable classification of contaminated land.

    Science.gov (United States)

    Boon, Katy A; Ramsey, Michael H

    2010-12-15

    Classification of contaminated land is important for risk assessment, so it is vital to understand and quantify all of the uncertainties involved in the assessment of contaminated land. This paper uses a case study to compare two methods for assessing the uncertainty in site investigations (uncertainty of individual measurements, including that from sampling, and uncertainty of the mean value of all measurements within an area) and how the different methods affect the decisions made about a site. Using the 'uncertainty of the mean value', there is shown to be no significant possibility of 'significant harm' under UK guidance at one particular test site, but if the 'uncertainty of the measurements' is considered, a significant proportion (50%) of the site is shown to be possibly contaminated. This raises doubts as to whether the current method using 'uncertainty of the mean' is sufficiently robust, and suggests that 'uncertainty of measurement' information may be preferable, or at least beneficial when used in conjunction with it.
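A toy numerical contrast between the two assessment approaches can be sketched as follows; the guideline value, the concentration data and the 30% expanded measurement uncertainty are invented, not the case-study values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Same data set, assessed two ways (all values are illustrative).
threshold = 50.0                           # guideline value (mg/kg)
conc = rng.normal(40.0, 12.0, 25)          # measured concentrations at 25 locations
u_meas = 0.30 * conc                       # expanded measurement uncertainty (30%)

# Approach 1: uncertainty of the mean value over the whole area
mean, sem = conc.mean(), conc.std(ddof=1) / np.sqrt(conc.size)
print(f"mean + 2*SEM = {mean + 2 * sem:.1f} mg/kg "
      f"(exceeds guideline? {mean + 2 * sem > threshold})")

# Approach 2: uncertainty of each individual measurement
frac = np.mean(conc + u_meas > threshold)
print(f"{100 * frac:.0f}% of locations possibly exceed the guideline")
```

The area-mean interval can sit comfortably below the guideline while a large fraction of individual measurement intervals still straddle it, which is the tension the paper describes.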

  10. Uncertainty of measurement and clinical value of semen analysis: has standardisation through professional guidelines helped or hindered progress?

    Science.gov (United States)

    Tomlinson, M J

    2016-09-01

    This article suggests that diagnostic semen analysis has no more clinical value today than it had 25-30 years ago, and that both the confusion surrounding its evidence base (in terms of relationship with conception) and the low level of confidence in the clinical setting are attributable to an associated high level of 'uncertainty'. Consideration of the concept of measurement uncertainty is mandatory for medical laboratories applying for the ISO 15189 standard. It is evident that the entire semen analysis process is prone to error at every step, from specimen collection to the reporting of results, and this serves to compound the uncertainty associated with diagnosis or prognosis. Perceived adherence to published guidelines for the assessment of sperm concentration, motility and morphology does not guarantee a reliable and reproducible test result. Moreover, the high level of uncertainty associated with manual sperm motility and morphology assessment can be attributed to subjectivity and the lack of a traceable standard. This article describes where and why uncertainty exists and suggests that semen analysis will continue to be of limited value until uncertainty is more adequately considered and addressed. Although professional guidelines for good practice have provided the foundations for testing procedures for many years, the risk in following rather prescriptive guidance to the letter is that, unless it is based on an overwhelmingly firm evidence base, the quality of semen analysis will remain poor and progress towards the development of more innovative methods for investigating male infertility will be slow.

  11. Uncertainty analysis of the magnetic field measurement by the translating coil method in axisymmetric magnets

    Science.gov (United States)

    Arpaia, Pasquale; De Vito, Luca; Kazazi, Mario

    2016-12-01

    In the uncertainty assessment of magnetic flux measurements in axially symmetric magnets by the translating coil method, the Guide to the Expression of Uncertainty in Measurement and its supplement cannot be applied: the voltage variation at the coil terminals, which is the actual measured quantity, affects the flux estimate and its uncertainty. In this paper, a particle filter, implementing a sequential Monte Carlo method based on Bayesian inference, is applied. To this aim, the main uncertainty sources are analyzed and a model of the measurement process is defined. The results of the experimental validation identify the transport system and the acquisition system as the main contributors to the uncertainty budget.
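A minimal bootstrap particle filter, i.e. a sequential Monte Carlo method of the kind named in this record, can be sketched for a generic scalar state observed with noise; the state value, noise levels and measurement model below are placeholders, not the authors' coil-voltage model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bootstrap particle filter for a slowly varying flux-like state
# observed through noisy readings (all parameters are assumptions).
n_particles, n_steps = 2000, 50
true_state = 1.0
obs = true_state + rng.normal(0.0, 0.05, n_steps)   # synthetic measurements

particles = rng.normal(0.0, 1.0, n_particles)       # diffuse prior
for z in obs:
    particles += rng.normal(0.0, 0.01, n_particles)       # process noise
    w = np.exp(-0.5 * ((z - particles) / 0.05) ** 2)      # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)       # multinomial resampling
    particles = particles[idx]

est, u = particles.mean(), particles.std(ddof=1)
print(f"state estimate = {est:.3f} ± {u:.3f}")
```

The posterior spread of the surviving particles plays the role that the propagated standard uncertainty plays in the GUM framework.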

  12. Range and number-of-levels effects in derived and stated measures of attribute importance

    NARCIS (Netherlands)

    Verlegh, PWJ; Schifferstein, HNJ; Wittink, DR

    2002-01-01

    We study how the range of variation and the number of attribute levels affect five measures of attribute importance: full-profile conjoint estimates, ranges in attribute-level attractiveness ratings, regression coefficients, graded paired comparisons, and self-reported ratings. We find that all impor

  13. The concordance of directly and indirectly measured built environment attributes and physical activity adoption

    Directory of Open Access Journals (Sweden)

    O'Connor Daniel P

    2011-07-01

    Background: Physical activity (PA) adoption is essential for obesity prevention and control, yet ethnic minority women report lower levels of PA and are at higher risk for obesity and its comorbidities compared to Caucasians. Epidemiological studies and ecologic models of health behavior suggest that built environmental factors are associated with health behaviors like PA, but few studies have examined the association between built environment attribute concordance and PA, and no known studies have examined attribute concordance and PA adoption. Purpose: The purpose of this study was to associate the degree of concordance between directly and indirectly measured built environment attributes with changes in PA over time among African American and Hispanic Latina women participating in a PA intervention. Method: Women (N = 410) completed measures of PA at Time 1 (T1) and Time 2 (T2); environmental data collected at T1 were used to compute concordance between directly and indirectly measured built environment attributes. The association between changes in PA and the degree of concordance between each directly and indirectly measured environmental attribute was assessed using repeated measures analyses. Results: There were no significant associations between built environment attribute concordance values and change in self-reported or objectively measured PA. Self-reported PA significantly increased over time (F(1,184) = 7.82, p = .006), but this increase did not vary by ethnicity or any built environment attribute concordance variable. Conclusions: Built environment attribute concordance may not be associated with PA changes over time among minority women. In an effort to promote PA, investigators should clarify specific built environment attributes that are important for PA adoption and whether accurate perceptions of these attributes are necessary, particularly among the vulnerable population of minority women.

  14. Definition of free form object for low uncertainty measurements on coordinate measuring machines

    DEFF Research Database (Denmark)

    Savio, Enrico; De Chiffre, Leonardo

    This report is made as a part of the project Easytrac, an EU project under the programme Competitive and Sustainable Growth (Contract No. G6RD-CT-2000-00188), coordinated by UNIMETRIK S.A. (Spain). The project is concerned with low uncertainty calibrations on coordinate measuring machines. The Centre for Geometrical Metrology (CGM) at the Technical University of Denmark takes care of free form measurements, in collaboration with DIMEG, University of Padova, Italy. The present report describes the free form objects selected for the investigations on the uncertainty assessment procedures.

  15. Lung Cancer Attributable to Indoor Radon Exposure in France: Impact of the Risk Models and Uncertainty Analysis

    OpenAIRE

    Catelinois, Olivier; Rogel, Agnès; Laurier, Dominique; Billon, Solenne; Hemon, Denis; VERGER, Pierre; Tirmarche, Margot

    2006-01-01

    Objective The inhalation of radon, a well-established human carcinogen, is the principal—and omnipresent—source of radioactivity exposure for the general population of most countries. Scientists have thus sought to assess the lung cancer risk associated with indoor radon. Our aim here is to assess this risk in France, using all available epidemiologic results and performing an uncertainty analysis. Methods We examined the exposure–response relations derived from cohorts of miners and from joi...

  16. Velocity Correction and Measurement Uncertainty Analysis of Light Screen Velocity Measuring Method

    Institute of Scientific and Technical Information of China (English)

    ZHENG Bin; ZUO Zhao-lu; HOU Wen

    2012-01-01

    The light screen velocity measuring method, with its unique advantages, has been widely used to measure the velocity of various moving bodies. Because large moving bodies are subjected to considerable air resistance and friction during light screen velocity measurement, a principle of velocity correction was proposed and a velocity correction equation was derived. The light screen method was then used to measure the velocity of large moving bodies exhibiting complex velocity attenuation, and good results were obtained in practical tests. The measurement uncertainty after velocity correction was calculated.

  17. Reconsideration of the Uncertainty Relations and Quantum Measurements

    Directory of Open Access Journals (Sweden)

    Dumitru S.

    2008-04-01

    Discussions on uncertainty relations (UR) and quantum measurements (QMS) have persisted to the present day in publications about quantum mechanics (QM). They originate mainly from the conventional interpretation of UR (CIUR). Most of the QM literature underestimates the fact that, over the years, many deficiencies regarding CIUR have been signaled. As a rule, the alluded deficiencies were remarked disparately and discussed as punctual, non-essential questions. Here we investigate the mentioned deficiencies collected into a conclusive ensemble. Subsequently we expose a reconsideration of the major problems referring to UR and QMS. We reveal that all the basic presumptions of CIUR are troubled by insurmountable deficiencies which entail the indubitable failure of CIUR and its necessary abandonment. Therefore the UR must be deprived of their status as crucial pieces of physics. The original versions of UR thus appear as either (i) thought-experimental fictions or (ii) simple QM formulae, and any other versions of them have no connection with QMS. QMS must then be viewed as an additional subject relative to the usual questions of QM. For a theoretical description of QMS we propose an information-transmission model, in which the quantum observables are considered as random variables. Our approach leads to natural solutions and simplifications for many problems regarding UR and QMS.

  18. Guitar Chords Classification Using Uncertainty Measurements of Frequency Bins

    Directory of Open Access Journals (Sweden)

    Jesus Guerrero-Turrubiates

    2015-01-01

    This paper presents a method for chord classification from recorded audio. The signal harmonics are obtained using the Fast Fourier Transform, and timbral information is suppressed by spectral whitening. Multiple fundamental frequency estimation of the whitened data is achieved by adding attenuated harmonics through a weighting function. The paper proposes a feature-selection method based on thresholding the uncertainty of all frequency bins: measurements under the threshold are removed from the signal in the frequency domain. This allows a reduction of 95.53% of the signal characteristics, with the remaining 4.47% of frequency bins used as enhanced information for the classifier. An artificial neural network was used to classify four types of chords: major, minor, major 7th, and minor 7th. Played in each of the twelve musical notes, these give a total of 48 different chords. Two reference methods (based on Hidden Markov Models) were compared with the proposed method using the same database for the evaluation test. In most of the tests performed, the proposed method achieved a reasonably high performance, with an accuracy of 93%.
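The pipeline of FFT, crude spectral whitening and threshold-based bin selection can be sketched on a synthetic three-tone "chord"; the sample rate, whitening window and quantile below are assumptions chosen only to mimic the reported ~95.5% reduction, not the paper's actual parameters:

```python
import numpy as np

fs = 8000                        # sample rate (Hz), assumed; 1 s of audio
t = np.arange(fs) / fs
# Synthetic "chord": three superposed tones (roughly C4, E4, G4) plus noise
signal = sum(np.sin(2 * np.pi * f * t) for f in (261.6, 329.6, 392.0))
signal = signal + 0.1 * np.random.default_rng(0).normal(size=fs)

mag = np.abs(np.fft.rfft(signal))
# Crude spectral whitening: divide by a moving-average spectral envelope
env = np.convolve(mag, np.ones(51) / 51, mode="same")
white = mag / (env + 1e-9)

# Feature selection by thresholding: keep only the strongest bins
thresh = np.quantile(white, 0.955)          # ~95.5% of bins discarded
keep = np.where(white > thresh)[0]
print(f"kept {keep.size} of {white.size} bins "
      f"({100 * keep.size / white.size:.1f}%)")
```

With 1 Hz bin spacing, the surviving bins cluster around the chord's fundamentals, which is the information the classifier would consume.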

  19. Dynamic measurements and uncertainty estimation of clinical thermometers using Monte Carlo method

    Science.gov (United States)

    Ogorevc, Jaka; Bojkovski, Jovan; Pušnik, Igor; Drnovšek, Janko

    2016-09-01

    Clinical thermometers in intensive care units are used for the continuous measurement of body temperature. This study describes a procedure for dynamic measurement uncertainty evaluation in order to examine the requirements for clinical thermometer dynamic properties in standards and recommendations. In this study thermistors were used as temperature sensors, transient temperature measurements were performed in water and air and the measurement data were processed for the investigation of thermometer dynamic properties. The thermometers were mathematically modelled. A Monte Carlo method was implemented for dynamic measurement uncertainty evaluation. The measurement uncertainty was analysed for static and dynamic conditions. Results showed that dynamic uncertainty is much larger than steady-state uncertainty. The results of dynamic uncertainty analysis were applied on an example of clinical measurements and were compared to current requirements in ISO standard for clinical thermometers. It can be concluded that there was no need for dynamic evaluation of clinical thermometers for continuous measurement, while dynamic measurement uncertainty was within the demands of target uncertainty. Whereas in the case of intermittent predictive thermometers, the thermometer dynamic properties had a significant impact on the measurement result. Estimation of dynamic uncertainty is crucial for the assurance of traceable and comparable measurements.
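The gap between dynamic and steady-state uncertainty reported here can be illustrated with a Monte Carlo simulation of a first-order sensor model; the time constant, its uncertainty and the temperatures below are invented, not the study's thermistor data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_mc = 5000
t = np.linspace(0.0, 30.0, 301)            # time (s)
T0, T_body = 25.0, 37.0                    # initial and bath temperature (°C), assumed

# First-order sensor response: T(t) = T_body + (T0 - T_body) * exp(-t / tau),
# with Monte Carlo sampling over an uncertain time constant tau (5 s ± 0.5 s).
tau = rng.normal(5.0, 0.5, n_mc)
T = T_body + (T0 - T_body) * np.exp(-t[None, :] / tau[:, None])

# Dynamic uncertainty: spread of the simulated readings at each instant
u_dyn = T.std(axis=0, ddof=1)
print(f"max dynamic uncertainty {u_dyn.max():.2f} °C "
      f"at t = {t[u_dyn.argmax()]:.1f} s")
print(f"near-steady-state uncertainty {u_dyn[-1]:.4f} °C")
```

The spread peaks during the transient (around t ≈ tau) and decays towards the steady state, mirroring the study's finding that dynamic uncertainty dominates the static contribution.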

  20. Measurement Uncertainty Budget of the PMV Thermal Comfort Equation

    Science.gov (United States)

    Ekici, Can

    2016-05-01

    Fanger's predicted mean vote (PMV) equation combines the quantitative effects of air temperature, mean radiant temperature, air velocity, humidity, activity level, and clothing thermal resistance. PMV is a mathematical model of thermal comfort developed by Fanger. In this study, the uncertainty budget of the PMV equation was developed according to the GUM. An example uncertainty model for PMV is given in the exemplification section of the study. Sensitivity coefficients were derived from the PMV equation, and the resulting uncertainty budgets can be seen in the tables. A mathematical model of the sensitivity coefficients of T_a, h_c, T_{mrt}, T_{cl}, and P_a is given, together with the uncertainty budgets for h_c, T_{cl}, and P_a.

  1. Estimation of the uncertainty of analyte concentration from the measurement uncertainty.

    Science.gov (United States)

    Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F

    2015-09-01

    Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
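The simulation idea can be sketched with a four-parameter logistic (4PL) curve: repeated responses are drawn around the true response and back-calculated to concentrations, whose spread gives the coefficient of variation. All parameter values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: response as a function of concentration."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_4pl(y, a, b, c, d):
    """Concentration back-calculated from a measured response."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Assumed curve parameters and response uncertainty (illustrative only)
a, b, c, d = 2.0, 1.5, 100.0, 0.05
true_conc = 80.0
y_true = four_pl(true_conc, a, b, c, d)

# Simulate repeated response readings and back-calculate concentrations
y_obs = rng.normal(y_true, 0.02, 10_000)
conc = inverse_4pl(y_obs, a, b, c, d)

cv = 100 * conc.std(ddof=1) / conc.mean()
print(f"back-calculated concentration CV ≈ {cv:.1f}%")
```

Repeating this at several concentrations traces out a precision profile; the CV grows sharply at the flat extremes of the standard curve, as the abstract notes.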

  2. Measurement Issues for Energy Efficient Commercial Buildings: Productivity and Performance Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Jones, D.W.

    2002-05-16

    In previous reports, we have identified two potentially important issues, solutions to which would increase the attractiveness of DOE-developed technologies in commercial buildings energy systems. One issue concerns the fact that in addition to saving energy, many new technologies offer non-energy benefits that contribute to building productivity (firm profitability). The second issue is that new technologies are typically unproven in the eyes of decision makers and must bear risk premiums that offset cost advantages resulting from laboratory calculations. Even though a compelling case can be made for the importance of these issues, for building decision makers to incorporate them in business decisions and for DOE to use them in R&D program planning there must be robust empirical evidence of their existence and size. This paper investigates how such measurements could be made and offers recommendations as to preferred options. There is currently little systematic information on either of these concepts in the literature. Of the two there is somewhat more information on non-energy benefits, but little as regards office buildings. Office building productivity impacts can be observed casually, but must be estimated statistically, because buildings have many interacting attributes and observations based on direct behavior can easily confuse the process of attribution. For example, absenteeism can be easily observed. However, absenteeism may be down because a more healthy space conditioning system was put into place, because the weather was milder, or because firm policy regarding sick days had changed. There is also a general dearth of appropriate information for purposes of estimation. To overcome these difficulties, we propose developing a new data base and applying the technique of hedonic price analysis. This technique has been used extensively in the analysis of residential dwellings. There is also a literature on its application to commercial and industrial

  3. The uncertainty in physical measurements an introduction to data analysis in the physics laboratory

    CERN Document Server

    Fornasini, Paolo

    2008-01-01

    All measurements of physical quantities are affected by uncertainty. Understanding the origin of uncertainty, evaluating its extent and suitably taking it into account in data analysis is essential for assessing the degree of accuracy of phenomenological relationships and physical laws in both scientific research and technological applications. The Uncertainty in Physical Measurements: An Introduction to Data Analysis in the Physics Laboratory presents an introduction to uncertainty and to some of the most common procedures of data analysis. This book will serve the reader well by filling the gap between tutorial textbooks and highly specialized monographs. The book is divided into three parts. The first part is a phenomenological introduction to measurement and uncertainty: properties of instruments, different causes and corresponding expressions of uncertainty, histograms and distributions, and unified expression of uncertainty. The second part contains an introduction to probability theory, random variable...

  4. Total Measurement Uncertainty for the Plutonium Finishing Plant (PFP) Segmented Gamma Scan Assay System

    CERN Document Server

    Fazzari, D M

    2001-01-01

    This report presents the results of an evaluation of the Total Measurement Uncertainty (TMU) for the Canberra manufactured Segmented Gamma Scanner Assay System (SGSAS) as employed at the Hanford Plutonium Finishing Plant (PFP). In this document, TMU embodies the combined uncertainties due to all of the individual random and systematic sources of measurement uncertainty. It includes uncertainties arising from corrections and factors applied to the analysis of transuranic waste to compensate for inhomogeneities and interferences from the waste matrix and radioactive components. These include uncertainty components for any assumptions contained in the calibration of the system or computation of the data. Uncertainties are propagated at 1 sigma. The final total measurement uncertainty value is reported at the 95% confidence level. The SGSAS is a gamma assay system that is used to assay plutonium and uranium waste. The SGSAS system can be used in a stand-alone mode to perform the NDA characterization of a containe...
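The roll-up described here, combining independent components at 1 sigma and reporting at the 95% confidence level, can be sketched as follows; the component names and magnitudes are illustrative assumptions, not the actual SGSAS budget:

```python
import math

# Illustrative TMU roll-up: combine independent 1-sigma components in
# quadrature, then expand to ~95% confidence with coverage factor k = 2.
components = {
    "counting statistics":    0.021,   # relative uncertainties, 1 sigma
    "calibration":            0.035,
    "matrix correction":      0.050,
    "geometry/inhomogeneity": 0.040,
}

u_c = math.sqrt(sum(u ** 2 for u in components.values()))  # combined, 1 sigma
U_95 = 2.0 * u_c                                           # expanded, k = 2

print(f"combined 1-sigma uncertainty: {100 * u_c:.1f}%")
print(f"expanded (~95%) uncertainty:  {100 * U_95:.1f}%")
```

Any correlated components would instead need covariance terms in the quadrature sum; the sketch assumes independence.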

  5. Uncertainty Analysis of Certified Photovoltaic Measurements at the National Renewable Energy Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Emery, K.

    2009-08-01

    Discusses NREL Photovoltaic Cell and Module Performance Characterization Group's procedures to achieve lowest practical uncertainty in measuring PV performance with respect to reference conditions.

  6. Task committee on experimental uncertainty and measurement errors in hydraulic engineering: An update

    Science.gov (United States)

    Wahlin, B.; Wahl, T.; Gonzalez-Castro, J. A.; Fulford, J.; Robeson, M.

    2005-01-01

    As part of their long range goals for disseminating information on measurement techniques, instrumentation, and experimentation in the field of hydraulics, the Technical Committee on Hydraulic Measurements and Experimentation formed the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering in January 2003. The overall mission of this Task Committee is to provide information and guidance on the current practices used for describing and quantifying measurement errors and experimental uncertainty in hydraulic engineering and experimental hydraulics. The final goal of the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering is to produce a report on the subject that will cover: (1) sources of error in hydraulic measurements, (2) types of experimental uncertainty, (3) procedures for quantifying error and uncertainty, and (4) special practical applications that range from uncertainty analysis for planning an experiment to estimating uncertainty in flow monitoring at gaging sites and hydraulic structures. Currently, the Task Committee has adopted the first order variance estimation method outlined by Coleman and Steele as the basic methodology to follow when assessing the uncertainty in hydraulic measurements. In addition, the Task Committee has begun to develop its report on uncertainty in hydraulic engineering. This paper is intended as an update on the Task Committee's overall progress. Copyright ASCE 2005.
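The first-order variance estimation method of Coleman and Steele adopted by the Task Committee propagates input uncertainties through analytical sensitivity coefficients. A sketch for a rectangular-weir discharge formula, with all numeric values invented:

```python
import math

# First-order (Taylor series) variance estimation:
# u_Q^2 = sum_i (dQ/dx_i * u_xi)^2 for independent inputs x_i.
# Example model: rectangular weir Q = C * L * H**1.5 (values assumed).
C, u_C = 1.84, 0.02      # discharge coefficient
L, u_L = 2.00, 0.005     # crest length (m)
H, u_H = 0.30, 0.002     # measured head (m)

Q = C * L * H ** 1.5

# Analytical sensitivity coefficients
dQ_dC = L * H ** 1.5
dQ_dL = C * H ** 1.5
dQ_dH = 1.5 * C * L * H ** 0.5

u_Q = math.sqrt((dQ_dC * u_C) ** 2 + (dQ_dL * u_L) ** 2 + (dQ_dH * u_H) ** 2)
print(f"Q = {Q:.4f} m^3/s, u(Q) = {u_Q:.4f} m^3/s ({100 * u_Q / Q:.1f}%)")
```

The head term dominates the budget here because of the 1.5 exponent, which is exactly the kind of insight a planning-stage uncertainty analysis is meant to provide.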

  7. CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, Rolf; Paget, Maria L.; Richman, Eric E.

    2011-03-31

    With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, Lighting Facts Label, ENERGY STAR® energy-efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement, in individual as well as combined ways. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements for light-emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed and step-by-step method for determining uncertainty in lumen measurements, working closely with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. The steps of the procedure are described and a spreadsheet format adapted for integrating sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate

  8. Developing scales measuring disorder-specific intolerance of uncertainty (DSIU) : a new perspective on transdiagnostic

    NARCIS (Netherlands)

    Thibodeau, Michel A; Carleton, R Nicholas; McEvoy, Peter M; Zvolensky, Michael J; Brandt, Charles P; Boelen, Paul A; Mahoney, Alison E J; Deacon, Brett J; Asmundson, Gordon J G

    2015-01-01

    Intolerance of uncertainty (IU) is a construct of growing prominence in literature on anxiety disorders and major depressive disorder. Existing measures of IU do not define the uncertainty that respondents perceive as distressing. To address this limitation, we developed eight scales measuring disor

  9. Applications of explicitly-incorporated/post-processing measurement uncertainty in watershed modeling

    Science.gov (United States)

    The importance of measurement uncertainty in terms of calculation of model evaluation error statistics has been recently stated in the literature. The impact of measurement uncertainty on calibration results indicates the potential vague zone in the field of watershed modeling where the assumption ...

  10. Total uncertainty of low velocity thermal anemometers for measurement of indoor air movements

    DEFF Research Database (Denmark)

    Jørgensen, F.; Popiolek, Z.; Melikov, Arsen Krikor

    2004-01-01

    For a specific thermal anemometer with omnidirectional velocity sensor the expanded total uncertainty in measured mean velocity Û(Vmean) and the expanded total uncertainty in measured turbulence intensity Û(Tu) due to different error sources are estimated. The values are based on a previously dev...

  11. Estimation of measurement uncertainties in X-ray computed tomography metrology using the substitution method

    DEFF Research Database (Denmark)

    Müller, Pavel; Hiller, Jochen; Dai, Y.;

    2014-01-01

    This paper presents the application of the substitution method for the estimation of measurement uncertainties using calibrated workpieces in X-ray computed tomography (CT) metrology. We have shown that this, well accepted method for uncertainty estimation using tactile coordinate measuring...

  12. Measuring diversity in medical reports based on categorized attributes and international classification systems

    Directory of Open Access Journals (Sweden)

    Přečková Petra

    2012-04-01

    Background: Narrative medical reports do not use standardized terminology and often provide insufficient information for statistical processing and medical decision making. The objectives of the paper are to propose a method for measuring diversity in medical reports written in any language, to compare diversities in narrative and structured medical reports, and to map attributes and terms to selected classification systems. Methods: A new method based on a general concept of f-diversity is proposed for measuring the diversity of medical reports in any language. The method is based on categorized attributes recorded in narrative or structured medical reports and on international classification systems. Values of categories are expressed by terms. Using SNOMED CT and ICD-10 we map attributes and terms to predefined codes. We use f-diversities of the Gini-Simpson and Number of Categories types to compare diversities of narrative and structured medical reports. The comparison is based on attributes selected from the Minimal Data Model for Cardiology (MDMC). Results: We compared diversities of 110 Czech narrative medical reports and 1119 Czech structured medical reports. Selected categorized attributes of MDMC mostly had different numbers of categories and used different terms in narrative and structured reports. We found more than 60% of MDMC attributes in SNOMED CT. We showed that attributes in narrative medical reports had greater diversity than the same attributes in structured medical reports. Further, we replaced each value of category (term) used for attributes in narrative medical reports by the closest term and category used in MDMC for structured medical reports. We found that relative Gini-Simpson diversities in structured medical reports were significantly smaller than those in narrative medical reports, except for the "Allergy" attribute. Conclusions: Terminology in narrative medical reports is not standardized. Therefore it is nearly
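The Gini-Simpson type of f-diversity used in this record is simple to compute; the hypothetical example below contrasts free-text terms with ICD-10-style codes for a single attribute (all values invented):

```python
from collections import Counter

def gini_simpson(terms):
    """Gini-Simpson diversity: 1 - sum(p_i^2) over category frequencies."""
    counts = Counter(terms)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical recordings of one attribute, as free text vs. coded values
narrative = ["MI", "myocardial infarction", "heart attack", "MI", "infarct"]
structured = ["I21", "I21", "I21", "I21", "I22"]   # ICD-10-style codes

print(f"narrative  diversity: {gini_simpson(narrative):.2f}")
print(f"structured diversity: {gini_simpson(structured):.2f}")
```

The free-text attribute spreads probability mass over many near-synonymous terms, so its Gini-Simpson diversity is higher than that of the coded attribute, which is the paper's central observation.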

  13. Measurement uncertainty of ester number, acid number and patchouli alcohol of patchouli oil produced in Yogyakarta

    Science.gov (United States)

    Istiningrum, Reni Banowati; Saepuloh, Azis; Jannah, Wirdatul; Aji, Didit Waskito

    2017-03-01

    Yogyakarta is one of the patchouli oil distillation centers in Indonesia. The quality of patchouli oil greatly affects its market price. Testing patchouli oil quality parameters is therefore an important concern, in part through determination of the measurement uncertainty. This study determines the measurement uncertainty of the ester number, acid number and patchouli alcohol content through a bottom-up approach. Contributors to the measurement uncertainty of the ester number are the mass of the sample, the blank and sample titration volumes, the molar mass of KOH, the HCl normality, and replication. Contributors to the measurement uncertainty of the acid number are the mass of the sample, the sample titration volume, the relative mass and normality of KOH, and repetition. Determination of patchouli alcohol by gas chromatography considers only repeatability as a source of measurement uncertainty, because reference materials are not available.
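A bottom-up budget for the acid number can be sketched from its defining formula AN = 56.1·V·N/m (V: titrant volume in mL, N: KOH normality, m: sample mass in g); every numeric value below is an illustrative assumption, not the paper's data:

```python
import math

# Bottom-up uncertainty budget for acid number AN = 56.1 * V * N / m.
# All input values and standard uncertainties are assumed for illustration.
V, u_V = 1.20, 0.02      # sample titration volume (mL)
N, u_N = 0.10, 0.001     # KOH normality (mol/L)
m, u_m = 5.000, 0.002    # sample mass (g)

AN = 56.1 * V * N / m

# For a pure product/quotient model, relative standard uncertainties
# combine in quadrature.
rel_u = math.sqrt((u_V / V) ** 2 + (u_N / N) ** 2 + (u_m / m) ** 2)
u_AN = AN * rel_u
print(f"AN = {AN:.3f} ± {2 * u_AN:.3f} mg KOH/g (k = 2)")
```

A repeatability component from replicate titrations would be added to the quadrature sum in the same way; it is omitted here for brevity.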

  14. Comparison of different methods to estimate the uncertainty in composition measurement by chromatography.

    Science.gov (United States)

    Ariza, Adriana Alexandra Aparicio; Ayala Blanco, Elizabeth; García Sánchez, Luis Eduardo; García Sánchez, Carlos Eduardo

    2015-06-01

    Natural gas is a mixture that contains hydrocarbons and other compounds, such as CO2 and N2. Natural gas composition is commonly measured by gas chromatography, and this measurement is important for the calculation of some thermodynamic properties that determine its commercial value. The estimation of uncertainty in chromatographic measurement is essential for an adequate presentation of the results and a necessary tool for supporting decision making. Various approaches have been proposed for the uncertainty estimation in chromatographic measurement. The present work is an evaluation of three approaches of uncertainty estimation, where two of them (guide to the expression of uncertainty in measurement method and prediction method) were compared with the Monte Carlo method, which has a wider scope of application. The aforementioned methods for uncertainty estimation were applied to gas chromatography assays of three different samples of natural gas. The results indicated that the prediction method and the guide to the expression of uncertainty in measurement method (in the simple version used) are not adequate to calculate the uncertainty in chromatography measurement, because uncertainty estimations obtained by those approaches are in general lower than those given by the Monte Carlo method.
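The reported shortfall of first-order GUM propagation relative to Monte Carlo can be reproduced on a toy nonlinear model; y = x² stands in for the chromatographic calculation, and the numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Why first-order GUM propagation can understate uncertainty:
# compare it with Monte Carlo on the nonlinear model y = x**2.
x0, u_x = 1.0, 0.3          # input estimate and its standard uncertainty

# GUM first-order: u(y) = |dy/dx| * u(x) = 2 * x0 * u_x
u_gum = 2 * x0 * u_x

# Monte Carlo: propagate the full input distribution
x = rng.normal(x0, u_x, 200_000)
u_mc = (x ** 2).std(ddof=1)

print(f"GUM first-order u(y) = {u_gum:.3f}")
print(f"Monte Carlo     u(y) = {u_mc:.3f}")
```

The Monte Carlo estimate exceeds the first-order value because the neglected curvature term 2·sigma⁴ contributes to the true variance, a small-scale analogue of the discrepancy the paper reports.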

  15. Uncertainty in Citizen Science observations: from measurement to user perception

    Science.gov (United States)

    Lahoz, William; Schneider, Philipp; Castell, Nuria

    2016-04-01

    Citizen Science activities concern general public engagement in scientific research activities when citizens actively contribute to science either with their intellectual effort or surrounding knowledge or with their tools and resources. The advent of technologies such as the Internet and smartphones, and the growth in their usage, has significantly increased the potential benefits from Citizen Science activities. Citizen Science observations from low-cost sensors, smartphones and Citizen Observatories, provide a novel and recent development in platforms for observing the Earth System, with the opportunity to extend the range of observational platforms available to society to spatio-temporal scales (10-100s m; 1 hr or less) highly relevant to citizen needs. The potential value of Citizen Science is high, with applications in science, education, social aspects, and policy aspects, but this potential, particularly for citizens and policymakers, remains largely untapped. Key areas where Citizen Science data start to have demonstrable benefits include GEOSS Societal Benefit Areas such as Health and Weather. Citizen Science observations have many challenges, including simulation of smaller spatial scales, noisy data, combination with traditional observational methods (satellite and in situ data), and assessment, representation and visualization of uncertainty. Within these challenges, that of the assessment and representation of uncertainty and its communication to users is fundamental, as it provides qualitative and/or quantitative information that influences the belief users will have in environmental information. This presentation will discuss the challenges in assessment and representation of uncertainty in Citizen Science observations, its communication to users, including the use of visualization, and the perception of this uncertainty information by users of Citizen Science observations.

  16. Single Valued Neutrosophic Similarity Measures for Multiple Attribute Decision-Making

    Directory of Open Access Journals (Sweden)

    Jun Ye

    2014-03-01

    Full Text Available Similarity measures play an important role in data mining, pattern recognition, decision making, machine learning, image processing, etc. Single valued neutrosophic sets (SVNSs) can describe and handle indeterminate and inconsistent information, which fuzzy sets and intuitionistic fuzzy sets cannot. Therefore, the paper proposes new similarity measures between SVNSs based on the minimum and maximum operators. A multiple attribute decision-making method based on the weighted similarity measure of SVNSs is then established, in which attribute values for alternatives are represented as single valued neutrosophic values (SVNVs), and the attribute weights and the weights of the three independent elements of an SVNV (i.e., truth-membership degree, indeterminacy-membership degree, and falsity-membership degree) are considered in the decision-making method. In the decision making, we utilize the single-valued neutrosophic weighted similarity measure between the ideal alternative and each alternative to rank the alternatives according to the measure values and to select the most desirable one(s). Finally, two practical examples are provided to demonstrate the applications and effectiveness of the single valued neutrosophic multiple attribute decision-making method.
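A minimal sketch of a min/max-ratio similarity measure of the kind the abstract describes. The exact weighting used in the paper may differ, and the SVNV triples below are hypothetical:

```python
def svns_similarity(A, B):
    """Similarity between two single valued neutrosophic sets, each a list of
    (truth, indeterminacy, falsity) triples, using a min/max ratio per element
    (with the convention 0/0 = 1). Returns a value in [0, 1]."""
    def ratio(x, y):
        hi = max(x, y)
        return 1.0 if hi == 0 else min(x, y) / hi
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        total += ratio(ta, tb) + ratio(ia, ib) + ratio(fa, fb)
    return total / (3 * len(A))

# Hypothetical ideal alternative and two candidate alternatives over 2 attributes
ideal = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
alt1  = [(0.7, 0.1, 0.2), (0.6, 0.2, 0.3)]
alt2  = [(0.9, 0.1, 0.1), (0.8, 0.1, 0.2)]
print(svns_similarity(ideal, alt1), svns_similarity(ideal, alt2))
```

Ranking alternatives by their similarity to the ideal alternative, alt2 scores higher here, matching the intuition that it is closer to full truth-membership.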

  17. Real Graphs from Real Data: Experiencing the Concepts of Measurement and Uncertainty

    Science.gov (United States)

    Farmer, Stuart

    2012-01-01

    A simple activity using cheap and readily available materials is described that allows students to experience first hand many of the concepts of measurement, uncertainty and graph drawing without laborious measuring or calculation. (Contains 9 figures.)

  18. The method of translation additive and multiplicative error in the instrumental component of the measurement uncertainty

    Science.gov (United States)

    Vasilevskyi, Olexander M.; Kucheruk, Volodymyr Y.; Bogachuk, Volodymyr V.; Gromaszek, Konrad; Wójcik, Waldemar; Smailova, Saule; Askarova, Nursanat

    2016-09-01

    The paper proposes a method for converting additive and multiplicative errors into the instrumental component of measurement uncertainty; the mathematical models are obtained by a Taylor expansion of the transformation equations of the measuring instruments used.

  19. Uncertainty of measurement for large product verification: evaluation of large aero gas turbine engine datums

    Science.gov (United States)

    Muelaner, J. E.; Wang, Z.; Keogh, P. S.; Brownell, J.; Fisher, D.

    2016-11-01

    Understanding the uncertainty of dimensional measurements for large products such as aircraft, spacecraft and wind turbines is fundamental to improving efficiency in these products. Much work has been done to ascertain the uncertainty associated with the main types of instruments used, based on laser tracking and photogrammetry, and the propagation of this uncertainty through networked measurements. Unfortunately this is not sufficient to understand the combined uncertainty of industrial measurements, which include secondary tooling and datum structures used to locate the coordinate frame. This paper presents for the first time a complete evaluation of the uncertainty of large-scale industrial measurement processes. Generic analysis and design rules are proven through uncertainty evaluation and optimization for the measurement of a large aero gas turbine engine, showing how the instrument uncertainty can be considered negligible. Before optimization the dominant source of uncertainty was the tooling design; after optimization it was thermal expansion of the engine, meaning that no further improvement can be made without measurement in a temperature-controlled environment. These results will have a significant impact on the ability of aircraft and wind turbines to improve efficiency and therefore reduce carbon emissions, as well as on the reliability of these products.

  20. Quantifying measurement uncertainty in full-scale compost piles using organic micro-pollutant concentrations.

    Science.gov (United States)

    Sadef, Yumna; Poulsen, Tjalfe G; Bester, Kai

    2014-05-01

    Reductions in measurement uncertainty for organic micro-pollutant concentrations in full scale compost piles using comprehensive sampling and allowing equilibration time before sampling were quantified. Results showed that both application of a comprehensive sampling procedure (involving sample crushing) and allowing one week of equilibration time before sampling reduces measurement uncertainty by about 50%. Results further showed that for measurements carried out on samples collected using a comprehensive procedure, measurement uncertainty was associated exclusively with the analytic methods applied. Application of statistical analyses confirmed that these results were significant at the 95% confidence level. Overall implications of these results are (1) that it is possible to eliminate uncertainty associated with material inhomogeneity and (2) that in order to reduce uncertainty, sampling procedure is very important early in the composting process but less so later in the process.

  1. Centrality measures in networks based on nodes attributes, long-range interactions and group influence

    CERN Document Server

    Aleskerov, F; Shvydun, S

    2016-01-01

    We propose a new method for assessing agents' influence in network structures which takes into consideration node attributes, individual and group influence of nodes, and the intensity of interactions. This approach helps us to identify both explicit and hidden central elements which cannot be detected by classical centrality measures or other indices.

  2. Measurement uncertainties in the quantum formalism: quasi-realities of individual systems

    CERN Document Server

    Hofmann, Holger F

    2012-01-01

    The evaluation of uncertainties in quantum measurements is problematic since the correct value of an observable between state preparation and measurement is experimentally inaccessible. In Ozawa's formulation of uncertainty relations for quantum measurements, the correct value of an observable is represented by the operator of that observable. Here, I consider the implications of this operator-based assignment of values to individual systems and discuss the relation with weak values and weak measurement statistics.

  3. Measuring the performance of sensors that report uncertainty

    CERN Document Server

    Martin, A D; Parry, M

    2014-01-01

    We provide methods to validate and compare sensor outputs, or inference algorithms applied to sensor data, by adapting statistical scoring rules. The reported output should either be in the form of a prediction interval or of a parameter estimate with corresponding uncertainty. Using knowledge of the `true' parameter values, scoring rules provide a method of ranking different sensors or algorithms for accuracy and precision. As an example, we apply the scoring rules to the inferred masses of cattle from ground force data and draw conclusions on which rules are most meaningful and in which way.
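One widely used scoring rule for reported prediction intervals is the interval score, which rewards narrow intervals that cover the true value and penalizes misses. A minimal sketch, with invented sensor outputs (the paper's specific rules may differ):

```python
def interval_score(lower, upper, x, alpha=0.05):
    """Interval score for a central (1 - alpha) prediction interval.
    Lower is better: width is penalized, and misses are penalized by 2/alpha."""
    score = upper - lower
    if x < lower:
        score += (2.0 / alpha) * (lower - x)
    elif x > upper:
        score += (2.0 / alpha) * (x - upper)
    return score

# Two hypothetical sensors reporting 95% intervals for a true cattle mass of 512 kg
truth = 512.0
sensor_a = (505.0, 520.0)   # wide interval that covers the truth
sensor_b = (514.0, 516.0)   # narrow interval that misses it
print(interval_score(*sensor_a, truth), interval_score(*sensor_b, truth))
```

The overconfident sensor receives the worse (larger) score, which is exactly the behavior needed to rank sensors on both accuracy and honesty of their reported uncertainty.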

  4. AVNG SYSTEM SOFTWARE - ATTRIBUTE VERIFICATION SYSTEM WITH INFORMATION BARRIERS FOR MASS AND ISOTOPICS MEASUREMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Modenov, A; Bulatov, M; Livke, A; Morkin, A; Razinkov, S; Safronov, S; Elmont, T; Langner, D; MacArthur, D; Mayo, D; Smith, M; Luke, S J

    2005-06-10

    This report describes the software development for the plutonium attribute verification system--AVNG. A brief synopsis of the technical solution for the measurement system is presented. The main tasks for the software development that is underway are formulated. The development tasks are shown in software structural flowcharts, measurement system state diagram and a description of the software. The current status of the AVNG software development is elucidated.

  5. On uncertainty of measurement

    Institute of Scientific and Technical Information of China (English)

    孙建文

    2012-01-01

    The paper introduces the concept of measurement uncertainty and, drawing on the relevant standards, sets out specific requirements for it. It explains how measurement uncertainty is determined, including identifying the sources of uncertainty, building a model of the measurement process, and evaluating each standard uncertainty component in turn, so as to guide practice.

  6. Total error vs. measurement uncertainty: revolution or evolution?

    Science.gov (United States)

    Oosterhuis, Wytze P; Theodorsson, Elvar

    2016-02-01

    The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory critical of the results and intentions of the Milan 2014 conference. The "total error" theory originated by Jim Westgard and co-workers has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and conceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.

  7. Liquid Crystal Thermography Measurement Uncertainty Analysis and Its Application to Turbulent Heat Transfer Measurements

    Directory of Open Access Journals (Sweden)

    Yu Rao

    2012-01-01

    Full Text Available Liquid crystal thermography is an advanced nonintrusive measurement technique, which is capable of providing a high-accuracy continuous temperature field measurement, especially for a complex structured heat transfer surface. The first part of the paper presents a comprehensive introduction to the thermochromic liquid crystal material and the related liquid crystal thermography technique. Then, based on the authors' experiences in using liquid crystal thermography for heat transfer measurement, the parameters affecting the measurement uncertainty of the liquid crystal thermography are discussed in detail through an experimental study. The final part of the paper describes the applications of the steady and transient liquid crystal thermography techniques in the study of turbulent flow heat transfer related to aeroengine turbine blade cooling.

  8. Uncertainty analysis of steady state incident heat flux measurements in hydrocarbon fuel fires.

    Energy Technology Data Exchange (ETDEWEB)

    Nakos, James Thomas

    2005-12-01

    The objective of this report is to develop uncertainty estimates for three heat flux measurement techniques used for the measurement of incident heat flux in a combined radiative and convective environment. This is related to the measurement of heat flux to objects placed inside hydrocarbon fuel (diesel, JP-8 jet fuel) fires, which is very difficult to make accurately (e.g., to less than 10%). Three methods are discussed: a Schmidt-Boelter heat flux gage; a calorimeter and inverse heat conduction method; and a thin plate and energy balance method. Steady state uncertainties were estimated for two types of fires (i.e., calm wind and high winds) at three times (early in the fire, late in the fire, and at an intermediate time). Results showed a large uncertainty for all three methods. Typical uncertainties for a Schmidt-Boelter gage ranged from ±23% for high wind fires to ±39% for low wind fires. For the calorimeter/inverse method the uncertainties were ±25% to ±40%. For the thin plate/energy balance method the uncertainties ranged from ±21% to ±42%. The 23-39% uncertainties for the Schmidt-Boelter gage are much larger than the quoted uncertainty for a radiative-only environment (i.e., ±3%). This large difference is due to the convective contribution and because the gage sensitivities to radiative and convective environments are not equal. All these values are larger than desired, which suggests the need for improvements in heat flux measurements in fires.

  9. Measurement Uncertainty Evaluation in Dimensional X-ray Computed Tomography Using the Bootstrap Method

    DEFF Research Database (Denmark)

    Hiller, Jochen; Genta, Gianfranco; Barbato, Giulio

    2014-01-01

    Industrial applications of computed tomography (CT) for dimensional metrology on various components are fast increasing, owing to a number of favorable properties such as capability of non-destructive internal measurements. Uncertainty evaluation is however more complex than in conventional...... the problem concerning measurement uncertainties was addressed with bootstrap and successfully applied to ball-bar CT measurements. Results obtained enabled extension to more complex shapes such as actual industrial components as we show by tests on a hollow cylinder workpiece....

  10. Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty

    Science.gov (United States)

    Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.

  11. Non-commitment Entropy: A Novel Modality for Uncertainty Measurement

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    Three-way decision rules extend traditional two-way decisions. In a real environment, a decision maker cannot easily choose between acceptance and rejection when the information is uncertain or incomplete. In this case, people tend to choose a three-way decision for uncertain and high-risk decisions, at extra but necessary cost. Meanwhile, some general uncertainty measures have been proposed by generalizing Shannon's entropy, and the theory of information entropy makes uncertainty measures more accurate on the boundary region of three-way decisions. In this paper, we propose several types of non-commitment entropy using the relation of the 'third' decision, non-commitment, and employ the proposed model to evaluate the significance of attributes for classification as well.

  12. Alternative risk measure for decision-making under uncertainty in water management

    Institute of Scientific and Technical Information of China (English)

    Yueping Xu; YeouKoung Tung; Jia Li; Shaofeng Niu

    2009-01-01

    Taking into account uncertainties in water management remains a challenge due to social, economic and environmental changes. Often, uncertainty creates difficulty in ranking or comparing multiple water management options, possibly leading to a wrong decision. In this paper, an alternative risk measure is proposed to facilitate the ranking or comparison of water management options under uncertainty by using the concepts of conditional expected loss and partial mean. This measure has the advantages of being more intuitive and general, and it can be related to many other measures of risk in the literature. The application of the risk measure is demonstrated through a case study on the evaluation of flood mitigation projects. The results show that the new measure is applicable to a general decision-making process under uncertainty.
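A minimal sketch of the two tail-risk concepts named in the abstract, conditional expected loss and partial mean, applied to invented flood-damage scenarios (the paper's exact formulation may differ):

```python
def conditional_expected_loss(losses, threshold):
    """Mean loss given that the loss exceeds the threshold."""
    exceed = [l for l in losses if l > threshold]
    return sum(exceed) / len(exceed) if exceed else 0.0

def partial_mean(losses, threshold):
    """Probability-weighted tail loss: expectation of the losses above the threshold."""
    return sum(l for l in losses if l > threshold) / len(losses)

# Hypothetical flood-damage outcomes (monetary loss per equally likely scenario)
project_a = [0, 0, 5, 10, 10, 20, 80]   # mostly mild, one catastrophic scenario
project_b = [5, 5, 5, 10, 10, 15, 30]   # moderate losses throughout
for name, losses in (("A", project_a), ("B", project_b)):
    print(name, conditional_expected_loss(losses, 10), partial_mean(losses, 10))
```

Both projects have similar average losses, but the tail measures separate them clearly: project A's rare catastrophic scenario dominates its conditional expected loss, which is the kind of distinction a plain expected value hides.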

  13. Role and Significance of Uncertainty in HV Measurement of Porcelain Insulators - a Case Study

    Science.gov (United States)

    Choudhary, Rahul Raj; Bhardwaj, Pooja; Dayama, Ravindra

    The improved safety margins in complex systems have attained prime importance in the modern scientific environment. The analysis and implementation of complex systems demand well-quantified accuracy and measurement capability. Careful measurement, with properly identified and quantified uncertainties, can lead to actual discovery, which in turn may contribute to social development. Unfortunately, most scientists and students are passively taught to ignore the possibility of definition problems in the field of measurement, which are often a source of great argument. Recognizing this issue, ISO has initiated the standardisation of methodologies, but its Guide to the Expression of Uncertainty in Measurement (GUM) has yet to be adopted seriously in tertiary education institutions for teaching the concept of uncertainty. This paper focuses on the concepts of measurement and uncertainty; further, a case study on the calculation and quantification of the uncertainty of measurement (UOM) for high-voltage electrical testing of ceramic insulators is presented.

  14. Utilization of Software Tools for Uncertainty Calculation in Measurement Science Education

    Science.gov (United States)

    Zangl, Hubert; Zine-Zine, Mariam; Hoermaier, Klaus

    2015-02-01

    Despite its importance, uncertainty is often neglected by practitioners in the design of systems, even in safety-critical applications. Thus, problems arising from uncertainty may only be identified late in the design process and lead to additional costs. Although numerous tools exist to support uncertainty calculation, reasons for their limited usage in early design phases may be low awareness of the tools' existence and insufficient training in their practical application. We present a teaching philosophy that addresses uncertainty from the very beginning of teaching measurement science, in particular with respect to the utilization of software tools. The developed teaching material is based on the GUM method and makes use of uncertainty toolboxes in the simulation environment. Based on examples in measurement science education, we discuss advantages and disadvantages of the proposed teaching philosophy and include feedback from students.

  15. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    Energy Technology Data Exchange (ETDEWEB)

    Bruschewski, Martin; Schiffer, Heinz-Peter [Technische Universitaet Darmstadt, Institute of Gas Turbines and Aerospace Propulsion, Darmstadt (Germany); Freudenhammer, Daniel [Technische Universitaet Darmstadt, Institute of Fluid Mechanics and Aerodynamics, Center of Smart Interfaces, Darmstadt (Germany); Buchenberg, Waltraud B. [University Medical Center Freiburg, Medical Physics, Department of Radiology, Freiburg (Germany); Grundmann, Sven [University of Rostock, Institute of Fluid Mechanics, Rostock (Germany)

    2016-05-15

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75% is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented. (orig.)

  16. Computer-assisted uncertainty assessment of k0-NAA measurement results

    Science.gov (United States)

    Bučar, T.; Smodiš, B.

    2008-10-01

    In quantifying measurement uncertainty of measurement results obtained by the k0-based neutron activation analysis ( k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates uncertainty of the final result—mass fraction of an element in the measured sample—taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable by having possibility of its incorporation into other applications (e.g., DLL and WWW server). Theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.
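The GUM-style propagation that such a program automates can be sketched generically: numerically estimate the sensitivity coefficients of the measurement model and combine the weighted input uncertainties in quadrature. The measurement model and numbers below are illustrative, not the k0-NAA equations:

```python
import math

def combined_uncertainty(f, values, uncerts, eps=1e-6):
    """GUM-style propagation: numerically estimate sensitivity coefficients
    c_i = df/dx_i by forward differences, then combine as
    u_c^2 = sum (c_i * u_i)^2, assuming uncorrelated inputs."""
    f0 = f(values)
    var = 0.0
    for i, (v, u) in enumerate(zip(values, uncerts)):
        step = eps * max(abs(v), 1.0)
        bumped = list(values)
        bumped[i] = v + step
        c = (f(bumped) - f0) / step   # sensitivity coefficient for input i
        var += (c * u) ** 2
    return f0, math.sqrt(var)

# Hypothetical measurement model: result = (A_sample / A_comparator) * k
model = lambda x: (x[0] / x[1]) * x[2]
value, u_c = combined_uncertainty(model, [1200.0, 950.0, 0.85], [15.0, 12.0, 0.02])
print(f"result = {value:.4f} +/- {u_c:.4f}")
```

Propagation factors of this kind, one per input parameter, are exactly what a tool like the one described computes from the full k0-NAA formulae.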

  17. Technique for direct measurement of thermal conductivity of elastomers and a detailed uncertainty analysis

    Science.gov (United States)

    Ralphs, Matthew I.; Smith, Barton L.; Roberts, Nicholas A.

    2016-11-01

    High thermal conductivity thermal interface materials (TIMs) are needed to extend the life and performance of electronic circuits. A stepped-bar apparatus has been shown to work well for thermal resistance measurements with rigid materials, but most TIMs are elastic. This work studies the uncertainty of using a stepped-bar apparatus to measure the thermal resistance, and a tensile/compression testing machine to estimate the compressed thickness, of polydimethylsiloxane for a measurement of the thermal conductivity k_eff. An a priori, zeroth-order analysis is used to estimate the random uncertainty from the instrumentation; a first-order analysis is used to estimate the statistical variation in samples; and an a posteriori, Nth-order analysis is used to provide an overall uncertainty on k_eff for this measurement method. Bias uncertainty in the thermocouples is found to be the largest single source of uncertainty. The a posteriori uncertainty of the proposed method is 6.5% relative uncertainty (68% confidence), but could be reduced through calibration and correlated biases in the temperature measurements.

  18. The MapCHECK Measurement Uncertainty function and its effect on planar dose pass rates.

    Science.gov (United States)

    Bailey, Daniel W; Spaans, Jason D; Kumaraswamy, Lalith K; Podgorsak, Matthew B

    2016-03-08

    Our study aimed to quantify the effect of the Measurement Uncertainty function on planar dosimetry pass rates, as measured and analyzed with the Sun Nuclear Corporation MapCHECK 2 array and its associated software. This optional function is toggled in the program preferences of the software (though turned on by default upon installation), and automatically increases the dose difference tolerance defined by the user for each planar dose comparison. Dose planes from 109 static-gantry IMRT fields and 40 VMAT arcs, of varying modulation complexity, were measured at 5 cm water-equivalent depth in the MapCHECK 2 diode array, and respective calculated dose planes were exported from a commercial treatment planning system. Planar dose comparison pass rates were calculated within the Sun Nuclear Corporation analytic software using a number of calculation parameters, including Measurement Uncertainty on and off. By varying the percent difference (%Diff) criterion for similar analyses performed with Measurement Uncertainty turned off, an effective %Diff criterion was defined for each field/arc corresponding to the pass rate achieved with Measurement Uncertainty turned on. On average, the Measurement Uncertainty function increases the user-defined %Diff criterion by 0.8%-1.1% for 3%/3 mm analysis, depending on plan type and calculation technique (corresponding to an average change in pass rate of 1.0%-3.5%, and a maximum change of 8.7%). At the 2%/2 mm level, the Measurement Uncertainty function increases the user-defined %Diff criterion by 0.7%-1.2% on average, again depending on plan type and calculation technique (corresponding to an average change in pass rate of 3.5%-8.1%, and a maximum change of 14.2%). The largest increases in pass rate due to the Measurement Uncertainty function are generally seen with poorly matched planar dose comparisons, while the function has a notably smaller effect as pass rates approach 100%. The Measurement Uncertainty function, then, may

  19. Empirical versus modelling approaches to the estimation of measurement uncertainty caused by primary sampling.

    Science.gov (United States)

    Lyn, Jennifer A; Ramsey, Michael H; Damant, Andrew P; Wood, Roger

    2007-12-01

    Measurement uncertainty is a vital issue within analytical science. There are strong arguments that primary sampling should be considered the first and perhaps the most influential step in the measurement process. Increasingly, analytical laboratories are required to report measurement results to clients together with estimates of the uncertainty. Furthermore, these estimates can be used when pursuing regulation enforcement to decide whether a measured analyte concentration is above a threshold value. With its recognised importance in analytical measurement, the question arises of 'what is the most appropriate method to estimate the measurement uncertainty?'. Two broad methods for uncertainty estimation are identified, the modelling method and the empirical method. In modelling, the estimation of uncertainty involves the identification, quantification and summation (as variances) of each potential source of uncertainty. This approach has been applied to purely analytical systems, but becomes increasingly problematic in identifying all such sources when it is applied to primary sampling. Applications of this methodology to sampling often utilise long-established theoretical models of sampling and adopt the assumption that a 'correct' sampling protocol will ensure a representative sample. The empirical approach to uncertainty estimation involves replicated measurements from either inter-organisational trials and/or internal method validation and quality control. A more simple method involves duplicating sampling and analysis, by one organisation, for a small proportion of the total number of samples. This has proven to be a suitable alternative to these often expensive and time-consuming trials, in routine surveillance and one-off surveys, especially where heterogeneity is the main source of uncertainty. A case study of aflatoxins in pistachio nuts is used to broadly demonstrate the strengths and weaknesses of the two methods of uncertainty estimation.
The estimate
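The duplicate method mentioned above is often evaluated with a balanced design: two samples per sampling target, each analyzed twice, so that the measurement variance can be split into sampling and analytical parts. A simplified classical-ANOVA sketch (not the robust ANOVA often preferred in practice), with invented concentration data:

```python
def duplicate_anova(data):
    """Estimate sampling and analytical variance from a balanced duplicate design.
    data[i] = ((a11, a12), (a21, a22)): two samples per target, two analyses each."""
    n = len(data)
    # Analytical variance from within-sample analytical duplicates:
    pairs = [p for target in data for p in target]
    s2_anal = sum((a1 - a2) ** 2 / 2.0 for a1, a2 in pairs) / len(pairs)
    # Sampling variance from the difference between the two sample means per target;
    # Var(difference of sample means) = 2*(s2_samp + s2_anal/2), hence the correction.
    d2 = [((s1[0] + s1[1]) / 2.0 - (s2[0] + s2[1]) / 2.0) ** 2 for s1, s2 in data]
    s2_samp = max(sum(d2) / (2.0 * n) - s2_anal / 2.0, 0.0)
    return s2_samp, s2_anal

# Hypothetical duplicate measurements on three sampling targets
data = [
    ((10.0, 10.2), (11.0, 10.8)),
    ((9.0, 9.4), (8.2, 8.0)),
    ((12.1, 11.9), (12.8, 13.0)),
]
s2_samp, s2_anal = duplicate_anova(data)
print(f"sampling variance = {s2_samp:.4f}, analytical variance = {s2_anal:.4f}")
```

In this invented example the sampling variance dominates, illustrating the case where heterogeneity, not the analytical method, drives the measurement uncertainty.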

  20. Qualitative attributes and measurement properties of physical activity questionnaires: a checklist.

    Science.gov (United States)

    Terwee, Caroline B; Mokkink, Lidwine B; van Poppel, Mireille N M; Chinapaw, Mai J M; van Mechelen, Willem; de Vet, Henrica C W

    2010-07-01

    The large number of available physical activity (PA) questionnaires makes it difficult to select the most appropriate questionnaire for a certain purpose. This choice is further hampered by incomplete reporting and unsatisfactory evaluation of the content and measurement properties of the questionnaires. We provide a checklist for appraising the qualitative attributes and measurement properties of PA questionnaires, as a tool for selecting the most appropriate PA questionnaire for a certain target population and purpose. The checklist is called the Quality Assessment of Physical Activity Questionnaire (QAPAQ). This review is one of a group of four reviews in this issue of Sports Medicine on the content and measurement properties of physical activity questionnaires. Part 1 of the checklist can be used to appraise the qualitative attributes of PA questionnaires, i.e. the construct to be measured by the questionnaire, the purpose and target population for which it was developed, the format, interpretability and ease of use. Part 2 of the checklist can be used to appraise the measurement properties of a PA questionnaire, i.e. reliability (parameters of measurement error and reliability coefficients), validity (face and content validity, criterion validity and construct validity) and responsiveness. The QAPAQ can be used to select the most appropriate PA questionnaire for a certain purpose, but it can also be used to design or report a study on measurement properties of PA questionnaires. Using such a checklist will contribute to improving the assessment, reporting and appraisal of the content and measurement properties of PA questionnaires.

  1. Calculation and verification of blood ethanol measurement uncertainty for headspace gas chromatography.

    Science.gov (United States)

    Sklerov, Jason H; Couper, Fiona J

    2011-09-01

    An estimate was made of the measurement uncertainty for blood ethanol testing by headspace gas chromatography. While uncertainty often focuses on compliance to a single threshold level (0.08 g/100 mL), the existence of multiple thresholds, related to enhanced sentencing, subject age, or commercial vehicle licensure, necessitate the use of an estimate with validity across multiple specification levels. The uncertainty sources, in order of decreasing magnitude, were method reproducibility, linear calibration, recovery, calibrator preparation, reference material, and sample preparation. A large set of reproducibility data was evaluated (n = 15,433) in order to encompass measurement variability across multiple conditions, operators, instruments, concentrations and timeframes. The relative, combined standard uncertainty was calculated as ±2.7%, with an expanded uncertainty of ±8.2% (99.7% level of confidence, k = 3). Bias was separately evaluated through a recovery study using standard reference material from a national metrology institute. The uncertainty estimate was verified through the use of proficiency test (PT) results. Assigned values for PT results and their associated uncertainties were calculated as robust means (x*) and standard deviations (s*) of participant values. Performance scores demonstrated that the uncertainty estimate was appropriate across the full range of PT concentrations (0.010-0.370 g/100 mL). The use of PT data as an empirical estimate of uncertainty was not examined. Until providers of blood ethanol PT samples include details on how an assigned value is obtained along with its uncertainty and traceability, the use of PT data should be restricted to the role of verification of uncertainty estimates.
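The quadrature combination and k = 3 expansion described in the abstract can be sketched as follows. The component values are illustrative stand-ins for the named sources, not the study's actual budget.

```python
import math

# Illustrative relative standard uncertainties (as fractions) for the
# sources named in the abstract; the numbers are hypothetical.
components = {
    "method_reproducibility": 0.023,
    "linear_calibration": 0.010,
    "recovery": 0.007,
    "calibrator_preparation": 0.004,
    "reference_material": 0.003,
    "sample_preparation": 0.002,
}

# Combined standard uncertainty: root sum of squares of the components.
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty at ~99.7 % confidence uses coverage factor k = 3.
k = 3
U = k * u_c
print(f"combined: +/-{u_c:.2%}, expanded (k={k}): +/-{U:.2%}")
```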

  2. Measurement uncertainty analysis of low-dose-rate prostate seed brachytherapy: post-implant dosimetry.

    Science.gov (United States)

    Gregory, Kent J; Pattison, John E; Bibbo, Giovanni

    2015-03-01

    The minimal dose covering 90 % of the prostate volume--D 90--is arguably the most important dosimetric parameter in low-dose-rate prostate seed brachytherapy. In this study an analysis of the measurement uncertainties in D 90 from low-dose-rate prostate seed brachytherapy was conducted for two common treatment procedures with two different post-implant dosimetry methods. The analysis was undertaken in order to determine the magnitude of D 90 uncertainty, how the magnitude of the uncertainty varied when D 90 was calculated using different dosimetry methods, and which factors were the major contributors to the uncertainty. The analysis considered the prostate as being homogeneous and tissue equivalent and made use of published data, as well as original data collected specifically for this analysis, and was performed according to the Guide to the expression of uncertainty in measurement (GUM). It was found that when prostate imaging and seed implantation were conducted in two separate sessions using only CT images for post-implant analysis, the expanded uncertainty in D 90 values were about 25 % at the 95 % confidence interval. When prostate imaging and seed implantation were conducted during a single session using CT and ultrasound images for post-implant analysis, the expanded uncertainty in D 90 values were about 33 %. Methods for reducing these uncertainty levels are discussed. It was found that variations in contouring the target tissue made the largest contribution to D 90 uncertainty, while the uncertainty in seed source strength made only a small contribution. It is important that clinicians appreciate the overall magnitude of D 90 uncertainty and understand the factors that affect it so that clinical decisions are soundly based, and resources are appropriately allocated.

  3. Truth Control of Duplicate Measurements under Uncertainty Conditions

    Directory of Open Access Journals (Sweden)

    V. A. Anischenko

    2010-01-01

    Full Text Available The paper considers a problem pertaining to the truth control of duplicate measurements of technological variables under conditions of a deficit of data on the characteristics of the measuring facilities and controlled variables. The proposed control method improves the probability of detecting and identifying untrue duplicate measurements.

  4. Working with Error and Uncertainty to Increase Measurement Validity

    Science.gov (United States)

    Amrein-Beardsley, Audrey; Barnett, Joshua H.

    2012-01-01

    Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…

  5. Uncertainty measurement in the homogenization and sample reduction in the physical classification of rice and beans

    Directory of Open Access Journals (Sweden)

    Dieisson Pivoto

    2016-04-01

    Full Text Available ABSTRACT: The study aimed to (i) quantify the measurement uncertainty in the physical tests of rice and beans for a hypothetical defect, (ii) verify whether homogenization and sample reduction in the physical classification tests of rice and beans are effective in reducing the measurement uncertainty of the process, and (iii) determine whether increasing the size of the bean sample significantly increases accuracy and reduces measurement uncertainty. Hypothetical defects in rice and beans with different damage levels were simulated according to the testing methodology determined by the Normative Ruling for each product. The homogenization and sample reduction in the physical classification of rice and beans are not effective, transferring a high measurement uncertainty to the final test result. The sample size indicated by the Normative Ruling did not allow appropriate homogenization and should be increased.

  6. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification

    CERN Document Server

    Xue, Zhenyu; Vlachos, Pavlos P

    2014-01-01

    In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise-ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations. In addition, the notion of a valid measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct ...
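One widely used correlation-plane SNR metric of the kind the abstract discusses is the peak-to-peak ratio (PPR): the height of the primary correlation peak divided by the tallest remaining peak. A minimal sketch (the 3x3 masking and the synthetic plane are simplifying assumptions, not the paper's implementation):

```python
import numpy as np

def peak_to_peak_ratio(corr):
    """PPR: primary correlation peak height over the tallest remaining peak
    after masking the primary peak's immediate neighbourhood."""
    c = corr.astype(float).copy()
    p1 = c.max()
    i, j = np.unravel_index(np.argmax(c), c.shape)
    c[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] = -np.inf  # mask primary peak
    p2 = c.max()
    return float(p1 / p2)

# Synthetic correlation plane: strong displacement peak plus a noise peak.
plane = np.zeros((16, 16))
plane[8, 8] = 1.0     # primary (true displacement) peak
plane[2, 12] = 0.25   # spurious noise peak
print(peak_to_peak_ratio(plane))  # 4.0 for this synthetic plane
```

Higher PPR indicates a cleaner correlation and, in the framework above, a smaller estimated measurement uncertainty.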

  7. Uncertainty analysis guide

    Energy Technology Data Exchange (ETDEWEB)

    Andres, T.H

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
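The Monte Carlo propagation mentioned in the abstract can be sketched in a few lines: sample the uncertain inputs from their assumed distributions, push each sample through the model, and read the combined standard uncertainty off the output spread. The model and input values below are illustrative, not taken from the guide.

```python
import random
import statistics

random.seed(42)

# Toy nonlinear model of two uncertain inputs (illustrative only).
def model(x, y):
    return x * y + 0.5 * x ** 2

# Assumed input distributions: normal, with stated standard uncertainties.
N = 50_000
outputs = [model(random.gauss(2.0, 0.1), random.gauss(3.0, 0.2))
           for _ in range(N)]

mean = statistics.fmean(outputs)
u = statistics.stdev(outputs)  # Monte Carlo estimate of combined standard uncertainty
print(f"output: {mean:.3f} +/- {u:.3f} (k = 1)")
```

Unlike series approximations, this approach needs no derivatives and handles nonlinear models directly, at the cost of sampling effort.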

  8. On the Uncertainties of Stellar Mass Estimates via Colour Measurements

    CERN Document Server

    Roediger, Joel C

    2015-01-01

    Mass-to-light versus colour relations (MLCRs), derived from stellar population synthesis models, are widely used to estimate galaxy stellar masses (M$_*$) yet a detailed investigation of their inherent biases and limitations is still lacking. We quantify several potential sources of uncertainty, using optical and near-infrared (NIR) photometry for a representative sample of nearby galaxies from the Virgo cluster. Our method for combining multi-band photometry with MLCRs yields robust stellar masses, while errors in M$_*$ decrease as more bands are simultaneously considered. The prior assumptions in one's stellar population modelling dominate the error budget, creating a colour-dependent bias of up to 0.6 dex if NIR fluxes are used (0.3 dex otherwise). This matches the systematic errors associated with the method of spectral energy distribution (SED) fitting, indicating that MLCRs do not suffer from much additional bias. Moreover, MLCRs and SED fitting yield similar degrees of random error ($\sim$0.1-0.14 dex)...

  9. Orientation Uncertainty of Structures Measured in Cored Boreholes: Methodology and Case Study of Swedish Crystalline Rock

    Science.gov (United States)

    Stigsson, Martin

    2016-11-01

    Many engineering applications in fractured crystalline rocks use measured orientations of structures, such as rock contacts and fractures, and of lineated objects, such as foliation and rock stress, mapped in boreholes as their foundation. Although these measurements are afflicted with uncertainties, very few attempts to quantify their magnitudes and their effects on the inferred orientations have been reported. Relying only on the specification of tool imprecision may considerably underestimate the actual uncertainty space. The present work identifies nine sources of uncertainty, develops models for inferring their magnitudes, and points out possible implications for the inference of orientation models and thereby effects on downstream models. The uncertainty analysis in this work builds on a unique data set from site investigations performed by the Swedish Nuclear Fuel and Waste Management Co. (SKB). During these investigations, more than 70 boreholes with a maximum depth of 1 km were drilled in crystalline rock, with a cumulative length of more than 34 km including almost 200,000 single fracture intercepts. The work presented hence relies on the orientation of fractures. However, the techniques to infer the magnitude of orientation uncertainty may be applied to all types of structures and lineated objects in boreholes. The uncertainties are not solely detrimental, but can be valuable, provided that the reason for their presence is properly understood and the magnitudes correctly inferred. 
The main findings of this work are as follows: (1) knowledge of the orientation uncertainty is crucial in order to be able to infer correct orientation model and parameters coupled to the fracture sets; (2) it is important to perform multiple measurements to be able to infer the actual uncertainty instead of relying on the theoretical uncertainty provided by the manufacturers; (3) it is important to use the most appropriate tool for the prevailing circumstances; and (4) the single most

  10. Uncertainty of nitrate and sulphate measured by ion chromatography in wastewater samples

    OpenAIRE

    2012-01-01

    This paper presents an evaluation of measurement uncertainty regarding the results of anion (nitrate and sulphate) concentrations in wastewater. Anions were determined by ion chromatography (EN ISO 10304-2, 1996). The major sources of uncertainty regarding the measurement results were identified as contributions to linear least-square or weighted regression lines, precision, trueness, storage conditions, and sampling. Determination of anions in wastewater is very important for the purificatio...

  11. An uncertainty relation in terms of generalized metric adjusted skew information and correlation measure

    Science.gov (United States)

    Fan, Ya-Jing; Cao, Huai-Xin; Meng, Hui-Xian; Chen, Liang

    2016-09-01

    The uncertainty principle in quantum mechanics is a fundamental relation with different forms, including Heisenberg's uncertainty relation and Schrödinger's uncertainty relation. In this paper, we prove a Schrödinger-type uncertainty relation in terms of generalized metric adjusted skew information and correlation measure by using operator monotone functions, which reads U_ρ^{(g,f)}(A) U_ρ^{(g,f)}(B) ≥ (f(0)² l/k) |Corr_ρ^{s(g,f)}(A,B)|² for some operator monotone functions f and g, all n-dimensional observables A, B and a non-singular density matrix ρ. As applications, we derive some new uncertainty relations for Wigner-Yanase skew information and Wigner-Yanase-Dyson skew information.
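As a loosely related numerical illustration (not the paper's metric-adjusted inequality), the ordinary Schrödinger uncertainty relation Var(A) Var(B) ≥ |½⟨{A,B}⟩ - ⟨A⟩⟨B⟩|² + |⟨[A,B]⟩/2i|² can be checked for Pauli observables and a mixed, non-singular qubit state:

```python
import numpy as np

# Pauli observables and a non-singular (mixed) density matrix.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
rho = np.diag([0.7, 0.3]).astype(complex)

def ev(op):
    """Expectation value Tr(rho @ op); real for Hermitian op."""
    return np.trace(rho @ op).real

var_a = ev(sx @ sx) - ev(sx) ** 2
var_b = ev(sy @ sy) - ev(sy) ** 2

comm = sx @ sy - sy @ sx   # commutator [A, B]
anti = sx @ sy + sy @ sx   # anticommutator {A, B}
rhs = (0.5 * ev(anti) - ev(sx) * ev(sy)) ** 2 \
      + abs(np.trace(rho @ comm) / 2j) ** 2

print(var_a * var_b >= rhs)  # True: 1.0 >= 0.16 for this state
```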

  12. An uncertainty relation in terms of generalized metric adjusted skew information and correlation measure

    Science.gov (United States)

    Fan, Ya-Jing; Cao, Huai-Xin; Meng, Hui-Xian; Chen, Liang

    2016-12-01

    The uncertainty principle in quantum mechanics is a fundamental relation with different forms, including Heisenberg's uncertainty relation and Schrödinger's uncertainty relation. In this paper, we prove a Schrödinger-type uncertainty relation in terms of generalized metric adjusted skew information and correlation measure by using operator monotone functions, which reads U_ρ^{(g,f)}(A) U_ρ^{(g,f)}(B) ≥ (f(0)² l/k) |Corr_ρ^{s(g,f)}(A,B)|² for some operator monotone functions f and g, all n-dimensional observables A, B and a non-singular density matrix ρ. As applications, we derive some new uncertainty relations for Wigner-Yanase skew information and Wigner-Yanase-Dyson skew information.

  13. Uncertainties Associated with Flux Measurements Due to Heterogeneous Contaminant Distributions

    Science.gov (United States)

    Mass flux and mass discharge measurements at contaminated sites have been applied to assist with remedial management, and can be divided into two broad categories: point-scale measurement techniques and pumping methods. Extrapolation across un-sampled space is necessary when usi...

  14. Nacelle power curve measurement with spinner anemometer and uncertainty evaluation

    DEFF Research Database (Denmark)

    Demurtas, Giorgio; Friis Pedersen, Troels; Wagner, Rozenn

    2016-01-01

    The objective of this investigation was to verify the feasibility of using the spinner anemometer calibration and nacelle transfer function determined on one reference turbine, to assess the power performance of a second identical turbine. An experiment was set up with a met-mast in a position...... suitable to measure the power curve of the two wind turbines, both equipped with a spinner anemometer. An IEC 61400-12-1 compliant power curve was then measured for both turbines using the met-mast. The NTF (Nacelle Transfer Function) was measured on the reference turbine and then applied to both turbines...... to calculate the free wind speed. For each of the two wind turbines, the power curve (PC) was measured with the met-mast and the nacelle power curve (NPC) with the spinner anemometer. Four power curves (two PC and two NPC) were compared in terms of AEP (Annual Energy Production) for a Rayleigh wind speed...

  15. High speed railway environment safety evaluation based on measurement attribute recognition model.

    Science.gov (United States)

    Hu, Qizhou; Gao, Ningbo; Zhang, Bing

    2014-01-01

    In order to rationally evaluate the operational safety level of high speed railways, an environmental safety evaluation index system for high speed railways should be established by analyzing the impact mechanisms of severe weather such as rain, thunder, lightning, earthquakes, wind, and snow. In addition, attribute recognition is used to determine the similarity between samples and their corresponding attribute classes in the multidimensional space, based on a Mahalanobis distance measurement function, which has the advantage of being insensitive to correlations among indices and to their dimensions. On this basis, the environmental safety situation of China's high speed railways is elaborated with the suggested methods. The results of the detailed analysis show that the evaluation basically matches the actual situation and could lay a scientific foundation for high speed railway operation safety.

  16. High Speed Railway Environment Safety Evaluation Based on Measurement Attribute Recognition Model

    Directory of Open Access Journals (Sweden)

    Qizhou Hu

    2014-01-01

    Full Text Available In order to rationally evaluate the operational safety level of high speed railways, an environmental safety evaluation index system for high speed railways should be established by analyzing the impact mechanisms of severe weather such as rain, thunder, lightning, earthquakes, wind, and snow. In addition, attribute recognition is used to determine the similarity between samples and their corresponding attribute classes in the multidimensional space, based on a Mahalanobis distance measurement function, which has the advantage of being insensitive to correlations among indices and to their dimensions. On this basis, the environmental safety situation of China's high speed railways is elaborated with the suggested methods. The results of the detailed analysis show that the evaluation basically matches the actual situation and could lay a scientific foundation for high speed railway operation safety.
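The Mahalanobis-distance classification step in the two records above can be sketched as follows. The class prototypes, covariance, and sample values are hypothetical placeholders, not data from the paper.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance: scale-free and correlation-aware, unlike the
    ordinary Euclidean distance."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Hypothetical safety-class prototypes in a 2-D indicator space
# (e.g. a wind index and a rainfall index); numbers are illustrative.
classes = {
    "safe":   (np.array([1.0, 1.0]), np.array([[1.0, 0.2], [0.2, 1.0]])),
    "unsafe": (np.array([4.0, 4.0]), np.array([[1.0, 0.2], [0.2, 1.0]])),
}

sample = np.array([1.5, 1.2])
# Assign the sample to the attribute class with the smallest distance.
label = min(classes, key=lambda c: mahalanobis(sample, *classes[c]))
print(label)  # -> safe
```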

  17. Physicians' reactions to uncertainty in patient care. A new measure and new insights.

    Science.gov (United States)

    Gerrity, M S; DeVellis, R F; Earp, J A

    1990-08-01

    Although variations in physicians' practice patterns and use of resources are well documented, the reasons for these variations are less well understood. The uncertainty inherent in patient care may be one explanation. Existing measures of intolerance to uncertainty, developed in contexts outside of patient care, fail to explain these variations. To address this limitation, the Physicians' Reactions to Uncertainty scale was developed. A questionnaire containing an initial pool of 61 items was mailed to a random sample of 700 physicians in North Carolina and Oregon, stratified by specialty. The items covered nine areas of physicians' reactions to uncertainty derived from interviews with physicians and a definition of the concept of affective reactions to uncertainty in patient care. Factor analysis of the 428 responses received yielded two primary factors that accounted for 58% of the common variance among the 61 items. Items with unambiguous loadings on these factors defined two reliable and readily interpretable subscales: Stress from Uncertainty (Cronbach's alpha = 0.90, 13 items) and Reluctance to Disclose Uncertainty to Others (alpha = 0.75, 9 items). By virtue of its clarity and good psychometric properties, this new measure promises insights into the role that uncertainty plays in physicians' resource utilization and practice patterns.
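The Cronbach's alpha reliability coefficients reported for the two subscales are computed from item scores as alpha = k/(k-1) * (1 - Σ item variances / variance of totals). A minimal sketch with hypothetical Likert responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 4 respondents, 3 items.
responses = [[4, 5, 4], [2, 3, 2], [3, 3, 4], [5, 4, 5]]
print(f"alpha = {cronbach_alpha(responses):.3f}")
```

Perfectly correlated items give alpha = 1; values near 0.90, as for the Stress from Uncertainty subscale, indicate high internal consistency.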

  18. Experimental test of error-disturbance uncertainty relations by weak measurement.

    Science.gov (United States)

    Kaneda, Fumihiro; Baek, So-Young; Ozawa, Masanao; Edamatsu, Keiichi

    2014-01-17

    We experimentally test the error-disturbance uncertainty relation (EDR) in generalized, strength-variable measurement of a single photon polarization qubit, making use of weak measurement that keeps the initial signal state practically unchanged. We demonstrate that the Heisenberg EDR is violated, yet the Ozawa and Branciard EDRs are valid throughout the range of our measurement strength.

  19. Evaluating the Sources of Uncertainties in the Measurements from Multiple Pyranometers and Pyrheliometers

    Energy Technology Data Exchange (ETDEWEB)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Dooraghi, Mike; Reda, Ibrahim; Kutchenreiter, Mark

    2017-03-13

    Traceable radiometric data sets are essential for validating climate models, validating satellite-based models for estimating solar resources, and validating solar radiation forecasts. The current state-of-the-art radiometers have uncertainties in the range from 2% to 5%, and sometimes more [1]. The National Renewable Energy Laboratory (NREL) and other organizations are identifying uncertainties, improving radiometric measurement performance, and developing a consensus methodology for acquiring radiometric data. This study analyzes the impact of differing specifications -- such as cosine response, thermal offset, spectral response, and others -- on the accuracy of radiometric data for various radiometers. The study will also provide insight into how to perform a measurement uncertainty analysis and how to reduce the impact of some of the sources of uncertainty.

  20. Application of the Monte Carlo Method for the Estimation of Uncertainty in Radiofrequency Field Spot Measurements

    Science.gov (United States)

    Iakovidis, S.; Apostolidis, C.; Samaras, T.

    2015-04-01

    The objective of the present work is the application of the Monte Carlo method of GUM Supplement 1 (GUM S1) for evaluating uncertainty in electromagnetic field measurements and the comparison of the results with the ones obtained using the 'standard' method (GUM). In particular, the two methods are applied in order to evaluate the field measurement uncertainty using a frequency-selective radiation meter and the Total Exposure Quotient (TEQ) uncertainty. Comparative results are presented in order to highlight cases where the GUM S1 results deviate significantly from the ones obtained using the GUM, such as the presence of a non-linear mathematical model connecting the inputs with the output quantity (the case of the TEQ model) or the presence of a dominant non-normal distribution of an input quantity (the case of the U-shaped mismatch uncertainty). The deviation of the results obtained from the two methods can even lead to different decisions regarding the conformance with the exposure reference levels.
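The kind of deviation the abstract describes can be reproduced with a deliberately nonlinear toy model (not the TEQ model itself): first-order GUM propagation uses only the sensitivity coefficients at the input means, while the GUM S1 Monte Carlo method propagates the full distributions.

```python
import math
import random

random.seed(1)

# Illustrative nonlinear model y = x1^2 + x2, chosen so that first-order
# linearization misstates the output uncertainty.
def f(x1, x2):
    return x1 ** 2 + x2

x1, u1 = 0.0, 0.5    # mean comparable to its uncertainty: nonlinearity matters
x2, u2 = 1.0, 0.1

# 'Standard' GUM: first-order propagation via sensitivity coefficients df/dx.
c1, c2 = 2 * x1, 1.0
u_gum = math.sqrt((c1 * u1) ** 2 + (c2 * u2) ** 2)

# GUM S1: Monte Carlo propagation of the full input distributions.
N = 100_000
ys = [f(random.gauss(x1, u1), random.gauss(x2, u2)) for _ in range(N)]
mean = sum(ys) / N
u_mc = math.sqrt(sum((y - mean) ** 2 for y in ys) / (N - 1))

print(f"u (GUM, linear): {u_gum:.4f}")   # 0.1000: the x1^2 term drops out
print(f"u (Monte Carlo): {u_mc:.4f}")    # ~0.37: captures the nonlinearity
```

Here the linearized sensitivity to x1 vanishes at the mean, so the GUM result understates the uncertainty that the Monte Carlo method reveals.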

  1. Uncertainty Reduction Via Parameter Design of A Fast Digital Integrator for Magnetic Field Measurement

    CERN Document Server

    Arpaia, P; Lucariello, G; Spiezia, G

    2007-01-01

    At the European Centre for Nuclear Research (CERN), within the new Large Hadron Collider (LHC) project, measurements of magnetic flux with an uncertainty of 10 ppm at a few decades of Hz for several minutes are required. With this aim, a new Fast Digital Integrator (FDI) has been developed in cooperation with the University of Sannio, Italy [1]. This paper deals with the final design tuning for achieving the target uncertainty by means of experimental statistical parameter design.

  2. Evidential Reasoning-Based Approach for Multiple Attribute Decision Making Problems under Uncertainty

    Institute of Scientific and Technical Information of China (English)

    Guo Kaihong; Li Wenli

    2012-01-01

    Previous studies show that the evidential reasoning algorithm is an effective and rational method for solving MADM (Multiple Attribute Decision Making) problems under uncertainty. However, the method requires that attribute weights be deterministic and that the evaluation grades used to assess basic and general attributes be consistent. These constraints rarely hold in actual decision-making problems, especially for qualitative basic attributes. Existing subjective and objective methods for determining basic attribute weights have shortcomings, and most methods simply assume that the same set of evaluation grades applies to both basic and general attributes. These methods therefore offer limited support to the decision-making process. In view of these weaknesses, this study proposes an evidential-reasoning-based method for MADM under uncertainty, with the goal of extending the evidential reasoning algorithm to a more general decision environment. The first part determines the basic attribute weights. We briefly introduce the evidential reasoning algorithm and discuss two major issues related to its effective application to MADM under uncertainty: (1) how to determine the basic attribute weights, and (2) how to transform distributed assessments of basic attributes into distributed assessments of general attributes. We calculate the basic attribute weights from the information entropy of the decision matrix to solve the first problem. 
In the second part, we implement the equivalent transformation of distributed assessments from basic attributes into general attributes without assuming that the evaluation grades for basic and general attributes are the same. We first fuzzify the distributed assessments of the basic attributes according to the data types of the basic attribute values, and then, based on fuzzy transformation theory, implement the unified form of general distributed

  3. Disaggregating measurement uncertainty from population variability and Bayesian treatment of uncensored results.

    Science.gov (United States)

    Strom, Daniel J; Joyce, Kevin E; MacLellan, Jay A; Watson, David J; Lynch, Timothy P; Antonio, Cheryl L; Birchall, Alan; Anderson, Kevin K; Zharov, Peter A

    2012-04-01

    In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results is negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty, a likelihood PDF for each individual's measurand is produced. Then using the same assumptions and all the data from the population of individuals, a prior PDF of measurands for the population is produced. The prior PDF is non-negative, and the average is equal to the average of the measurement results for the population. Using Bayes's theorem, posterior PDFs of each individual measurand are calculated. The uncertainty in these Bayesian posterior PDFs appears to be all Berkson with no remaining classical component. The method is applied to baseline bioassay data from the Hanford site. The data include ⁹⁰Sr urinalysis measurements of 128 people, ¹³⁷Cs in vivo measurements of 5337 people and ²³⁹Pu urinalysis measurements of 3270 people. The method produces excellent results for the ⁹⁰Sr and ¹³⁷Cs measurements, since there are non-zero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the ²³⁹Pu measurements in non-occupationally exposed people because the population average is essentially zero relative to the sensitivity of the measurement technique. The method is shown to give results similar to
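The Bayes step described above can be sketched on a discretized grid: a non-negative population prior, a normal likelihood centred on the (possibly negative) net result, and a posterior proportional to their product. The prior shape and numbers below are illustrative assumptions, not the Hanford data.

```python
import numpy as np

# Grid of candidate true values; measurands are non-negative.
t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]

# Hypothetical population prior: exponential with the population mean
# (the method only requires non-negativity and a mean matching the data).
pop_mean = 2.0
prior = np.exp(-t / pop_mean)
prior /= prior.sum() * dt

# One individual's result: the net value may be negative even though the
# true value cannot be. x is the measurement, u its standard uncertainty.
x, u = -0.5, 1.0
likelihood = np.exp(-0.5 * ((x - t) / u) ** 2)

# Bayes: posterior proportional to prior * likelihood, renormalized.
posterior = prior * likelihood
posterior /= posterior.sum() * dt

post_mean = (t * posterior).sum() * dt
print(f"posterior mean of the measurand: {post_mean:.3f}")
```

Even though the net result is negative, the posterior is supported only on non-negative values, which is exactly the behaviour the method exploits.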

  4. Measuring the Flexural Strength of Ceramics at Elevated Temperatures – An Uncertainty Analysis

    Directory of Open Access Journals (Sweden)

    Štubňa I.

    2014-02-01

    Full Text Available The flexural mechanical strength was measured at room and elevated temperatures on green ceramic samples made from a quartz electroporcelain mixture. The apparatus exploited the three-point-bending mechanical arrangement and a magazine for 10 samples, which is favorable for measurements at temperatures from 20 °C to 1000 °C. A description of the apparatus from the point of view of possible sources of uncertainty is also given. The uncertainty analysis, taking into account the thermal expansion of the sample and of the span between the supports, is performed for 600 °C. Friction between the sample and the supports, as well as friction between mechanical parts of the apparatus, is also considered. The value of the mechanical strength at the temperature of 600 °C is 13.23 ± 0.50 MPa, where the second term is an expanded standard uncertainty. Such an uncertainty is mostly caused by inhomogeneities in the measured samples. The biggest part of the uncertainty arises from the repeatability of the loading force, which reflects the scatter of the sample properties. The influence of the temperature on the uncertainty value is very small.
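For the three-point-bending arrangement, the strength follows the standard formula sigma = 3FL/(2bh²), so its combined relative uncertainty adds the inputs' relative uncertainties in quadrature, with the exponent on each quantity as sensitivity factor. The values below are illustrative, not the paper's.

```python
import math

# Three-point bending: sigma = 3 F L / (2 b h^2).
# Illustrative mean values and standard uncertainties (hypothetical).
F, uF = 50.0, 1.5      # loading force, N
L, uL = 0.10, 0.0005   # span between supports, m
b, ub = 0.02, 0.0001   # sample width, m
h, uh = 0.01, 0.0001   # sample height, m

sigma = 3 * F * L / (2 * b * h ** 2)  # Pa

# For a pure product/power model, relative uncertainties combine in
# quadrature; h enters squared, hence the factor 2 on its term.
u_rel = math.sqrt((uF / F) ** 2 + (uL / L) ** 2
                  + (ub / b) ** 2 + (2 * uh / h) ** 2)
print(f"sigma = {sigma / 1e6:.2f} MPa +/- {u_rel * sigma / 1e6:.2f} MPa (k = 1)")
```

The dominant term here is the force repeatability, mirroring the paper's finding that scatter in the loading force drives the uncertainty.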

  5. A novel method for importance measure analysis in the presence of epistemic and aleatory uncertainties

    Institute of Scientific and Technical Information of China (English)

    Ren Bo; Lu Zhenzhou; Zhou Changcong

    2014-01-01

    For structural systems with both epistemic and aleatory uncertainties, research on quantifying the contribution of the epistemic and aleatory uncertainties to the failure probability of the systems is conducted. Based on the method of separating epistemic and aleatory uncertainties in a variable, the core idea is first to establish a novel deterministic transition model linking auxiliary variables, distribution parameters, random variables, and the failure probability, and then to propose an improved importance sampling (IS) method to solve the transition model. Furthermore, the distribution parameters and auxiliary variables are sampled simultaneously and independently; the inefficient sampling procedure of traditional methods, with an "inner loop" for epistemic uncertainty and an "outer loop" for aleatory uncertainty, is therefore avoided. Since the proposed method combines the fast convergence of the proper estimates with an efficient search for failure samples in the regions of interest, it is more efficient than traditional methods for the variance-based failure probability sensitivity measures in the presence of epistemic and aleatory uncertainties. Two numerical examples and one engineering example demonstrate the efficiency and precision of the proposed method for structural systems with both epistemic and aleatory uncertainties.
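Basic importance sampling for a small failure probability, the building block the abstract improves upon, can be sketched with a textbook limit state (not the paper's examples): shift the sampling density toward the failure region and correct each sample with the density ratio.

```python
import math
import random

random.seed(0)

# Failure probability P(g(X) < 0) with g(x) = beta - x and X ~ N(0, 1);
# the exact answer is Phi(-beta). Values here are illustrative.
beta = 3.0

def phi(x):  # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# Importance sampling: draw from N(beta, 1), centred on the failure
# region, and weight failing samples by phi(x) / phi(x - beta).
N = 100_000
acc = 0.0
for _ in range(N):
    x = random.gauss(beta, 1.0)
    if beta - x < 0:
        acc += phi(x) / phi(x - beta)
p_is = acc / N

p_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))
print(f"IS estimate: {p_is:.3e}, exact: {p_exact:.3e}")
```

Crude Monte Carlo would need millions of samples to resolve a probability of about 1.3e-3 to the same precision; the shifted density concentrates samples where failures occur.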

  6. Coherent Uncertainty Analysis of Aerosol Measurements from Multiple Satellite Sensors

    Science.gov (United States)

    Petrenko, M.; Ichoku, C.

    2013-01-01

    Aerosol retrievals from multiple spaceborne sensors, including MODIS (on Terra and Aqua), MISR, OMI, POLDER, CALIOP, and SeaWiFS (altogether, a total of 11 different aerosol products) were comparatively analyzed using data collocated with ground-based aerosol observations from the Aerosol Robotic Network (AERONET) stations within the Multi-sensor Aerosol Products Sampling System (MAPSS, http://giovanni.gsfc.nasa.gov/mapss/ and http://giovanni.gsfc.nasa.gov/aerostat/). The analysis was performed by comparing quality-screened satellite aerosol optical depth or thickness (AOD or AOT) retrievals during 2006-2010 to available collocated AERONET measurements globally, regionally, and seasonally, and deriving a number of statistical measures of accuracy. We used a robust statistical approach to detect and remove possible outliers in the collocated data that can bias the results of the analysis. Overall, the proportion of outliers in each of the quality-screened AOD products was within 12%. Squared correlation coefficient (R2) values of the satellite AOD retrievals relative to AERONET exceeded 0.6, with R2 for most of the products exceeding 0.7 over land and 0.8 over ocean. Root mean square error (RMSE) values for most of the AOD products were within 0.15 over land and 0.09 over ocean. We have been able to generate global maps showing regions where the different products present advantages over the others, as well as the relative performance of each product over different landcover types. It was observed that while MODIS, MISR, and SeaWiFS provide accurate retrievals over most of the landcover types, multi-angle capabilities make MISR the only sensor to retrieve reliable AOD over barren and snow/ice surfaces. Likewise, active sensing enables CALIOP to retrieve aerosol properties over bright-surface shrublands more accurately than the other sensors, while POLDER, which is the only one of the sensors capable of measuring polarized aerosols, outperforms other sensors in
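The two accuracy statistics reported above, the squared correlation coefficient (R2) and the root mean square error (RMSE), can be computed for collocated AOD pairs as follows. The values are hypothetical stand-ins, not MAPSS data.

```python
import math

# Hypothetical collocated AOD pairs (ground truth vs. satellite retrieval).
aeronet   = [0.10, 0.25, 0.40, 0.15, 0.30, 0.55]
satellite = [0.12, 0.22, 0.43, 0.18, 0.27, 0.50]

n = len(aeronet)
ma = sum(aeronet) / n
ms = sum(satellite) / n

cov   = sum((a - ma) * (s - ms) for a, s in zip(aeronet, satellite))
var_a = sum((a - ma) ** 2 for a in aeronet)
var_s = sum((s - ms) ** 2 for s in satellite)

r2 = cov ** 2 / (var_a * var_s)  # squared Pearson correlation coefficient
rmse = math.sqrt(sum((s - a) ** 2
                     for a, s in zip(aeronet, satellite)) / n)
print(f"R^2 = {r2:.3f}, RMSE = {rmse:.3f}")
```

R2 measures how well the retrieval tracks the ground truth's variation, while RMSE measures the typical absolute error; a product can score well on one and poorly on the other.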

  7. Coherent uncertainty analysis of aerosol measurements from multiple satellite sensors

    Directory of Open Access Journals (Sweden)

    M. Petrenko

    2013-02-01

    Full Text Available Aerosol retrievals from multiple spaceborne sensors, including MODIS (on Terra and Aqua), MISR, OMI, POLDER, CALIOP, and SeaWiFS – altogether, a total of 11 different aerosol products – were comparatively analyzed using data collocated with ground-based aerosol observations from the Aerosol Robotic Network (AERONET) stations within the Multi-sensor Aerosol Products Sampling System (MAPSS, http://giovanni.gsfc.nasa.gov/mapss/ and http://giovanni.gsfc.nasa.gov/aerostat/). The analysis was performed by comparing quality-screened satellite aerosol optical depth or thickness (AOD or AOT) retrievals during 2006–2010 to available collocated AERONET measurements globally, regionally, and seasonally, and deriving a number of statistical measures of accuracy. We used a robust statistical approach to detect and remove possible outliers in the collocated data that can bias the results of the analysis. Overall, the proportion of outliers in each of the quality-screened AOD products was within 12%. Squared correlation coefficient (R2) values of the satellite AOD retrievals relative to AERONET exceeded 0.6, with R2 for most of the products exceeding 0.7 over land and 0.8 over ocean. Root mean square error (RMSE) values for most of the AOD products were within 0.15 over land and 0.09 over ocean. We have been able to generate global maps showing regions where the different products present advantages over the others, as well as the relative performance of each product over different landcover types. It was observed that while MODIS, MISR, and SeaWiFS provide accurate retrievals over most of the landcover types, multi-angle capabilities make MISR the only sensor to retrieve reliable AOD over barren and snow/ice surfaces. Likewise, active sensing enables CALIOP to retrieve aerosol properties
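
    The accuracy statistics reported above (R2 and RMSE of satellite AOD against collocated AERONET values) can be sketched as follows; the function name and the sample values are illustrative, not taken from the MAPSS system:

```python
import math

def validation_stats(satellite_aod, aeronet_aod):
    """Squared correlation coefficient (R^2) and root mean square error
    (RMSE) of satellite AOD retrievals against collocated AERONET
    reference values. Inputs are paired lists of AOD values."""
    n = len(satellite_aod)
    mean_s = sum(satellite_aod) / n
    mean_a = sum(aeronet_aod) / n
    cov = sum((s - mean_s) * (a - mean_a)
              for s, a in zip(satellite_aod, aeronet_aod))
    var_s = sum((s - mean_s) ** 2 for s in satellite_aod)
    var_a = sum((a - mean_a) ** 2 for a in aeronet_aod)
    r2 = cov ** 2 / (var_s * var_a)
    rmse = math.sqrt(sum((s - a) ** 2
                         for s, a in zip(satellite_aod, aeronet_aod)) / n)
    return r2, rmse
```

    In the study itself, outliers are removed by a robust screening step before these statistics are computed.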

  8. UNCERTAINTY OF MEASUREMENT- AN IMPORTANT INSTRUMENT TO EVALUATE THE QUALITY OF RESULTS IN FORMALDEHYDE TESTS

    Directory of Open Access Journals (Sweden)

    Emanuela BELDEAN

    2013-09-01

    Full Text Available The measurement uncertainty is a quantitative indicator of the quality of results, meaning how well the result represents the value of the quantity being measured. It is a relatively new concept, and several guides and regulations were elaborated in order to help laboratories evaluate it. The uncertainty components are quantified based on data from repeated measurements, previous measurements, knowledge of the equipment and experience of the measurement. Uncertainty estimation involves a rigorous evaluation of possible sources of uncertainty and good knowledge of the measurement procedure. The case study presented in this paper revealed the basic steps in uncertainty calculation for formaldehyde emission from wood-based panels determined by the 1 m3 chamber method. Based on a very well defined Ishikawa diagram, an expanded uncertainty of 0.044 mg/m3 for k = 2, at 95% confidence level, was established.
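
    The expanded-uncertainty figure of this kind (k = 2, roughly 95% coverage) follows the usual GUM pattern: independent standard-uncertainty components are combined in quadrature and multiplied by a coverage factor. A minimal sketch, with purely hypothetical component values:

```python
import math

def expanded_uncertainty(components, k=2.0):
    """Combine independent standard-uncertainty components in quadrature
    (root sum of squares) and expand with coverage factor k, as in a
    GUM-style uncertainty budget."""
    u_c = math.sqrt(sum(u ** 2 for u in components))
    return k * u_c

# Hypothetical budget: two components of 0.003 and 0.004 (same unit).
U = expanded_uncertainty([0.003, 0.004])  # combined 0.005, expanded 0.010
```
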

  9. Invited Article: Concepts and tools for the evaluation of measurement uncertainty

    Science.gov (United States)

    Possolo, Antonio; Iyer, Hari K.

    2017-01-01

    Measurements involve comparisons of measured values with reference values traceable to measurement standards and are made to support decision-making. While the conventional definition of measurement focuses on quantitative properties (including ordinal properties), we adopt a broader view and entertain the possibility of regarding qualitative properties also as legitimate targets for measurement. A measurement result comprises the following: (i) a value that has been assigned to a property based on information derived from an experiment or computation, possibly also including information derived from other sources, and (ii) a characterization of the margin of doubt that remains about the true value of the property after taking that information into account. Measurement uncertainty is this margin of doubt, and it can be characterized by a probability distribution on the set of possible values of the property of interest. Mathematical or statistical models enable the quantification of measurement uncertainty and underlie the varied collection of methods available for uncertainty evaluation. Some of these methods have been in use for over a century (for example, as introduced by Gauss for the combination of mutually inconsistent observations or for the propagation of "errors"), while others are of fairly recent vintage (for example, Monte Carlo methods including those that involve Markov Chain Monte Carlo sampling). This contribution reviews the concepts, models, methods, and computations that are commonly used for the evaluation of measurement uncertainty, and illustrates their application in realistic examples drawn from multiple areas of science and technology, aiming to serve as a general, widely accessible reference.
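
    One family of methods the review covers, Monte Carlo propagation of input uncertainties through a measurement model, can be sketched as follows; the measurement model (a resistance V/I) and the input uncertainties are hypothetical, and inputs are assumed independent and Gaussian:

```python
import random
import statistics

def monte_carlo_uncertainty(measurand, input_dists, n_draws=50_000, seed=1):
    """Propagate input uncertainties through a measurement model by Monte
    Carlo sampling. `input_dists` maps argument names to
    (mean, standard uncertainty) pairs; returns the mean and standard
    deviation of the output distribution."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        sample = {name: rng.gauss(mu, u) for name, (mu, u) in input_dists.items()}
        draws.append(measurand(**sample))
    return statistics.mean(draws), statistics.stdev(draws)

# Example: R = V / I with hypothetical input uncertainties.
mean_R, u_R = monte_carlo_uncertainty(
    lambda V, I: V / I,
    {"V": (10.0, 0.05), "I": (2.0, 0.01)},
)
```

    For this example, first-order propagation predicts u_R of about 0.035 ohm, which the sampled standard deviation reproduces to within Monte Carlo noise.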

  10. Progress of the AVNG System - Attribute Verification System with Information Barriers for Mass Isotopics Measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Budnikov, D. (Dmitry); Bulatov, M. (Mikhail); Jarikhine, I. (Igor); Lebedev, B. (Boris); Livke, A. (Alexander); Modenov, A.; Morkin, A. (Anton); Razinkov, S. (Sergei); Tsaregorodtsev, D. (Dmitry); Vlokh, A. (Andrey); Yakovleva, S. (Svetlana); Elmont, T. H. (Timothy H.); Langner, D. C. (Diana C.); MacArthur, D. W. (Duncan W.); Mayo, D. R. (Douglas R.); Smith, M. K. (Morag K.); Luke, S. J. (S. John)

    2005-01-01

    An attribute verification system (AVNG) with information barriers for mass and isotopics measurements has been designed and its fabrication is nearly completed. The AVNG is being built by scientists at the Russian Federal Nuclear Center-VNIIEF, with support of Los Alamos National Laboratory (LANL) and Lawrence Livermore National Laboratory (LLNL). Such a system could be used to verify the presence of several unclassified attributes of classified material with no classified information release. The system is comprised of a neutron multiplicity counter and a gamma-spectrometry system based on a high-purity germanium gamma detector (nominal relative efficiency of 50% at 1332 keV) and the digital gamma-ray spectrometer DSPEC{sup PLUS}. The neutron multiplicity counter is a three-ring counter with 164 {sup 3}He tubes. The system was designed to measure prototype containers 491 mm in diameter and 503 mm high. This paper provides a brief history of the project and documents the progress of this effort with drawings and photographs.

  11. PROGRESS OF THE AVNG SYSTEM - ATTRIBUTE VERIFICATION SYSTEM WITH INFORMATION BARRIERS FOR MASS AND ISOTOPICS MEASUREMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Budnikov, D; Bulatov, M; Jarikhine, I; Lebedev, B; Livke, A; Modenov, A; Morkin, A; Razinkov, S; Safronov, S; Tsaregorodtsev, D; Vlokh, A; Yakovleva, S; Elmont, T; Langner, D; MacArthur, D; Mayo, D; Smith, M; Luke, S J

    2005-05-27

    An attribute verification system (AVNG) with information barriers for mass and isotopics measurements has been designed and its fabrication is nearly completed. The AVNG is being built by scientists at the Russian Federal Nuclear Center-VNIIEF, with support of Los Alamos National Laboratory (LANL) and Lawrence Livermore National Laboratory (LLNL). Such a system could be used to verify the presence of several unclassified attributes of classified material with no classified information release. The system is comprised of a neutron multiplicity counter and a gamma-spectrometry system based on a high-purity germanium gamma detector (nominal relative efficiency of 50% at 1332 keV) and the digital gamma-ray spectrometer DSPEC{sup PLUS}. The neutron multiplicity counter is a three-ring counter with 164 {sup 3}He tubes. The system was designed to measure prototype containers 491 mm in diameter and 503 mm high. This paper provides a brief history of the project and documents the progress of this effort with drawings and photographs.

  12. Screening-level estimates of mass discharge uncertainty from point measurement methods

    Science.gov (United States)

    The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain w...

  13. Experimental and Measurement Uncertainty Associated with Characterizing Slurry Mixing Performance of Pulsating Jets at Multiple Scales

    Energy Technology Data Exchange (ETDEWEB)

    Bamberger, Judith A.; Piepel, Gregory F.; Enderlin, Carl W.; Amidan, Brett G.; Heredia-Langner, Alejandro

    2015-09-10

    Understanding how uncertainty manifests itself in complex experiments is important for developing the testing protocol and interpreting the experimental results. This paper describes experimental and measurement uncertainties, and how they can depend on the order of performing experimental tests. Experiments with pulse-jet mixers in tanks at three scales were conducted to characterize the performance of transient-developing periodic flows in Newtonian slurries. Other test parameters included the simulant, solids concentration, and nozzle exit velocity. Critical suspension velocity and cloud height were the metrics used to characterize Newtonian slurry flow associated with mobilization and mixing. During testing, near-replicate and near-repeat tests were conducted. The experimental results were used to quantify the combined experimental and measurement uncertainties using standard deviations and percent relative standard deviations (%RSD). The uncertainties in critical suspension velocity and cloud height tend to increase with the values of these responses. Hence, the %RSD values are the more appropriate summary measure of near-replicate testing and measurement uncertainty.
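
    The %RSD summary measure used for the near-replicate tests is simply the sample standard deviation expressed as a percentage of the mean; a minimal sketch with illustrative values:

```python
import statistics

def percent_rsd(replicates):
    """Percent relative standard deviation (%RSD) of near-replicate test
    results: 100 * s / mean. Useful when uncertainty scales with the
    magnitude of the response, as for the critical suspension velocity
    and cloud height metrics described above."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)
```
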

  14. Estimating the Uncertainty of Tensile Strength Measurement for A Photocured Material Produced by Additive Manufacturing

    Directory of Open Access Journals (Sweden)

    Adamczak Stanisław

    2014-08-01

    Full Text Available The aim of this study was to estimate the measurement uncertainty for a material produced by additive manufacturing. The material investigated was FullCure 720 photocured resin, which was applied to fabricate tensile specimens with a Connex 350 3D printer based on PolyJet technology. The tensile strength of the specimens established through static tensile testing was used to determine the measurement uncertainty. There is a need for extensive research into the performance of model materials obtained via 3D printing as they have not been studied sufficiently like metal alloys or plastics, the most common structural materials. In this analysis, the measurement uncertainty was estimated using a larger number of samples than usual, i.e., thirty instead of typical ten. The results can be very useful to engineers who design models and finished products using this material. The investigations also show how wide the scatter of results is.
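
    With repeated tensile tests, the Type A standard uncertainty of the mean strength is the sample standard deviation divided by the square root of the number of tests, so thirty results reduce it by a factor of about 1.7 compared with the typical ten. A minimal sketch (the values in the test are illustrative, not FullCure 720 data):

```python
import math
import statistics

def type_a_uncertainty(results):
    """Type A standard uncertainty of the mean from repeated measurements:
    s / sqrt(n), where s is the sample standard deviation and n the
    number of repeated tests."""
    return statistics.stdev(results) / math.sqrt(len(results))
```
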

  15. Enhancing the ecological validity of the Beads Task as a behavioral measure of intolerance of uncertainty.

    Science.gov (United States)

    Jacoby, Ryan J; Abramowitz, Jonathan S; Reuman, Lillian; Blakey, Shannon M

    2016-06-01

    To broaden the measurement of intolerance of uncertainty (IU) beyond self-report methods, recent research has examined the Beads Task as a behavioral measure of IU. In the present study, we enhanced this task to increase its ecological validity by maximizing decisional uncertainty and the importance of a correct response. Undergraduate participants (n=102) completed the Beads Task with instructions that they would complete the Cold Pressor Task (CPT) if they answered incorrectly. As hypothesized, baseline CPT endurance time and self-reported pain level were weakly associated with later Beads Task distress during the decision-making process. Furthermore, in vivo Beads Task distress was associated with self-report inhibitory IU, which measures avoidance and paralysis in the face of uncertainty, but not with prospective IU, perfectionism, or general psychological distress after making statistical adjustments for multiple comparisons. Comparisons to previous work using the Beads Task, clinical implications, and avenues for future research are discussed.

  16. Multi-attribute integrated measurement of node importance in complex networks

    Science.gov (United States)

    Wang, Shibo; Zhao, Jinlou

    2015-11-01

    The measure of node importance in complex networks is very important to research on network stability and robustness; it can also help ensure the security of the whole network. Most researchers have used a single indicator to measure node importance, so the obtained measurement results reflect only certain aspects of the network, with a loss of information. Meanwhile, because of differences in network topology, node importance should be described in combination with the character of the network topology. Most of the existing evaluation algorithms cannot completely reflect the circumstances of complex networks, so this paper takes into account degree centrality, relative closeness centrality, clustering coefficient, and topology potential, and proposes an integrated method to measure node importance. This method can reflect nodes' internal and external attributes and eliminate the influence of network structure on node importance. Experiments on the karate network and the dolphin network show that the integrated measure based on network topology has a smaller range of metrical results than a single indicator and is more universal. Experiments also show that attacking the North American power grid and the Internet network using this method achieves a faster convergence speed than other methods.

  17. A generalized measurement model to quantify health: the multi-attribute preference response model.

    Directory of Open Access Journals (Sweden)

    Paul F M Krabbe

    Full Text Available After 40 years of deriving metric values for health status or health-related quality of life, the effective quantification of subjective health outcomes is still a challenge. Here, two of the best measurement tools, the discrete choice and the Rasch model, are combined to create a new model for deriving health values. First, existing techniques to value health states are briefly discussed, followed by a reflection on the recent revival of interest in patients' experience with regard to their possible role in health measurement. Subsequently, three basic principles for valid health measurement are reviewed, namely unidimensionality, interval level, and invariance. In the main section, the basic operation of measurement is then discussed in the framework of probabilistic discrete choice analysis (random utility model) and the psychometric Rasch model. It is then shown how combining the main features of these two models yields an integrated measurement model, called the multi-attribute preference response (MAPR) model, which is introduced here. This new model transforms subjective individual rank data into a metric scale using responses from patients who have experienced certain health states. Its measurement mechanism largely prevents biases such as adaptation and coping. Several extensions of the MAPR model are presented. The MAPR model can be applied to a wide range of research problems. If extended with the self-selection of relevant health domains for the individual patient, this model will be more valid than existing valuation techniques.

  18. Uncertainty of pin height measurement for the determination of wear in pin-on-plate test

    DEFF Research Database (Denmark)

    Drago, Nicola; De Chiffre, Leonardo; Poulios, Konstantinos

    2014-01-01

    The paper concerns measurement of pin height for the determination of wear in a pin-on-plate (POP) or pin-on-disc (POD) test, where a pin is mounted on a holder that can be fixed on the test rig and removed for measurements. The amount of wear is assessed as the difference of pin height before and after the test, using the distance between holder plane and pin friction plane as measurand. A series of measurements were performed in connection with POP testing of different friction material pins mounted on an aluminium holder. Pin height measurements were carried out on a coordinate measuring machine (CMM), achieving an expanded measurement uncertainty (k = 2) better than 1 mm. A simple dedicated fixture adaptable to workshop environment was developed and its metrological capability investigated, estimating an average uncertainty of measurement in the order of 5 mm (k = 2). Fixture…
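
    The measurand described above, wear as the difference of pin height before and after the test, combines two height measurements. A minimal sketch of the difference and its combined standard uncertainty, assuming the two measurements are independent; all numeric values in the test are illustrative:

```python
import math

def wear_with_uncertainty(h_before, u_before, h_after, u_after):
    """Wear as the difference of pin height before and after the test,
    with the combined standard uncertainty of the two independent height
    measurements added in quadrature."""
    wear = h_before - h_after
    u_wear = math.sqrt(u_before ** 2 + u_after ** 2)
    return wear, u_wear
```
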

  19. Uncertainties of size measurements in electron microscopy characterization of nanomaterials in foods

    DEFF Research Database (Denmark)

    Dudkiewicz, Agnieszka; Boxall, Alistair B. A.; Chaudhry, Qasim;

    2015-01-01

    Electron microscopy is a recognized standard tool for nanomaterial characterization, and recommended by the European Food Safety Authority for the size measurement of nanomaterials in food. Despite this, little data have been published assessing the reliability of the method, especially for size measurement of nanomaterials characterized by a broad size distribution and/or added to food matrices. This study is a thorough investigation of the measurement uncertainty when applying electron microscopy for size measurement of engineered nanomaterials in foods. Our results show that the number of measured particles was only a minor source of measurement uncertainty for nanomaterials in food, compared to the combined influence of sampling, sample preparation prior to imaging and the image analysis. The main conclusion is that to improve the measurement reliability, care should be taken to consider…

  20. Modelling and Measurement Uncertainty Estimation for Integrated AFM-CMM Instrument

    DEFF Research Database (Denmark)

    Hansen, Hans Nørgaard; Bariani, Paolo; De Chiffre, Leonardo

    2005-01-01

    This paper describes modelling of an integrated AFM-CMM instrument, its calibration, and estimation of measurement uncertainty. Positioning errors were seen to limit the instrument performance. Software for off-line stitching of single AFM scans was developed and verified, which allows compensation of such errors. A geometrical model of the instrument was produced, describing the interaction between AFM and CMM systematic errors. The model parameters were quantified through calibration, and the model used for establishing an optimised measurement procedure for surface mapping. A maximum uncertainty of 0.8% was achieved for the case of surface mapping of 1.2*1.2 mm2 consisting of 49 single AFM scanned areas.

  1. Uncertainty analysis of signal deconvolution using a measured instrument response function

    Science.gov (United States)

    Hartouni, E. P.; Beeman, B.; Caggiano, J. A.; Cerjan, C.; Eckart, M. J.; Grim, G. P.; Hatarik, R.; Moore, A. S.; Munro, D. H.; Phillips, T.; Sayre, D. B.

    2016-11-01

    A common analysis procedure minimizes the ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). In the case investigated here, the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers, the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to determine the uncertainty estimate of the physical model's parameters. We apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.
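
    The core of such a procedure, comparing data against a parameterized model convolved with the IRF through a ln-likelihood, can be sketched as follows. This is an illustrative simplification: it uses a plain Gaussian likelihood and omits the Bayesian treatment of IRF measurement uncertainty that the paper adds.

```python
import math

def convolve(signal, irf):
    """Discrete convolution of a model signal with a measured instrument
    response function (IRF)."""
    out = [0.0] * (len(signal) + len(irf) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(irf):
            out[i + j] += s * h
    return out

def neg_log_likelihood(data, model, sigma):
    """Gaussian negative ln-likelihood of the observed data given the
    IRF-convolved model prediction, with a common noise level sigma."""
    return sum(0.5 * ((d - m) / sigma) ** 2
               + math.log(sigma * math.sqrt(2 * math.pi))
               for d, m in zip(data, model))
```

    Minimizing `neg_log_likelihood(data, convolve(physics_model(params), irf), sigma)` over the physical parameters recovers the optimum; accounting for IRF uncertainty widens the resulting parameter uncertainties.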

  2. Accounting for sensor calibration, data validation, measurement and sampling uncertainties in monitoring urban drainage systems.

    Science.gov (United States)

    Bertrand-Krajewski, J L; Bardin, J P; Mourad, M; Béranger, Y

    2003-01-01

    Assessing the functioning and the performance of urban drainage systems on both rainfall-event and yearly time scales is usually based on online measurements of flow rates and on samples of influent/effluent for some rainfall events per year. In order to draw pertinent scientific and operational conclusions from the measurement results, it is absolutely necessary to use appropriate methods and techniques in order to i) calibrate sensors and analytical methods, ii) validate raw data, iii) evaluate measurement uncertainties, and iv) evaluate the number of rainfall events to sample per year in order to determine performance indicators with a given uncertainty. Based on previous work, the paper gives a synthetic review of the required methods and techniques, and illustrates their application to storage and settling tanks. Experiments show that, under controlled and careful experimental conditions, relative uncertainties are about 20% for flow rates in sewer pipes, 6-10% for volumes, 25-35% for TSS concentrations and loads, and 18-276% for TSS removal rates. In order to evaluate the annual pollutant interception efficiency of storage and settling tanks with a given uncertainty, efforts should first be devoted to decreasing the sampling uncertainty by increasing the number of sampled events.

  3. Absolute frequency measurement with uncertainty below $1\\times 10^{-15}$ using International Atomic Time

    CERN Document Server

    Hachisu, Hidekazu; Ido, Tetsuya

    2016-01-01

    The absolute frequency of the $^{87}{\rm Sr}$ clock transition measured in 2015 was reevaluated using an improved frequency link to the SI second. The scale interval of International Atomic Time (TAI) that we used as the reference was calibrated for an evaluation interval of five days instead of the conventional interval of one month which is regularly employed in Circular T. The calibration on a five-day basis removed the uncertainty in assimilating the TAI scale of the five-day mean to that of the one-month mean. The reevaluation resulted in a total uncertainty at the $10^{-16}$ level for the first time without local cesium fountains. Since there are presumably no correlations among the systematic shifts of cesium fountains worldwide, the measurement is not limited by the systematic uncertainty of a specific primary frequency standard.

  4. Absolute frequency measurement with uncertainty below 1× 10^{-15} using International Atomic Time

    Science.gov (United States)

    Hachisu, Hidekazu; Petit, Gérard; Ido, Tetsuya

    2017-01-01

    The absolute frequency of the ^{87}Sr clock transition measured in 2015 (Jpn J Appl Phys 54:112401, 2015) was reevaluated using an improved frequency link to the SI second. The scale interval of International Atomic Time (TAI) that we used as the reference was calibrated for an evaluation interval of 5 days instead of the conventional interval of 1 month which is regularly employed in Circular T. The calibration on a 5-day basis removed the uncertainty in assimilating the TAI scale of the 5-day mean to that of the 1-month mean. The reevaluation resulted in a total uncertainty at the 10^{-16} level for the first time without local cesium fountains. Since there are presumably no correlations among the systematic shifts of cesium fountains worldwide, the measurement is not limited by the systematic uncertainty of a specific primary frequency standard.

  5. Measuring the perceived uncertainty of scientific evidence and its relationship to engagement with science.

    Science.gov (United States)

    Retzbach, Joachim; Otto, Lukas; Maier, Michaela

    2016-08-01

    Many scholars have argued for the need to communicate openly not only scientific successes to the public but also limitations, such as the tentativeness of research findings, in order to enhance public trust and engagement. Yet, it has not been quantitatively assessed how the perception of scientific uncertainties relates to engagement with science on an individual level. In this article, we report the development and testing of a new questionnaire in English and German measuring the perceived uncertainty of scientific evidence. Results indicate that the scale is reliable and valid in both language versions and that its two subscales are differentially related to measures of engagement: Science-friendly attitudes were positively related only to 'subjectively' perceived uncertainty, whereas interest in science as well as behavioural engagement actions and intentions were largely uncorrelated. We conclude that perceiving scientific knowledge to be uncertain is only weakly, but positively related to engagement with science.

  6. A.c. Power Measurement Using Power Analyzer Associated with External Transducers. Accuracy and Uncertainty Evaluation

    Directory of Open Access Journals (Sweden)

    Marinel Popescu

    2014-09-01

    Full Text Available The development of digital signal processors and their implementation in measuring technique has led to the manufacturing of power analyzers used as multifunction meters in industry, automation, tests and laboratory activities, monitoring and control of processes, etc. The parameters of a three-phase system can be determined if the phase currents, the phase voltages and the phase differences between them are known. A power analyzer has six inputs for current and voltage measuring signals. The paper presents a method of determination of errors and uncertainties of electrical quantities measurement using a power analyzer associated with external transducers. The best estimate of the measured quantity and the uncertainty of measurement are used to report the result of the measurement process.
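
    For active power P = V * I * cos(phi) measured through external voltage and current transducers, first-order propagation of independent relative uncertainties gives the combined relative uncertainty as a root sum of squares. A minimal sketch; the transducer values used in the test are hypothetical:

```python
import math

def power_relative_uncertainty(u_rel_v, u_rel_i, u_rel_pf):
    """Relative combined standard uncertainty of active power
    P = V * I * cos(phi), propagated to first order from the relative
    uncertainties of the voltage channel, the current channel, and the
    power factor, assuming independent inputs."""
    return math.sqrt(u_rel_v ** 2 + u_rel_i ** 2 + u_rel_pf ** 2)
```
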

  7. Uncertainty evaluation in the measurement of power frequency electric and magnetic fields from AC overhead power lines.

    Science.gov (United States)

    Ztoupis, I N; Gonos, I F; Stathopulos, I A

    2013-11-01

    Measurements of power frequency electric and magnetic fields from alternating current power lines are carried out in order to evaluate the exposure levels of the general public. For any electromagnetic field measurement, it is necessary to identify the sources of measurement uncertainty and determine the total measurement uncertainty. This paper is concerned with the problems of measurement uncertainty estimation, as the measurement uncertainty budget calculation techniques recommended in standardising documents and research studies are barely described. In this work the total uncertainty of power frequency field measurements near power lines at various measurement sites is assessed by considering not only all available equipment data, but also contributions that depend on the measurement procedures, environmental conditions and characteristics of the field source, which are considered to increase the error of measurement. A detailed application example for power frequency field measurements by an accredited laboratory is presented here.

  8. Investment in flood protection measures under climate change uncertainty. An investment decision

    Energy Technology Data Exchange (ETDEWEB)

    Bruin, Karianne de

    2012-11-01

    Recent river flooding in Europe has triggered debates among scientists and policymakers on future projections of flood frequency and the need for adaptive investments, such as flood protection measures. Because there exists uncertainty about the impact of climate change on flood risk, such investments require a careful analysis of expected benefits and costs. The objective of this paper is to show how climate change uncertainty affects the decision to invest in flood protection measures. We develop a model that simulates optimal decision making in flood protection; it incorporates flexible timing of investment decisions and scientific uncertainty about the extent of climate change impacts. This model allows decision-makers to cope with the uncertain impacts of climate change on the frequency and damage of river flood events and minimises the risk of under- or over-investment. One of the innovative elements is that we explicitly distinguish between structural and non-structural flood protection measures. Our results show that the optimal investment decision today depends strongly on the cost structure of the adaptation measures and the discount rate, especially the ratio of fixed and weighted annual costs of the measures. A higher level of annual flood damage and later resolution of uncertainty in time increase the optimal investment. Furthermore, the optimal investment decision today is influenced by the possibility for the decision-maker to adjust his decision at a future moment in time.(auth)

  9. Regional inversion of CO2 ecosystem fluxes from atmospheric measurements. Reliability of the uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)

    2013-07-01

    The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5{sup 0} resolution are applied for the western European domain where {approx}50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38 %. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than
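
    The chi-square consistency check used above, comparing inversion-minus-measurement misfits against their theoretical uncertainties, reduces to the following; the values in the test are illustrative:

```python
def reduced_chi_square(misfits, sigmas):
    """Reduced chi-square of misfits against their theoretical standard
    uncertainties. Values near 1 indicate that the uncertainty estimates
    are statistically consistent with the observed misfits; values much
    above 1 indicate the uncertainties are underestimated."""
    return sum((m / s) ** 2 for m, s in zip(misfits, sigmas)) / len(misfits)
```
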

  10. A method to analyse measurement invariance under uncertainty in between-subjects design.

    Science.gov (United States)

    Martínez, José A; Ruiz Marin, Manuel; Vivo Molina, Maria del Carmen

    2012-11-01

    In this research we introduce a new test (the H-test) for analyzing scale invariance in between-group designs while accounting for uncertainty in individual responses, in order to study the adequacy of disparate rating and visual scales for measuring abstract concepts. The H-test is easy to compute and, as a nonparametric test, requires neither an a priori distribution of the data nor conditions on the variances of the distributions to be tested. We apply the test to measure the perceived service quality of consumers of a sports service. Results show that, without considering uncertainty, the 1-7 scale is invariant, in line with related work on this topic. However, both the 1-5 scale and the 1-7 scale are invariant when uncertainty is added to the analysis. Therefore, adding uncertainty substantially changes the conclusions of the invariance analysis. Neither type of visual scale is invariant in the uncertainty scenario. Implications for the use of rating scales are discussed.

  11. A study on evaluation strategies in dimensional X-ray computed tomography by estimation of measurement uncertainties

    DEFF Research Database (Denmark)

    Müller, Pavel; Hiller, Jochen; Cantatore, Angela

    2012-01-01

    measurement results using different measuring strategies applied in different inspection software packages for volume and surface data analysis. The strategy influence is determined by calculating the measurement uncertainty. This investigation includes measurements of two industrial items, an aluminium pipe...

  12. Evaluation of the measurement uncertainty in automated long-term sampling of PCDD/PCDFs.

    Science.gov (United States)

    Vicaretti, M; D'Emilia, G; Mosca, S; Guerriero, E; Rotatori, M

    2013-12-01

    Since the publication of the first version of European standard EN-1948 in 1996, long-term sampling equipment has been improved to a high standard for the sampling and analysis of polychlorodibenzo-p-dioxin (PCDD)/polychlorodibenzofuran (PCDF) emissions from industrial sources. Current automated PCDD/PCDF sampling systems make it possible to extend the sampling time from 6-8 h to 15-30 days, yielding values more representative of the plant's real long-term pollutant emissions. EN-1948:2006 is still the European technical reference standard for the determination of PCDD/PCDF from stationary source emissions. In this paper, a methodology to estimate the measurement uncertainty of long-term automated sampling is presented. The methodology has been tested on a set of high-concentration sampling data resulting from a specific experience; it is proposed with the intent that it be applied to further similar studies and generalized. A comparison between short-term data from parallel manual and automated measurements was also considered, in order to verify the feasibility and usefulness of automated systems and to establish correlations between the results of the two methods, so that the manual method can be used to calibrate the automated long-term one. The uncertainty components of the manual method are analyzed, following the requirements of EN-1948-3:2006, allowing a preliminary evaluation of the corresponding uncertainty components of the automated system. Then, a comparison between experimental data from parallel sampling campaigns carried out over short- and long-term sampling periods is presented. Long-term sampling is more reliable for monitoring PCDD/PCDF emissions than occasional short-term sampling. Automated sampling systems can provide very useful emission data over both short and long sampling periods. Despite this, due to the different application of the long-term sampling systems, the automated results could not be

  13. Estimation of pressure-particle velocity impedance measurement uncertainty using the Monte Carlo method.

    Science.gov (United States)

    Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A

    2011-07-01

    The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the absorption coefficient determined using the PU technique is explored by applying the Monte Carlo method. It is shown that, because of the uncertainty, it is particularly difficult to measure samples with low absorption, and that difficulties associated with the localization of the acoustic centers of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as errors in the transfer function between pressure and particle velocity do.

  14. Measuring Cross-Section and Estimating Uncertainties with the fissionTPC

    Energy Technology Data Exchange (ETDEWEB)

    Bowden, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manning, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sangiorgio, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seilhan, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-30

    The purpose of this document is to outline the prescription for measuring fission cross-sections with the NIFFTE fissionTPC and estimating the associated uncertainties. As such it will serve as a work planning guide for NIFFTE collaboration members and facilitate clear communication of the procedures used to the broader community.

  15. Measurement Uncertainty in Racial and Ethnic Identification among Adolescents of Mixed Ancestry: A Latent Variable Approach

    Science.gov (United States)

    Tracy, Allison J.; Erkut, Sumru; Porche, Michelle V.; Kim, Jo; Charmaraman, Linda; Grossman, Jennifer M.; Ceder, Ineke; Garcia, Heidie Vazquez

    2010-01-01

    In this article, we operationalize identification of mixed racial and ethnic ancestry among adolescents as a latent variable to (a) account for measurement uncertainty, and (b) compare alternative wording formats for racial and ethnic self-categorization in surveys. Two latent variable models were fit to multiple mixed-ancestry indicator data from…

  16. Error analysis and measurement uncertainty for a fiber grating strain-temperature sensor.

    Science.gov (United States)

    Tang, Jaw-Luen; Wang, Jian-Neng

    2010-01-01

    A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10⁻⁶ ε and 3.59 × 10⁻⁵ ε + 0.01887 T, respectively. Using the estimation of expanded uncertainty at the 95% confidence level with a coverage factor of k = 2.205, temperature and strain measurement uncertainties were evaluated as 2.60 °C and 32.05 με, respectively. For the first time, to our knowledge, we have demonstrated the feasibility of estimating the measurement uncertainty for simultaneous strain-temperature sensing with such a fiber grating sensor.
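    The expanded uncertainties quoted above follow the standard GUM recipe: combine independent standard uncertainty components in quadrature, then multiply by the coverage factor k. A minimal sketch (the component values below are hypothetical illustrations, not the paper's data):

    ```python
    import math

    def expanded_uncertainty(components, k=2.0):
        """Root-sum-of-squares combination of independent standard
        uncertainties, expanded by coverage factor k."""
        u_c = math.sqrt(sum(u * u for u in components))
        return k * u_c

    # Hypothetical components 0.5 and 1.0 (same unit), coverage factor k = 2
    print(expanded_uncertainty([0.5, 1.0], k=2.0))  # → 2.23606797749979
    ```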

  17. Use of Total Possibilistic Uncertainty as a Measure of Students' Modelling Capacities

    Science.gov (United States)

    Voskoglou, Michael Gr.

    2010-01-01

    We represent the main stages of the process of mathematical modelling as fuzzy sets in the set of the linguistic labels of negligible, low intermediate, high and complete success by students in each of these stages and we use the total possibilistic uncertainty as a measure of students' modelling capacities. A classroom experiment is also…

  18. A Monte Carlo approach for estimating measurement uncertainty using standard spreadsheet software.

    Science.gov (United States)

    Chew, Gina; Walczyk, Thomas

    2012-03-01

    Despite the importance of stating the measurement uncertainty in chemical analysis, the underlying concepts are still not widely applied by the broader scientific community. The Guide to the expression of uncertainty in measurement approves the use of both the partial derivative approach and the Monte Carlo approach. There are two limitations to the partial derivative approach. Firstly, it involves computing first-order partial derivatives of the output quantity with respect to each input quantity. This requires some mathematical skill and can be tedious if the mathematical model is complex. Secondly, it cannot accurately predict the probability distribution of the output quantity if the input quantities are not normally distributed. Knowledge of the probability distribution is essential to determine the coverage interval. The Monte Carlo approach performs random sampling from the probability distributions of the input quantities; hence, there is no need to compute first-order derivatives. In addition, it yields the probability density function of the output quantity as the end result, from which the coverage interval can be determined. Here we demonstrate how the Monte Carlo approach can easily be implemented to estimate measurement uncertainty using a standard spreadsheet program such as Microsoft Excel. Our aim is to provide the analytical community with a tool to estimate measurement uncertainty using software that is already widely available and so simple to apply that it can even be used by students with basic computer skills and minimal mathematical knowledge.
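    The spreadsheet recipe the abstract describes can be sketched in a few lines of stdlib Python; each trial draws the input quantities from their assumed distributions and evaluates the measurement equation, exactly as one would do column-wise in a spreadsheet. The measurand and all numbers here are hypothetical stand-ins:

    ```python
    import random
    import statistics

    random.seed(42)  # reproducible draws

    # Hypothetical measurand: density = mass / volume
    N = 100_000
    draws = []
    for _ in range(N):
        mass = random.gauss(25.00, 0.05)    # g,  standard uncertainty 0.05 g
        volume = random.gauss(10.00, 0.02)  # mL, standard uncertainty 0.02 mL
        draws.append(mass / volume)

    draws.sort()
    mean = statistics.fmean(draws)
    # 95 % coverage interval taken directly from the empirical percentiles
    lo, hi = draws[int(0.025 * N)], draws[int(0.975 * N)]
    print(f"{mean:.4f} g/mL, 95 % interval [{lo:.4f}, {hi:.4f}]")
    ```

    The percentile step is the key advantage over the partial derivative approach: the coverage interval comes straight from the simulated distribution, with no normality assumption on the output.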

  19. Aid instability as a measure of uncertainty and the positive impact of aid on growth

    NARCIS (Netherlands)

    Lensink, R; Morrissey, O

    2000-01-01

    This article contributes to the literature on aid and economic growth. We posit that uncertainty, measured as the instability of aid receipts, will influence the relationship between aid and investment, how recipient governments respond to aid, and will capture the fact that some countries are espec

  20. Determination of uncertainty of automated emission measuring systems under field conditions using a second method as a reference

    Energy Technology Data Exchange (ETDEWEB)

    Puustinen, H.; Aunela-Tapola, L.; Tolvanen, M.; Vahlman, T. [VTT Chemical Technology, Espoo (Finland). Environmental Technology; Kovanen, K. [VTT Building Technology, Espoo (Finland). Building Physics, Building Services and Fire Technology

    1999-09-01

    This report presents a procedure to determine the uncertainty of an automated emission measuring system (AMS) by comparing its results with those of a second, reference method (REF). The procedure determines the uncertainty of AMS by comparing the final concentration and emission results of AMS and REF; in this way, the data processing of the plant is included in the evaluation. The procedure assumes that the uncertainty of REF is known and has been determined in due form. The uncertainty determination is divided into two cases: varying and nearly constant concentration. The suggested procedure calculates the uncertainty of AMS at the 95% confidence level using a tabulated t-value. A minimum of three data pairs is required; however, a larger number of data pairs is desirable, since a small number results in a higher uncertainty for AMS. The uncertainty of AMS is valid only within the range of concentrations at which the tests were carried out. Statistical data processing shows that the uncertainty of the reference method has a significant effect on the uncertainty of AMS, which always comes out larger than the uncertainty of REF. This should be taken into account when testing whether AMS fulfils the given uncertainty limits. Practical details concerning parallel measurements at the plant, and the costs of the measurement campaign, have been taken into account when suggesting alternative ways of implementing the comparative measurements. (orig.) 6 refs.
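    A minimal sketch of such a paired comparison, using the standard two-sided 95% Student t-values for small numbers of pairs. The way the mean deviation and its scatter are combined below is one plausible choice, not necessarily the report's exact formula, and the data are hypothetical:

    ```python
    import math

    # Two-sided Student t-values at the 95 % confidence level (df -> t)
    T95 = {2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
           6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262}

    def ams_uncertainty(ams, ref):
        """Expanded 95 % uncertainty of AMS from paired AMS/REF results;
        combines mean deviation and its scatter in quadrature (illustrative)."""
        assert len(ams) == len(ref) >= 3, "at least three data pairs required"
        d = [a - r for a, r in zip(ams, ref)]
        n = len(d)
        mean_d = sum(d) / n
        s_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
        return T95[n - 1] * math.sqrt(s_d ** 2 + mean_d ** 2)

    # Four parallel measurements (hypothetical concentrations, mg/m3)
    print(round(ams_uncertainty([100, 102, 98, 101], [99, 100, 97, 100]), 2))  # → 4.28
    ```

    With few pairs the t-value is large (4.303 for three pairs), which is exactly why the report recommends collecting more data pairs.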

  1. Evaluating uncertainty estimates in hydrologic models: borrowing measures from the forecast verification community

    Directory of Open Access Journals (Sweden)

    K. J. Franz

    2011-11-01

    The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply them to evaluate and compare model ensembles produced by two different parameter uncertainty estimation methods: Generalized Likelihood Uncertainty Estimation (GLUE) and the Shuffled Complex Evolution Metropolis (SCEM) algorithm. Model ensembles are generated for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information beyond what is already available as part of traditional model validation methodology and considers the entire ensemble, or uncertainty range, in the approach.

  2. Overall uncertainty measurement for near infrared analysis of cryptotanshinone in tanshinone extract.

    Science.gov (United States)

    Xue, Zhong; Xu, Bing; Shi, Xinyuan; Yang, Chan; Cui, Xianglong; Luo, Gan; Qiao, Yanjiang

    2017-01-01

    This study presented a new strategy of overall uncertainty measurement for near infrared (NIR) quantitative analysis of cryptotanshinone in tanshinone extract powders. The overall uncertainty of NIR analysis, derived from validation data for precision, trueness and robustness, was fully investigated and discussed. Quality by design (QbD) elements, such as risk assessment and design of experiments (DOE), were utilized to organize the validation data. An "I × J × K" (series I, number of repetitions J and level of concentrations K) full factorial design was used to calculate the uncertainty from the precision and trueness data, and a 2^(7-4) Plackett-Burman matrix with four different influence factors resulting from the failure mode and effect analysis (FMEA) was adapted for the robustness study. The overall uncertainty profile was introduced as a graphical decision-making tool to evaluate the validity of the NIR method over the predefined concentration range. In comparison with T. Saffaj's method (Analyst, 2013, 138, 4677) for overall uncertainty assessment, the proposed approach gave almost the same results, demonstrating that the proposed method was reasonable and valid. Moreover, the proposed method can help identify critical factors that influence the NIR prediction performance, which could be used for further optimization of the NIR analytical procedures in routine use.

  3. Overall uncertainty measurement for near infrared analysis of cryptotanshinone in tanshinone extract

    Science.gov (United States)

    Xue, Zhong; Xu, Bing; Shi, Xinyuan; Yang, Chan; Cui, Xianglong; Luo, Gan; Qiao, Yanjiang

    2017-01-01

    This study presented a new strategy of overall uncertainty measurement for near infrared (NIR) quantitative analysis of cryptotanshinone in tanshinone extract powders. The overall uncertainty of NIR analysis, derived from validation data for precision, trueness and robustness, was fully investigated and discussed. Quality by design (QbD) elements, such as risk assessment and design of experiments (DOE), were utilized to organize the validation data. An "I × J × K" (series I, number of repetitions J and level of concentrations K) full factorial design was used to calculate the uncertainty from the precision and trueness data, and a 2^(7-4) Plackett-Burman matrix with four different influence factors resulting from the failure mode and effect analysis (FMEA) was adapted for the robustness study. The overall uncertainty profile was introduced as a graphical decision-making tool to evaluate the validity of the NIR method over the predefined concentration range. In comparison with T. Saffaj's method (Analyst, 2013, 138, 4677) for overall uncertainty assessment, the proposed approach gave almost the same results, demonstrating that the proposed method was reasonable and valid. Moreover, the proposed method can help identify critical factors that influence the NIR prediction performance, which could be used for further optimization of the NIR analytical procedures in routine use.

  4. Optimal entropic uncertainty relation for successive measurements in quantum information theory

    Indian Academy of Sciences (India)

    M D Srinivas

    2003-06-01

    We derive an optimal bound on the sum of entropic uncertainties of two or more observables when they are sequentially measured on the same ensemble of systems. This optimal bound is shown to be greater than or equal to the bounds derived in the literature for the sum of entropic uncertainties of two observables measured on distinct but identically prepared ensembles of systems. In the case of a two-dimensional Hilbert space, the optimal bound for successive measurements of two spin components is seen to be strictly greater than the optimal bound for the case when they are measured on distinct ensembles, except when the spin components are mutually parallel or perpendicular.
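    For context (standard background, not taken from the paper itself), the distinct-ensemble benchmark referred to here is the Maassen-Uffink entropic uncertainty relation:

    ```latex
    H(A) + H(B) \;\ge\; -2 \log_2 \max_{i,j} \left| \langle a_i | b_j \rangle \right|,
    ```

    where {|a_i>} and {|b_j>} are the eigenbases of the observables A and B; the successive-measurement bound derived in the paper is greater than or equal to this quantity.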

  5. LOWERING UNCERTAINTY IN CRUDE OIL MEASUREMENT BY SELECTING OPTIMIZED ENVELOPE COLOR OF A PIPELINE

    Directory of Open Access Journals (Sweden)

    Morteza Saadat

    2011-01-01

    Lowering the uncertainty in crude oil volume measurement is widely considered one of the main goals at an oil export terminal. The crude oil temperature at the metering station strongly affects the measured volume and can introduce large uncertainty at the metering point. As crude oil flows through an aboveground pipeline, it picks up solar radiation and heats up, which raises the oil temperature at the metering point and creates higher uncertainty. The amount of temperature rise depends on the color of the exterior surface paint. On Kharg Island, there is about 3 km between the oil storage tanks and the metering point; the oil flows through the pipeline under gravity, as the storage tanks are located 60 m higher than the metering point. In this study, an analytical model is developed for predicting the oil temperature at the pipeline exit (the metering point) based on the climate and geographical conditions of Kharg Island. The temperature at the metering point is calculated and the effects of envelope color are investigated. Further, the uncertainty in the measurement system due to temperature rise is studied.

  6. The small sample uncertainty aspect in relation to bullwhip effect measurement

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2009-01-01

    a conceptual phenomenon. This paper intends primarily to investigate why this might be so, and thereby to examine the various aspects, possibilities and obstacles that must be taken into account when considering the potential practical use and measurement of the bullwhip effect, in order to actually get the supply chain under control. The paper puts special emphasis on the unavoidable small-sample uncertainty aspects relating to the measurement or estimation of the bullwhip effect.

  7. Survey of radiofrequency radiation levels around GSM base stations and evaluation of measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Vulević Branislav D.

    2011-01-01

    This paper is a summary of broadband measurements of radiofrequency radiation levels around GSM base stations in the vicinity of residential areas in Belgrade and 12 other cities in Serbia. It will be useful for determining the non-ionizing radiation exposure levels of the general public in the future. The paper also presents basic information on the evaluation of measurement uncertainty.

  8. Application of the Nordtest method for "real-time" uncertainty estimation of on-line field measurement.

    Science.gov (United States)

    Näykki, Teemu; Virtanen, Atte; Kaukonen, Lari; Magnusson, Bertil; Väisänen, Tero; Leito, Ivo

    2015-10-01

    Field sensor measurements are becoming more common for environmental monitoring. Solutions for enhancing reliability, i.e. knowledge of the measurement uncertainty of field measurements, are urgently needed. Real-time estimations of measurement uncertainty for field measurement have not previously been published, and in this paper, a novel approach to the automated turbidity measuring system with an application for "real-time" uncertainty estimation is outlined based on the Nordtest handbook's measurement uncertainty estimation principles. The term real-time is written in quotation marks, since the calculation of the uncertainty is carried out using a set of past measurement results. There are two main requirements for the estimation of real-time measurement uncertainty of online field measurement described in this paper: (1) setting up an automated measuring system that can be (preferably remotely) controlled which measures the samples (water to be investigated as well as synthetic control samples) the way the user has programmed it and stores the results in a database, (2) setting up automated data processing (software) where the measurement uncertainty is calculated from the data produced by the automated measuring system. When control samples with a known value or concentration are measured regularly, any instrumental drift can be detected. An additional benefit is that small drift can be taken into account (in real-time) as a bias value in the measurement uncertainty calculation, and if the drift is high, the measurement results of the control samples can be used for real-time recalibration of the measuring device. The procedure described in this paper is not restricted to turbidity measurements, but it will enable measurement uncertainty estimation for any kind of automated measuring system that performs sequential measurements of routine samples and control samples/reference materials in a similar way as described in this paper.
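    The core of the Nordtest combination is u_c = sqrt(u(Rw)² + u(bias)²), with the bias component re-estimated "in real time" from a window of recent control-sample results. A sketch under stated assumptions: the function name, the window, and all numbers below are illustrative, not the authors' implementation, and the bias term follows one common Nordtest-style combination:

    ```python
    import math

    def nordtest_uncertainty(controls, cref, u_cref, u_rw):
        """Combined standard uncertainty u_c = sqrt(u(Rw)^2 + u(bias)^2).
        controls: recent results for a control sample with assigned value cref
        u_cref:   standard uncertainty of the control sample's assigned value
        u_rw:     within-lab reproducibility (e.g. from control charts)"""
        n = len(controls)
        mean = sum(controls) / n
        bias = mean - cref
        s = math.sqrt(sum((x - mean) ** 2 for x in controls) / (n - 1))
        u_bias = math.sqrt(bias ** 2 + (s / math.sqrt(n)) ** 2 + u_cref ** 2)
        return math.sqrt(u_rw ** 2 + u_bias ** 2)

    # Hypothetical turbidity control results (FNU) against an assigned value of 10.0
    u = nordtest_uncertainty([10.2, 10.4, 10.1, 10.3], cref=10.0, u_cref=0.05, u_rw=0.2)
    print(round(u, 3))  # → 0.33
    ```

    Re-running this over a sliding window of past control results is what makes the estimate track instrumental drift, as the abstract describes.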

  9. Adaptive method for quantifying uncertainty in discharge measurements using velocity-area method.

    Science.gov (United States)

    Despax, Aurélien; Favre, Anne-Catherine; Belleville, Arnaud

    2015-04-01

    Streamflow information provided by hydrometric services such as EDF-DTG allows real-time monitoring of rivers, streamflow forecasting, major hydrological studies and engineering design. In open channels, the traditional approach to measuring flow uses a rating curve, an indirect method that estimates river discharge from the water level and individual discharge measurements. A large proportion of these discharge measurements are performed using the velocity-area method, which consists of integrating flow velocities and depths across the cross-section [1]. The velocity field is estimated by choosing a number m of verticals, distributed across the river, along which the vertical velocity profile is sampled by a current meter at ni different depths. Several sources of uncertainty are related to the measurement process. To date, the framework for assessing uncertainty in velocity-area discharge measurements is the method presented in the ISO 748 standard [2], which follows the GUM [3] approach. The equation for the combined uncertainty in measured discharge u(Q), at the 68% level of confidence, proposed by the ISO 748 standard is expressed as: u²(Q) = u'²_m + u²_s + [Σ_i q_i² (u²(B_i) + u²(D_i) + u_p²(V_i) + (1/n_i)(u_c²(V_i) + u_exp²(V_i)))] / (Σ_i q_i)², where B_i, D_i and V_i are the width, depth and depth-averaged velocity associated with vertical i.
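    As background, the velocity-area integration itself can be sketched with the mid-section variant of the method, where each vertical represents a panel extending halfway to its neighbours. The station data below are hypothetical:

    ```python
    def midsection_discharge(x, depth, vel):
        """Velocity-area (mid-section) method: each vertical i represents a
        panel extending halfway to its neighbouring verticals."""
        n = len(x)
        q = 0.0
        for i in range(n):
            left = x[i] if i == 0 else 0.5 * (x[i - 1] + x[i])
            right = x[i] if i == n - 1 else 0.5 * (x[i] + x[i + 1])
            q += vel[i] * depth[i] * (right - left)
        return q

    # Hypothetical gauging: 6 verticals across a 10 m channel with uniform flow
    x = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]  # distance from bank (m)
    depth = [2.0] * 6                    # depth at each vertical (m)
    vel = [0.5] * 6                      # depth-averaged velocity (m/s)
    print(midsection_discharge(x, depth, vel))  # → 10.0 (= 0.5 m/s x 2 m x 10 m)
    ```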

  10. Uncertainty Quantification and Comparison of Weld Residual Stress Measurements and Predictions.

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.

  11. Analysis of the Uncertainty in Wind Measurements from the Atmospheric Radiation Measurement Doppler Lidar during XPIA: Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Newsom, Rob [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-03-01

    In March and April of 2015, the ARM Doppler lidar that was formerly operated at the Tropical Western Pacific site in Darwin, Australia (S/N 0710-08) was deployed to the Boulder Atmospheric Observatory (BAO) for the eXperimental Planetary boundary-layer Instrument Assessment (XPIA) field campaign. The goal of the XPIA field campaign was to investigate methods of using multiple Doppler lidars to obtain high-resolution three-dimensional measurements of winds and turbulence in the atmospheric boundary layer, and to characterize the uncertainties in these measurements. The ARM Doppler lidar was one of many Doppler lidar systems that participated in this study. During XPIA the 300-m tower at the BAO site was instrumented with well-calibrated sonic anemometers at six levels. These sonic anemometers provided highly accurate reference measurements against which the lidars could be compared. Thus, the deployment of the ARM Doppler lidar during XPIA offered a rare opportunity for the ARM program to characterize the uncertainties in their lidar wind measurements. Results of the lidar-tower comparison indicate that the lidar wind speed measurements are essentially unbiased (~1 cm s⁻¹), with a random error of approximately 50 cm s⁻¹. Two methods of uncertainty estimation were tested. The first method was found to produce uncertainties that were too low. The second method produced estimates that were more accurate and better indicators of data quality. As of December 2015, the first method is being used by the ARM Doppler lidar wind value-added product (VAP). One outcome of this work will be to update this VAP to use the second method for uncertainty estimation.
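    The bias and random error quoted in such lidar-tower comparisons are straightforward statistics of the collocated differences; a sketch with made-up wind speeds (not campaign data):

    ```python
    import math

    def bias_and_random_error(lidar, sonic):
        """Bias = mean difference; random error = standard deviation of the
        differences between collocated lidar and sonic wind speeds."""
        d = [l - s for l, s in zip(lidar, sonic)]
        n = len(d)
        bias = sum(d) / n
        rand_err = math.sqrt(sum((x - bias) ** 2 for x in d) / (n - 1))
        return bias, rand_err

    # Hypothetical collocated 10-min mean wind speeds (m/s)
    lidar = [5.1, 4.9, 5.3, 4.7, 5.0]
    sonic = [5.0, 5.0, 5.0, 5.0, 5.0]
    bias, rand_err = bias_and_random_error(lidar, sonic)
    ```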

  12. Measurement models for passive dosemeters in view of uncertainty evaluation using the Monte Carlo method.

    Science.gov (United States)

    van Dijk, J W E

    2014-12-01

    Two measurement models for passive dosemeters, such as thermoluminescent, optically stimulated luminescence, radio-photoluminescence, photographic film or track-etch dosemeters, are discussed. The first model considers the dose evaluation with the reading equipment as a single measurement: the one-stage model. The second model considers the build-up of a latent signal or latent image in the detector during exposure and the evaluation using a reader system as two separate measurements: the two-stage model. It is argued that the two-stage model better reflects the cause-and-effect relations and the course of events in the daily practice of a routine dosimetry service. The one-stage model is non-linear in crucial input quantities, which can give rise to erroneous behaviour of an uncertainty evaluation based on the law of propagation of uncertainty. Input quantities with asymmetric probability distributions propagate through the one-stage model in a way that is not physically meaningful.

  13. Uncertainty Quantification for Nuclear Density Functional Theory and Information Content of New Measurements

    CERN Document Server

    McDonnell, J D; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-01-01

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models; to estimate model errors and thereby improve predictive capability; to extrapolate beyond the regions reached by experiment; and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, w...

  14. Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements

    Directory of Open Access Journals (Sweden)

    R. Raj

    2015-08-01

    Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is increasingly used to validate process-based simulators and remote sensing-derived estimates of simulated GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps, which can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half-hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, The Netherlands. The NRH model included the variables that influence GPP, in particular radiation and temperature. In addition, the NRH model provided a robust empirical relationship between radiation and GPP by including the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter. Markov chain Monte Carlo (MCMC) simulation was used to update the prior distribution of each NRH parameter. This allowed us to estimate the uncertainty in the separated GPP at half-hourly time steps, yielding the posterior distribution of GPP at each half hour and allowing the quantification of uncertainty. The time series of posterior distributions thus obtained allowed us to estimate the uncertainty at daily time steps. We compared informative with non-informative prior distributions of the NRH parameters. The results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
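    As an illustration of the general approach (not the authors' code), one common form of the NRH light response can be sampled with a bare-bones random-walk Metropolis algorithm; here only the initial slope alpha is treated as unknown, the prior is flat on alpha > 0, and every numerical value is made up:

    ```python
    import math
    import random

    def nrh_gpp(par, alpha, pmax, theta):
        """One common form of the non-rectangular hyperbola light response."""
        a = alpha * par + pmax
        return (a - math.sqrt(a * a - 4.0 * theta * alpha * par * pmax)) / (2.0 * theta)

    # Synthetic half-hourly "observations" (all values hypothetical)
    random.seed(1)
    TRUE_ALPHA, PMAX, THETA, SIGMA = 0.05, 20.0, 0.7, 0.5
    par = [50.0 * k for k in range(1, 21)]  # PAR levels
    obs = [nrh_gpp(p, TRUE_ALPHA, PMAX, THETA) + random.gauss(0.0, SIGMA) for p in par]

    def log_lik(alpha):
        if alpha <= 0.0:
            return float("-inf")
        return -sum((o - nrh_gpp(p, alpha, PMAX, THETA)) ** 2
                    for p, o in zip(par, obs)) / (2.0 * SIGMA ** 2)

    # Random-walk Metropolis on alpha alone
    chain, cur, ll_cur = [], 0.03, log_lik(0.03)
    for _ in range(5000):
        prop = cur + random.gauss(0.0, 0.005)
        ll_prop = log_lik(prop)
        if math.log(random.random()) < ll_prop - ll_cur:
            cur, ll_cur = prop, ll_prop
        chain.append(cur)

    posterior = chain[1000:]  # discard burn-in
    mean_alpha = sum(posterior) / len(posterior)
    ```

    The retained samples approximate the posterior of alpha, from which quantiles give the uncertainty band on the separated GPP; the full study samples all NRH parameters jointly rather than alpha alone.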

  15. Anti-Measurement Matrix Uncertainty for Robust Sparse Signal Recovery with The Mixed l2 and l1 Norm Constraint

    CERN Document Server

    Liu, Yipeng

    2010-01-01

    Compressive sensing (CS) is a technique for estimating a sparse signal from random measurements and the measurement matrix. Traditional sparse signal recovery methods degrade seriously under measurement matrix uncertainty (MMU). Here the MMU is modeled as a bounded additive error. An anti-uncertainty constraint in the form of a mixed l2 and l1 norm is deduced from the sparse signal model with MMU. We then combine the sparse constraint with the anti-uncertainty constraint to obtain an anti-uncertainty sparse signal recovery operator. Numerical simulations demonstrate that the proposed operator reconstructs the signal better under MMU than traditional methods.
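
    For background, standard l1-regularized recovery (without the paper's anti-uncertainty term) can be sketched with iterative soft thresholding (ISTA); the problem sizes and regularization weight below are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k = 60, 128, 5                          # measurements, signal length, sparsity

    A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
    support = rng.choice(n, size=k, replace=False)
    x_true = np.zeros(n)
    x_true[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
    y = A @ x_true                                # noiseless measurements

    # ISTA: minimize 0.5*||A x - y||_2^2 + lam*||x||_1
    lam = 0.01
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(3000):
        z = x - step * (A.T @ (A @ x - y))        # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    ```

    The paper's operator additionally penalizes a mixed l2/l1 term to absorb the bounded error in A itself.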

  16. Research on Correction Methods of the Historical Cost Measurement Attribute

    Institute of Scientific and Technical Information of China (English)

    刘群; 廖正方

    2013-01-01

    Accounting is a process of recognition, measurement, recording, and reporting. Over its long development, the techniques of accounting recognition, recording, and reporting have matured, while accounting measurement continues to face challenges arising from the uncertainty of the measurement environment. The challenges accounting will face in the future are, in essence, challenges to its measurement attributes. China's current accounting standards introduce multiple measurement attributes on the basis of the historical cost measurement attribute and add asset impairment accounting. The fundamental purpose is to correct the historical cost information of the measured objects, so as to improve the relevance of accounting information and enhance its social value as a public good.

  17. Assessment of the uncertainty budget for the amperometric measurement of dissolved oxygen.

    Science.gov (United States)

    Fisicaro, Paola; Adriaens, Annemie; Ferrara, Enzo; Prenesti, Enrico

    2007-07-30

    This work aimed at identifying the main sources of uncertainty in the measurement of dissolved oxygen concentration in aqueous solutions. The experimental apparatus consists of an amperometric cell based on the Clark-type sensor. The corresponding uncertainty budget was assessed, this being a fundamental step in the validation of a measurement method. The principle of the measurement, as well as the procedure for the set-up and characterisation of the cell, are described. The measurement equation was defined as a combination of Faraday's and Fick's laws, and a method was worked out for the empirical determination of the diffusivity parameter. In this connection, the oxygen solutions were standardised by Winkler titration, as suggested by ISO 5813 and 5814. With this approach we aimed at contributing to the development of a potential primary method of measurement. A discussion of all the contributions to the overall uncertainty is reported, allowing operators to locate the largest ones and plan specific improvements.

  18. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    Energy Technology Data Exchange (ETDEWEB)

    McDonnell, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schunck, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Higdon, D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sarich, J. [Argonne National Lab. (ANL), Argonne, IL (United States); Wild, S. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Nazarewicz, W. [Michigan State Univ., East Lansing, MI (United States); Oak Ridge National Lab., Oak Ridge, TN (United States); Univ. of Warsaw, Warsaw (Poland)

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  19. Estimation of uncertainty bounds for individual particle image velocimetry measurements from cross-correlation peak ratio

    Science.gov (United States)

    Charonko, John J.; Vlachos, Pavlos P.

    2013-06-01

    Numerous studies have established firmly that particle image velocimetry (PIV) is a robust method for non-invasive, quantitative measurements of fluid velocity, and that when carefully conducted, typical measurements can accurately detect displacements in digital images with a resolution well below a single pixel (in some cases well below a hundredth of a pixel). However, to date, these estimates have only been able to provide guidance on the expected error for an average measurement under specific image quality and flow conditions. This paper demonstrates a new method for estimating the uncertainty bounds to within a given confidence interval for a specific, individual measurement. Here, cross-correlation peak ratio, the ratio of primary to secondary peak height, is shown to correlate strongly with the range of observed error values for a given measurement, regardless of flow condition or image quality. This relationship is significantly stronger for phase-only generalized cross-correlation PIV processing, while the standard correlation approach showed weaker performance. Using an analytical model of the relationship derived from synthetic data sets, the uncertainty bounds at a 95% confidence interval are then computed for several artificial and experimental flow fields, and the resulting errors are shown to match closely to the predicted uncertainties. While this method stops short of being able to predict the true error for a given measurement, knowledge of the uncertainty level for a PIV experiment should provide great benefits when applying the results of PIV analysis to engineering design studies and computational fluid dynamics validation efforts. Moreover, this approach is exceptionally simple to implement and requires negligible additional computational cost.
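
    The primary-to-secondary correlation peak ratio used in this record as an uncertainty surrogate can be computed from the cross-correlation plane; the sketch below uses standard FFT-based cross-correlation of two synthetically shifted images (the image size, shift, and exclusion zone are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    img_a = rng.random((64, 64))                    # first "exposure"
    shift = (3, 5)                                  # known displacement in pixels
    img_b = np.roll(img_a, shift, axis=(0, 1))      # second "exposure"

    # FFT-based circular cross-correlation (standard, not the phase-only variant)
    corr = np.fft.ifft2(np.fft.fft2(img_a).conj() * np.fft.fft2(img_b)).real

    # Primary peak: its location gives the displacement estimate
    peak1_idx = np.unravel_index(np.argmax(corr), corr.shape)

    # Secondary peak: tallest value outside a small exclusion zone around the primary
    mask = np.ones_like(corr, dtype=bool)
    y0, x0 = peak1_idx
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            mask[(y0 + dy) % 64, (x0 + dx) % 64] = False
    peak_ratio = corr[peak1_idx] / corr[mask].max()
    ```

    In the paper's framework, a larger peak ratio maps to tighter uncertainty bounds on the individual displacement estimate.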

  20. Integrating measuring uncertainty of tactile and optical coordinate measuring machines in the process capability assessment of micro injection moulding

    DEFF Research Database (Denmark)

    Tosello, Guido; Hansen, Hans Nørgaard; Gasparin, Stefania

    2010-01-01

    Process capability of micro injection moulding was investigated in this paper by calculating the Cp and Cpk statistics. Uncertainty of both optical and tactile measuring systems employed in the quality control of micro injection moulded products was assessed and compared with the specified tolerances. Limits in terms of manufacturing process capability, as well as of the suitability of such measuring systems for micro production inspection, were quantitatively determined.

  1. EMC Emission Measurement Uncertainty

    Institute of Scientific and Technical Information of China (English)

    孙玮

    2013-01-01

    This paper analyzes measurement uncertainty in EMC emission measurements. Taking the evaluation of the uncertainty of the mains-terminal disturbance voltage as an example, it introduces the purpose of evaluating emission measurement uncertainty, the types of uncertainty sources, the method for preparing the uncertainty report, and the use of Ucispr and the measurement uncertainty in the compliance criterion.

  2. Attributed graph distance measure for automatic detection of attention deficit hyperactive disordered subjects.

    Science.gov (United States)

    Dey, Soumyabrata; Rao, A Ravishankar; Shah, Mubarak

    2014-01-01

    Attention Deficit Hyperactive Disorder (ADHD) is getting a lot of attention recently for two reasons. First, it is one of the most commonly found childhood disorders and second, the root cause of the problem is still unknown. Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for the analysis of ADHD, which is the focus of our current research. In this paper we propose a novel framework for the automatic classification of ADHD subjects using their resting state fMRI (rs-fMRI) data of the brain. We construct brain functional connectivity networks for all the subjects. The nodes of the network are constructed with clusters of highly active voxels, and edges between any pair of nodes represent the correlations between their average fMRI time series. The activity level of the voxels is measured based on the average power of their corresponding fMRI time series. For each node of the networks, a local descriptor comprising a set of attributes of the node is computed. Next, the Multi-Dimensional Scaling (MDS) technique is used to project all the subjects from the unknown graph-space to a low-dimensional space based on their inter-graph distance measures. Finally, the Support Vector Machine (SVM) classifier is used on the low-dimensional projected space for automatic classification of the ADHD subjects. Exhaustive experimental validation of the proposed method is performed using the data set released for the ADHD-200 competition. Our method shows promise as we achieve impressive classification accuracies on the training (70.49%) and test data sets (73.55%). Our results reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects.

  3. Attributed graph distance measure for automatic detection of Attention Deficit Hyperactive Disordered subjects

    Directory of Open Access Journals (Sweden)

    Soumyabrata eDey

    2014-06-01

    Full Text Available Attention Deficit Hyperactive Disorder (ADHD) is getting a lot of attention recently for two reasons. First, it is one of the most commonly found childhood disorders and second, the root cause of the problem is still unknown. Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for the analysis of ADHD, which is the focus of our current research. In this paper we propose a novel framework for the automatic classification of ADHD subjects using their resting state fMRI (rs-fMRI) data of the brain. We construct brain functional connectivity networks for all the subjects. The nodes of the network are constructed with clusters of highly active voxels, and edges between any pair of nodes represent the correlations between their average fMRI time series. The activity level of the voxels is measured based on the average power of their corresponding fMRI time series. For each node of the networks, a local descriptor comprising a set of attributes of the node is computed. Next, the Multi-Dimensional Scaling (MDS) technique is used to project all the subjects from the unknown graph-space to a low-dimensional space based on their inter-graph distance measures. Finally, the Support Vector Machine (SVM) classifier is used on the low-dimensional projected space for automatic classification of the ADHD subjects. Exhaustive experimental validation of the proposed method is performed using the data set released for the ADHD-200 competition. Our method shows promise as we achieve impressive classification accuracies on the training (70.49%) and test data sets (73.55%). Our results reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects.
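
    The projection step described in this record (inter-graph distances → low-dimensional embedding → classifier) can be sketched with classical MDS; the toy distance matrix and the nearest-centroid classifier below are illustrative stand-ins (the paper uses metric MDS and an SVM):

    ```python
    import numpy as np

    def classical_mds(d, n_components=2):
        """Embed points from a pairwise distance matrix via classical MDS."""
        n = d.shape[0]
        j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        b = -0.5 * j @ (d ** 2) @ j                  # double-centered Gram matrix
        w, v = np.linalg.eigh(b)
        idx = np.argsort(w)[::-1][:n_components]     # keep largest eigenvalues
        return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

    # Toy "inter-graph" distances: two well-separated groups of subjects
    rng = np.random.default_rng(2)
    pts = np.vstack([rng.normal(0.0, 0.3, (10, 4)), rng.normal(3.0, 0.3, (10, 4))])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    labels = np.array([0] * 10 + [1] * 10)

    emb = classical_mds(d)

    # Nearest-centroid classification in the embedded space
    centroids = np.array([emb[labels == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(np.linalg.norm(emb[:, None, :] - centroids[None, :, :], axis=-1),
                     axis=1)
    ```

    With real data, the distance matrix would come from the attributed-graph distance measure and the classifier would be an SVM trained on the embedded coordinates.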

  4. Development and application of objective uncertainty measures for nuclear power plant transient analysis [Dissertation 3897]

    Energy Technology Data Exchange (ETDEWEB)

    Vinai, P

    2007-10-15

    For the development, design and licensing of a nuclear power plant (NPP), a sound safety analysis is necessary to study the diverse physical phenomena involved in system behaviour under operational and transient conditions. Such studies are based on detailed computer simulations. With the progress achieved in computer technology and the greater availability of experimental and plant data, the use of best-estimate codes for safety evaluations has gained increasing acceptance. The application of best-estimate safety analysis has raised new problems that need to be addressed: it has become more crucial to assess how reliable code predictions are, especially when they are compared against safety limits that must not be crossed. It becomes necessary to identify and quantify the various possible sources of uncertainty that affect the reliability of the results. Currently, such uncertainty evaluations are generally based on expert opinion. In the present research, a novel methodology based on a non-parametric statistical approach has been developed for objective quantification of best-estimate code uncertainties related to the physical models used in the code. The basis is an evaluation of the accuracy of a given physical model achieved by comparing its predictions with experimental data from an appropriate set of separate-effect tests. The differences between measurements and predictions can be considered stochastically distributed, and thus a statistical approach can be employed. The first step was the development of a procedure for investigating the dependence of a given physical model's accuracy on the experimental conditions. Each separate-effect test effectively provides a random sample of discrepancies between measurements and predictions, corresponding to a location in the state space defined by a certain number of independent system variables. As a consequence, the samples of 'errors', achieved from analysis of the entire

  5. Study of laser megajoule calorimeter's thermal behaviour for energy measurement uncertainty optimisation.

    Science.gov (United States)

    Crespy, Charles; Villate, Denis; Lobios, Olivier

    2013-01-01

    For the laser megajoule (LMJ) facility, an accurate procedure for laser pulse energy measurement is a crucial requirement. In this study, the influence of the measurement procedure on LMJ calorimeter uncertainty is investigated experimentally and numerically. To this end, a 3D thermal model is developed and two experimental techniques are implemented. The metrological characteristics of both techniques are presented. As a first step, the model is validated by comparing numerical and experimental results. Then, the influence on the calorimeter response of a large number of parameters considered as likely uncertainty sources is investigated: wavelength, pulse duration, ambient temperature, laser beam diameter, etc. The post-processing procedure is also examined. The paper provides some of the parameters required to produce a robust and efficient calibration procedure.

  6. Quantifying the Contribution of Post-Processing in Computed Tomography Measurement Uncertainty

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; Thompson, Mary Kathryn; Carli, Lorenzo;

    2016-01-01

    This paper evaluates and quantifies the repeatability of post-processing settings, such as surface determination, data fitting, and the definition of the datum system, on the uncertainties of Computed Tomography (CT) measurements. The influence of post-processing contributions was determined by calculating the standard deviation of 10 repeated measurement evaluations on the same data set. The evaluations were performed on an industrial assembly. Each evaluation includes several dimensional and geometrical measurands that were expected to have different responses to the various post-processing settings. It was found that the definition of the datum system had the largest impact on the uncertainty, with a standard deviation of a few microns. The surface determination and data fitting had smaller contributions, with sub-micron repeatability.

  7. Bayesian Mass Estimates of the Milky Way: Including measurement uncertainties with hierarchical Bayes

    CERN Document Server

    Eadie, Gwendolyn; Harris, William

    2016-01-01

    We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie, Harris, & Widrow (2015) and Eadie & Harris (2016) and builds upon the preliminary reports by Eadie et al (2015a,c). The method uses a distribution function $f(\\mathcal{E},L)$ to model the galaxy and kinematic data from satellite objects such as globular clusters to trace the Galaxy's gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie & Harris (2016), and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and in...

  8. Uncertainty of angular displacement measurement with a MEMS gyroscope integrated in a smartphone

    Science.gov (United States)

    de Campos Porath, Maurício; Dolci, Ricardo

    2015-10-01

    Low-cost inertial sensors have recently gained popularity and are now widely used in electronic devices such as smartphones and tablets. In this paper we present the results of a set of experiments aiming to assess the angular displacement measurement errors of a gyroscope integrated in a smartphone of a recent model. The goal is to verify whether these sensors could substitute dedicated electronic inclinometers for the measurement of angular displacement. We estimated a maximum error of 0.3° (sum of expanded uncertainty and maximum absolute bias) for the roll and pitch axes, for a measurement time without referencing up to 1 h.
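
    As a back-of-the-envelope illustration of how a constant rate bias propagates into the angular displacement obtained by integrating gyroscope output (all values below are hypothetical, not those of the tested smartphone):

    ```python
    import numpy as np

    fs = 100.0                    # sampling rate, Hz (assumed)
    t_total = 60.0                # integration time without referencing, s
    bias = 0.002                  # constant rate bias, deg/s (hypothetical)
    true_rate = 1.0               # constant true angular rate, deg/s

    n = int(fs * t_total)
    rate_meas = np.full(n, true_rate + bias)   # noise-free except for the bias
    angle_est = np.cumsum(rate_meas) / fs      # rectangular integration of the rate
    angle_true = true_rate * t_total

    drift = angle_est[-1] - angle_true         # accumulated bias error = bias * t_total
    ```

    A constant bias thus grows linearly with integration time, which is why the paper quotes a maximum error for a bounded measurement time (up to 1 h) without referencing.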

  9. A revised uncertainty budget for measuring the Boltzmann constant using the Doppler Broadening Technique on ammonia

    CERN Document Server

    Lemarchand, Cyril; Sow, Papa Lat Tabara; Triki, Meriam; Tokunaga, Sean K; Briaudeau, Stephan; Chardonnet, Christian; Darquié, Benoît; Daussy, Christophe

    2013-01-01

    We report on our on-going effort to measure the Boltzmann constant, kB, using the Doppler Broadening Technique. The main systematic effects affecting the measurement are discussed. A revised error budget is presented in which the global uncertainty on systematic effects is reduced to 2.3 ppm. This corresponds to a reduction of more than one order of magnitude compared to our previous Boltzmann constant measurement. Means to reach a determination of kB at the part per million accuracy level are outlined.
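
    The principle behind this measurement, that the width of a Doppler-broadened absorption line is set by kB·T, can be illustrated with the standard Gaussian 1/e half-width relation; the line frequency and conditions below are illustrative, not the experiment's:

    ```python
    import math

    C = 299792458.0          # speed of light, m/s
    KB_TRUE = 1.380649e-23   # CODATA kB, J/K (used only to synthesize a "measured" width)

    m = 17.03e-3 / 6.02214076e23   # mass of an NH3 molecule, kg
    T = 273.15                      # gas temperature, K
    nu0 = 28.95e12                  # example ammonia line frequency, Hz (illustrative)

    # Gaussian 1/e half-width of the Doppler profile: dnu = (nu0/c) * sqrt(2 kB T / m)
    delta_nu = (nu0 / C) * math.sqrt(2.0 * KB_TRUE * T / m)

    # Inverting the same relation recovers kB from the measured width
    kb_est = m * C**2 * (delta_nu / nu0) ** 2 / (2.0 * T)
    ```

    In the real experiment the width is extracted by fitting the line shape, and the systematic effects discussed in the record dominate the error budget.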

  10. Measurement and interpolation uncertainties in rainfall maps from cellular communication networks

    Science.gov (United States)

    Rios Gaona, M. F.; Overeem, A.; Leijnse, H.; Uijlenhoet, R.

    2015-08-01

    Accurate measurements of rainfall are important in many hydrological and meteorological applications, for instance, flash-flood early-warning systems, hydraulic structures design, irrigation, weather forecasting, and climate modelling. Whenever possible, link networks measure and store the received power of the electromagnetic signal at regular intervals. The decrease in power can be converted to rainfall intensity, and is largely due to the attenuation by raindrops along the link paths. Such an alternative technique fulfils the continuous effort to obtain measurements of rainfall in time and space at higher resolutions, especially in places where traditional rain gauge networks are scarce or poorly maintained. Rainfall maps from microwave link networks have recently been introduced at country-wide scales. Despite their potential in rainfall estimation at high spatiotemporal resolutions, the uncertainties present in rainfall maps from link networks are not yet fully comprehended. The aim of this work is to identify and quantify the sources of uncertainty present in interpolated rainfall maps from link rainfall depths. In order to disentangle these sources of uncertainty, we classified them into two categories: (1) those associated with the individual microwave link measurements, i.e. the errors involved in link rainfall retrievals, such as wet antenna attenuation, sampling interval of measurements, wet/dry period classification, dry weather baseline attenuation, quantization of the received power, drop size distribution (DSD), and multi-path propagation; and (2) those associated with mapping, i.e. the combined effect of the interpolation methodology and the spatial density of link measurements. We computed ~ 3500 rainfall maps from real and simulated link rainfall depths for 12 days for the land surface of the Netherlands. Simulated link rainfall depths refer to path-averaged rainfall depths obtained from radar data. 
The ~ 3500 real and simulated rainfall maps were
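
    The power-to-rainfall conversion mentioned in this record is commonly modelled with a power law between specific attenuation k (dB/km) and rain rate R (mm/h); the coefficients below are illustrative round numbers, not those used in the study:

    ```python
    def rain_rate_from_attenuation(p_loss_db, path_km, a=0.33, b=1.1):
        """Convert a rain-induced power loss over a microwave link to a
        path-averaged rain rate via the power law k = a * R**b.

        p_loss_db : rain-induced attenuation over the whole path (dB)
        path_km   : link path length (km)
        a, b      : power-law coefficients (frequency and polarization
                    dependent; the values here are illustrative)
        """
        k = p_loss_db / path_km          # specific attenuation, dB/km
        return (k / a) ** (1.0 / b)      # rain rate R, mm/h

    # A 10 dB rain-induced loss over a 5 km link
    r = rain_rate_from_attenuation(10.0, 5.0)
    ```

    The record's first category of uncertainties (wet antenna attenuation, baseline estimation, DSD variability, etc.) enters through p_loss_db and the coefficients a and b.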

  11. Representation and propagation of soil data uncertainties: II - Numeric attributes

    Directory of Open Access Journals (Sweden)

    S. Bönisch

    2004-02-01

    Full Text Available The objectives of this study were to use indicator kriging to spatialize soil properties expressed by numeric attributes, to generate a representation accompanied by a spatial measure of uncertainty, and to model uncertainty propagation with fuzzy map-algebra procedures. The attributes studied were exchangeable potassium (K) and aluminum (Al) contents, sum of bases (SB), cation exchange capacity (CEC), base saturation (V), and total sand content (TST), extracted from 222 pedologic profiles and 219 extra samples located in Santa Catarina State, Brazil. When the attributes were expressed in fertility classes, the uncertainty of Al, SB, and V increased while that of K and CEC decreased, for confidence intervals of 95% probability. A larger number of numeric data for K, SB, and V led to a larger uncertainty in the spatial inference, while a larger number of numeric data for TST and CEC decreased the degree of uncertainty. The uncertainty decreased when different numeric representations were integrated.
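
    Kriging, the interpolation family used in this record, can be sketched in its simple form (known mean, exponential covariance); indicator kriging applies the same machinery to 0/1-transformed data. Everything below is a generic illustration, not the study's configuration:

    ```python
    import numpy as np

    def simple_kriging(xy, z, xy0, mean, cov_range=2.0):
        """Simple kriging with an exponential covariance C(h) = exp(-h/range)."""
        def cov(p, q):
            h = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
            return np.exp(-h / cov_range)

        c = cov(xy, xy)                  # data-to-data covariances
        c0 = cov(xy, xy0)                # data-to-target covariances
        w = np.linalg.solve(c, c0)       # kriging weights, one column per target
        pred = mean + w.T @ (z - mean)   # best linear unbiased prediction
        var = 1.0 - np.sum(w * c0, axis=0)   # kriging variance: C(0) - w'c0
        return pred, var

    xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    z = np.array([1.0, 0.0, 0.0, 1.0])   # e.g. indicator-transformed observations
    pred, var = simple_kriging(xy, z, np.array([[0.5, 0.5], [0.0, 0.0]]), mean=0.5)
    ```

    Note that at a data location the prediction reproduces the observation exactly and the kriging variance vanishes, which is the property that makes the kriging variance usable as the record's spatial measure of uncertainty away from the samples.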

  12. Routine internal- and external-quality control data in clinical laboratories for estimating measurement and diagnostic uncertainty using GUM principles.

    Science.gov (United States)

    Magnusson, Bertil; Ossowicki, Haakan; Rienitz, Olaf; Theodorsson, Elvar

    2012-05-01

    Healthcare laboratories are increasingly joining into larger laboratory organizations encompassing several physical laboratories. This creates important new opportunities for re-defining the concept of a 'laboratory' to encompass all laboratories and measurement methods measuring the same measurand for a population of patients. In order to make measurement results comparable, bias should be minimized or eliminated and measurement uncertainty properly evaluated for all methods used for a particular patient population. The measurement as well as the diagnostic uncertainty can be evaluated from internal and external quality control results using GUM principles. In this paper the uncertainty evaluations are described in detail using only two main components, within-laboratory reproducibility and the uncertainty of the bias component, according to a Nordtest guideline. The evaluation is exemplified for the determination of creatinine in serum for a conglomerate of laboratories, expressed both in absolute units (μmol/L) and relative (%). An expanded measurement uncertainty of 12 μmol/L was estimated for concentrations of creatinine below 120 μmol/L, and of 10% for concentrations above 120 μmol/L. The diagnostic uncertainty encompasses both measurement uncertainty and biological variation, and can be estimated for a single value and for a difference. The diagnostic uncertainty for the difference between two samples from the same patient was determined to be 14 μmol/L for concentrations of creatinine below 100 μmol/L and 14% for concentrations above 100 μmol/L.
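
    The two-component combination described in this record can be sketched as follows, per the Nordtest TR 537 approach (u_c = sqrt(u_Rw² + u_bias²), U = 2·u_c at roughly 95% coverage); all QC numbers below are invented for illustration:

    ```python
    import math

    # Within-laboratory reproducibility from internal QC (invented example values)
    u_rw = 4.0                        # standard deviation of routine QC results, umol/L

    # Bias component from external QC / proficiency testing rounds
    biases = [2.0, -1.0, 3.0]         # lab-minus-reference differences, umol/L
    u_cref = 1.5                      # standard uncertainty of the reference values

    rms_bias = math.sqrt(sum(b * b for b in biases) / len(biases))
    u_bias = math.sqrt(rms_bias**2 + u_cref**2)

    u_c = math.sqrt(u_rw**2 + u_bias**2)   # combined standard uncertainty
    U = 2.0 * u_c                          # expanded uncertainty, coverage factor k = 2
    ```

    With these invented inputs the expanded uncertainty comes out near 9.6 μmol/L; a real evaluation would pull u_rw and the biases from months of internal and external QC data.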

  13. Uncertainty of power curve measurement with a two-beam nacelle-mounted lidar

    DEFF Research Database (Denmark)

    Wagner, Rozenn; Courtney, Michael Stephen; Friis Pedersen, Troels

    2015-01-01

    Nacelle lidars are attractive for offshore measurements since they can provide measurements of the free wind speed in front of the turbine rotor without erecting a met mast, which significantly reduces the cost of the measurements. Nacelle-mounted pulsed lidars with two lines of sight (LOS) have already been demonstrated to be suitable for use in power performance measurements. To be considered as a professional tool, however, power curve measurements performed using these instruments require traceable calibrated measurements and the quantification of the wind speed measurement uncertainty. Here we present and demonstrate a procedure fulfilling these needs. A nacelle lidar went through a comprehensive calibration procedure. This calibration took place in two stages. First, with the lidar on the ground, the tilt and roll readings of the inclinometers in the nacelle lidar were calibrated

  14. Source attribution of climatically important aerosol properties measured at Paposo (Chile during VOCALS

    Directory of Open Access Journals (Sweden)

    D. Chand

    2010-11-01

    Full Text Available Measurements of submicron aerosol composition, light scattering, and size distribution were made from 17 October to 15 November 2008 at the elevated Paposo site (25° 0.4' S, 70° 27.01' W, 690 m a.s.l.) on the Chilean coast as part of the VOCALS* Regional Experiment (REx). Based on the chemical composition measurements, a receptor modeling analysis using Positive Matrix Factorization (PMF) was carried out, yielding four broad source categories of the aerosol mass, light scattering coefficient, and a proxy for cloud condensation nucleus (CCN) concentration at 0.4% supersaturation derived from the size distribution measurements assuming an observed soluble mass fraction of 0.53. The sources resolved were biomass burning, marine, an urban-biofuels mix, and a somewhat ambiguous mix of smelter emissions and mineral dust. The urban-biofuels mix is the dominant aerosol mass component (52%), followed by biomass burning (25%), smelter/soil dust (12%), and marine (9%) sources. The average (mean ± std) submicron aerosol mass concentration, aerosol light scattering coefficient, and proxy CCN concentration were 8.77 ± 5.40 μg m−3, 21.9 ± 11.0 Mm−1, and 548 ± 210 cm−3, respectively. Sulfate is the dominant identified submicron species, constituting roughly 40% of the dry mass (3.64 ± 2.30 μg m−3), although the identified soluble species constitute only 53% of the mass. Much of the unidentified mass is likely organic in nature. The relative importance of each aerosol source category differs depending upon whether mass, light scattering, or CCN concentration is being considered, indicating that the mean sizes of aerosols associated with each source are different. Marine aerosols do not appear to contribute more than 10% to either mass, light scattering, or CCN concentration at this site. Back trajectory cluster analysis proved consistent with the PMF source attribution.

    *VOCALS: VAMOS** Ocean

  15. Source attribution of climatically important aerosol properties measured at Paposo (Chile during VOCALS

    Directory of Open Access Journals (Sweden)

    D. Chand

    2010-07-01

    Full Text Available Measurements of submicron aerosol composition, light scattering, and size distribution were made from 17 October to 15 November 2008 at the elevated Paposo site (25° 0.4' S, 70° 27.01' W, 690 m a.s.l.) on the Chilean coast as part of the VOCALS1 Regional Experiment (REx). Based on the chemical composition measurements, a receptor modeling analysis using Positive Matrix Factorization (PMF) was carried out, yielding four broad source categories of the aerosol mass, light scattering coefficient, and a proxy for cloud condensation nucleus (CCN) concentration at 0.4% supersaturation derived from the size distribution measurements assuming an observed soluble mass fraction of 0.53. The sources resolved were biomass burning, marine, an urban-biofuels mix, and a somewhat ambiguous mix of smelter emissions and mineral dust. The urban-biofuels mix is the dominant aerosol mass component (52%), followed by biomass burning (25%), smelter/soil dust (12%), and marine (9%) sources. The average (mean ± std) submicron aerosol mass concentration, aerosol light scattering coefficient, and proxy CCN concentration were 8.77 ± 5.40 μg m−3, 21.9 ± 11.0 Mm−1, and 548 ± 210 cm−3, respectively. Sulfate is the dominant identified submicron species, constituting roughly 40% of the dry mass (3.64 ± 2.30 μg m−3), although the identified soluble species constitute only 53% of the mass. Much of the unidentified mass is likely organic in nature. The relative importance of each aerosol source category differs depending upon whether mass, light scattering, or CCN concentration is being considered, indicating that the mean sizes of aerosols associated with each source are different. Marine aerosols do not appear to contribute more than 10% to either mass, light scattering, or CCN concentration at this site. Back trajectory cluster analysis proved consistent with the PMF source attribution.



  16. Establishment of the measurement uncertainty of 11-nor-Δ9-tetrahydrocannabinol-9-carboxylic acid in hair.

    Science.gov (United States)

    Han, Eunyoung; Yang, Wonkyung; Lee, Sooyeun; Kim, Eunmi; In, Sangwhan; Choi, Hwakyung; Lee, Sangki; Chung, Heesun; Song, Joon Myong

    2011-03-20

    The quantitative analysis of 11-nor-Δ9-tetrahydrocannabinol-9-carboxylic acid (THCCOOH) in hair requires a sensitive method capable of detecting low-pg levels. Before applying the method to real hair samples, the method was validated; in this study, we examined the uncertainty obtained around the cut-off level of THCCOOH in hair. We calculated the measurement uncertainty (MU) of THCCOOH in hair as follows: specification of the measurand, identification of parameters using "cause and effect" diagrams, and quantification of the uncertainty contributions from three factors: the uncertainty of weighing the hair sample, the uncertainty from calibrators and the calibration curve, and the uncertainty of the method precision. Finally, we calculated the degrees of freedom and the expanded uncertainty (EU). The concentration of THCCOOH in the hair sample with its EU was (0.60 ± 0.1) × 10⁻⁴ ng/mg. The relative uncertainty percent for the measurand 0.60 × 10⁻⁴ ng was 9.13%. In this study, we also selected different concentrations of THCCOOH in real hair samples and then calculated the EU, the relative standard uncertainty (RSU) of the concentration of THCCOOH in the test sample [u_r(c0)], the relative uncertainty percent, and the effective degrees of freedom (ν_eff). As the concentrations of THCCOOH approached the cut-off level, u_r(c0) and the relative uncertainty percent increased, while the absolute EU and ν_eff decreased.
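
The uncertainty budget described above can be sketched directly: the three relative contributions are combined in quadrature and multiplied by a coverage factor k (about 2 for roughly 95 % confidence) to give the expanded uncertainty. All numeric values below are hypothetical placeholders, not those of the study:

```python
import math

c = 0.60e-4                 # measured THCCOOH concentration, ng/mg
u_weighing = 0.010          # relative u from weighing the hair sample
u_calibration = 0.080       # relative u from calibrators / calibration curve
u_precision = 0.040         # relative u from method precision

# Combined relative standard uncertainty (quadrature sum per the GUM)
u_rel = math.sqrt(u_weighing**2 + u_calibration**2 + u_precision**2)
u_c = u_rel * c             # combined standard uncertainty, ng/mg
U = 2 * u_c                 # expanded uncertainty with coverage factor k = 2

print(f"u_rel = {100 * u_rel:.2f} %")
print(f"c = ({c:.2e} ± {U:.1e}) ng/mg")
```

Near the cut-off, the relative contributions (especially method precision) grow, which is exactly why the paper reports the relative uncertainty increasing as concentrations approach the cut-off level.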

  17. Assessment of adaptation measures to high-mountain risks in Switzerland under climate uncertainties

    Science.gov (United States)

    Muccione, Veruska; Lontzek, Thomas; Huggel, Christian; Ott, Philipp; Salzmann, Nadine

    2015-04-01

    The economic evaluation of different adaptation options is important to support policy-makers who need to set priorities in the decision-making process. However, the decision-making process faces considerable uncertainties regarding current and projected climate impacts. First, physical climate and related impact systems are highly complex and not fully understood. Second, the further we look into the future, the more important the emission pathways become, with effects on the frequency and severity of climate impacts. Decisions on adaptation measures taken today and in the future must adequately consider the uncertainties originating from these different sources. Decisions are not taken in a vacuum but always in the context of specific social, economic, institutional and political conditions. Decision-finding processes strongly depend on the socio-political system and usually have evolved over some time. Finding and taking decisions in the respective socio-political and economic context multiplies the uncertainty challenge. Our presumption is that a sound assessment of the different adaptation options in Switzerland under uncertainty necessitates formulating and solving a dynamic, stochastic optimization problem. Economic optimization models in the field of climate change are not new. Typically, such models are applied to global-scale studies but rarely to local-scale problems. In this analysis, we considered the case of the Guttannen-Grimsel Valley, situated in the Swiss Bernese Alps. The alpine community has been affected by high-magnitude, high-frequency debris flows that started in 2009 and were historically unprecedented. They were related to permafrost thaw in the rock slopes of Ritzlihorn; repeated rockfall events accumulated on the debris fan and formed a sediment source for debris flows that were transported downvalley. An important transit road, a trans-European gas pipeline and settlements were severely affected and partly

  18. Accounting for uncertainty in volumes of seabed change measured with repeat multibeam sonar surveys

    Science.gov (United States)

    Schimel, Alexandre C. G.; Ierodiaconou, Daniel; Hulands, Lachlan; Kennedy, David M.

    2015-12-01

    Seafloors of unconsolidated sediment are highly dynamic features, eroding or accumulating under the action of tides, waves and currents. Assessing which areas of the seafloor experienced change, and measuring the corresponding volumes involved, provides insight into these important active sedimentation processes. Computing the difference between Digital Elevation Models (DEMs) obtained from repeat Multibeam Echosounder (MBES) surveys has become a common technique to identify these areas, but the uncertainty in these datasets considerably affects the estimation of the volumes displaced. The two main techniques used to take uncertainty into account in volume estimations are the limitation of calculations to areas experiencing a change in depth beyond a chosen threshold, and the computation of volumetric confidence intervals. However, these techniques are still in their infancy and, as a result, are often crude, seldom used or poorly understood. In this article, we explored a number of possible methodological advances to address this issue, including: (1) using the uncertainty information provided by the MBES data-processing algorithm CUBE, (2) adapting fluvial geomorphology techniques for volume calculations using spatially variable thresholds, and (3) volumetric histograms. The nearshore seabed off Warrnambool harbour, located on the highly energetic southwest Victorian coast, Australia, was used as a test site. Four consecutive MBES surveys were carried out over a four-month period. The difference between consecutive DEMs revealed an area near the beach experiencing large sediment transfers (mostly erosion) and an area of reef experiencing increasing deposition from the advance of a nearby sediment sheet. The volumes of sediment displaced in these two areas were calculated using the techniques described above, both traditionally and using the suggested improvements. We compared the results and discussed the applicability of the new methodological improvements.

  19. Evaluating uncertainty estimates in hydrologic models: borrowing measures from the forecast verification community

    Directory of Open Access Journals (Sweden)

    K. J. Franz

    2011-03-01

    Full Text Available The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply the methods to evaluate and compare model ensembles produced from different parameter uncertainty estimation methods. Generalized Likelihood Uncertainty Estimation (GLUE), a modified version of GLUE, and the Shuffle Complex Evolution Metropolis (SCEM) are used to generate model ensembles for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of parameter uncertainty, one that is commensurate with the dimension of the ensembles themselves. Application of these methods requires no information beyond what is already available as part of traditional model validation methodology and considers the entire ensemble or uncertainty range in the approach.
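
One widely used probabilistic verification measure of the kind discussed above is the continuous ranked probability score (CRPS), which scores the whole ensemble rather than its mean. A sample-based estimator is short; the streamflow numbers below are illustrative, not the study's data:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS: E|X - y| - 0.5 E|X - X'| (lower is better)."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(1)
obs = 100.0                                  # observed streamflow, m3/s
sharp = rng.normal(100, 5, size=200)         # well-centred ensemble
biased = rng.normal(120, 5, size=200)        # biased ensemble
print(crps_ensemble(sharp, obs), crps_ensemble(biased, obs))
```

For a single deterministic value the CRPS reduces to absolute error, which is why it is a natural generalization of the deterministic scores the paper says the community still relies on.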

  20. Understanding uncertainty

    CERN Document Server

    Lindley, Dennis V

    2013-01-01

    Praise for the First Edition: "...a reference for everyone who is interested in knowing and handling uncertainty." (Journal of Applied Statistics) The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.

  1. A Method for Dimensional and Surface Optical Measurements Uncertainty Assessment on Micro Structured Surfaces Manufactured by Jet-ECM

    DEFF Research Database (Denmark)

    Quagliotti, Danilo; Tosello, Guido; Islam, Aminul;

    2015-01-01

    Surface texture and step height measurements of electrochemically machined cavities have been compared among optical and tactile instruments. A procedure is introduced for correcting possible divergences among the instruments and, ultimately, for evaluating the measurement uncertainty according to...

  3. Field measurement of dermal soil loading attributable to various activities: implications for exposure assessment.

    Science.gov (United States)

    Kissel, J C; Richter, K Y; Fenske, R A

    1996-02-01

    Estimates of soil adherence to skin are required for assessment of dermal exposures to contaminants in soils. Previously available estimates depend heavily on indirect measurements and/or artificial activities and reflect sampling of hands only. Results are presented here from direct measurement of soil loading on skin surfaces of volunteers before and after normal occupational and recreational activities that might reasonably be expected to lead to soil contact. Skin surfaces assayed included hands, forearms, lower legs, faces and/or feet. Observed hand loadings vary over five orders of magnitude (roughly from 10⁻³ to 10² mg/cm²) and are dependent upon type of activity. Hand loadings within the current default range of 0.2 to 1.0 mg/cm² were produced by activities providing opportunity for relatively vigorous soil contact (rugby, farming). Loadings less than 0.2 mg/cm² were found on hands following activities presenting less opportunity for direct soil contact (soccer, professional grounds maintenance) and on other body parts under many conditions. The default range does not, however, represent a worst case. Children playing in mud on the shore of a lake generated geometric mean loadings well in excess of 1 mg/cm² on hands, arms, legs, and feet. Post-activity average loadings on hands were typically higher than average loadings on other body parts resulting from the same activity. Hand data from limited activities cannot, however, be used to conservatively predict loadings that might occur on other body surfaces without regard to activity, since non-hand loadings attributable to higher-contact activities exceeded hand loadings resulting from lower-contact activities. Differences between pre- and post-activity loadings also demonstrate that dermal contact with soil is episodic. Typical background (pre-activity) geometric mean loadings appear to be on the order of 10⁻² mg/cm² or less. Because exposures are activity dependent, quantification of dermal exposure
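
Because loadings of this kind span orders of magnitude, they are summarized by geometric mean (GM) and geometric standard deviation (GSD) rather than arithmetic statistics, as in the abstract above. A small sketch with hypothetical loadings:

```python
import numpy as np

# Hypothetical skin-loading measurements, mg/cm^2 (roughly lognormal)
loadings = np.array([0.004, 0.02, 0.11, 0.35, 0.9, 2.4])

logs = np.log(loadings)
gm = np.exp(logs.mean())          # geometric mean
gsd = np.exp(logs.std(ddof=1))    # geometric standard deviation (unitless)
print(f"GM = {gm:.3f} mg/cm^2, GSD = {gsd:.2f}")
```

A GM together with a GSD conveys the multiplicative spread (e.g. GM×GSD and GM/GSD bracket about two-thirds of lognormal data), which an arithmetic mean and SD would misrepresent for such skewed data.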

  4. Experimental Test of Heisenberg's Measurement Uncertainty Relation Based on Statistical Distances

    Science.gov (United States)

    Ma, Wenchao; Ma, Zhihao; Wang, Hengyan; Chen, Zhihua; Liu, Ying; Kong, Fei; Li, Zhaokai; Peng, Xinhua; Shi, Mingjun; Shi, Fazhan; Fei, Shao-Ming; Du, Jiangfeng

    2016-04-01

    Incompatible observables can be approximated by compatible observables in joint measurement or measured sequentially, with constrained accuracy as implied by Heisenberg's original formulation of the uncertainty principle. Recently, Busch, Lahti, and Werner proposed inaccuracy trade-off relations based on statistical distances between probability distributions of measurement outcomes [P. Busch et al., Phys. Rev. Lett. 111, 160405 (2013); P. Busch et al., Phys. Rev. A 89, 012129 (2014)]. Here we reformulate their theoretical framework, derive an improved relation for qubit measurement, and perform an experimental test on a spin system. The relation reveals that the worst-case inaccuracy is tightly bounded from below by the incompatibility of target observables, and is verified by the experiment employing joint measurement in which two compatible observables designed to approximate two incompatible observables on one qubit are measured simultaneously.

  5. Monte Carlo method for calculating oxygen abundances and their uncertainties from strong-line flux measurements

    Science.gov (United States)

    Bianco, F. B.; Modjaz, M.; Oh, S. M.; Fierroz, D.; Liu, Y. Q.; Kewley, L.; Graur, O.

    2016-07-01

    We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity calibrators, based on the original IDL code of Kewley and Dopita (2002) with updates from Kewley and Ellison (2008), and expanded to include more recently developed calibrators. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios (referred to as indicators) in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo sampling, better characterizes the statistical oxygen abundance confidence region, including the effect due to the propagation of observational uncertainties. These uncertainties are likely to dominate the error budget in the case of distant galaxies, hosts of cosmic explosions. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 15 metallicity calibrators simultaneously, as well as for E(B−V), and estimates their median values and their 68% confidence regions. We provide the option of outputting the full Monte Carlo distributions and their kernel density estimates. We test our code on emission line measurements from a sample of nearby supernova host galaxies. The code is available at github.com/nyusngroup/pyMCZ.
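
The Monte Carlo propagation that pyMCZ performs can be illustrated for a single indicator. The sketch below uses the Pettini & Pagel (2004) N2 calibration, 12 + log(O/H) = 8.90 + 0.57 N2, with made-up fluxes and uncertainties; it is not pyMCZ itself, just the propagation idea:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical line fluxes and 1-sigma uncertainties (arbitrary units)
f_nii, sf_nii = 1.2, 0.12     # [N II] 6584
f_ha, sf_ha = 8.0, 0.40       # H-alpha

n = 100_000
nii = rng.normal(f_nii, sf_nii, n)
ha = rng.normal(f_ha, sf_ha, n)
ok = (nii > 0) & (ha > 0)               # guard against unphysical draws
n2 = np.log10(nii[ok] / ha[ok])

# Pettini & Pagel (2004) N2 strong-line calibration
oh = 8.90 + 0.57 * n2

lo, med, hi = np.percentile(oh, [16, 50, 84])
print(f"12+log(O/H) = {med:.3f} (+{hi - med:.3f}/-{med - lo:.3f})")
```

The resulting distribution need not be Gaussian even for Gaussian flux errors (the ratio and the log are nonlinear), which is why reporting the full Monte Carlo distribution or its percentiles is preferable to simple error propagation.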

  6. Invited Review Article: Error and uncertainty in Raman thermal conductivity measurements

    Science.gov (United States)

    Beechem, Thomas; Yates, Luke; Graham, Samuel

    2015-04-01

    Error and uncertainty in Raman thermal conductivity measurements are investigated via finite element based numerical simulation of two geometries often employed—Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter—termed the Raman stress factor—is derived to identify when stress effects will induce large levels of error. Taken together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  8. Quantification of uncertainty of experimental measurement in leaching test on cement-based materials.

    Science.gov (United States)

    Coutand, M; Cyr, M; Clastres, P

    2011-10-01

    When mineral wastes are reused in construction materials, a current practice is to evaluate their environmental impact using standard leaching tests. However, due to the uncertainty of the measurement, it is usually quite difficult to compare the pollutant potential with that of other materials or with threshold limits. The aim of this paper is to give a quantitative evaluation of the uncertainty of leachate concentrations of cement-based materials as a function of the number of tests performed. The relative standard deviations and relative confidence intervals are determined using experimental data in order to give a global evaluation of the uncertainty of leachate concentrations (determination of the total relative standard deviation). Various combinations were examined in order to identify the origin of the large dispersion of the results (determination of the relative standard deviations linked to the analytical measurement and to the leaching procedure); a generalisation was suggested and the results were compared to the literature. A practical example is given concerning the introduction of a residue (meat and bone meal bottom ash, MBM-BA) in mortar; leaching tests were carried out on various samples with and without MBM-BA. In conclusion, large dispersions were observed, mainly due to the heterogeneity of the materials. Heightened attention is therefore needed when analysing leaching results on cement-based materials, and further tests (e.g. ecotoxicology) should be performed to evaluate the environmental effect of these materials.
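
The dependence of the confidence interval on the number of replicate tests can be sketched directly: the t-based 95 % half-width on the mean shrinks roughly as 1/√n. The 25 % single-test relative standard deviation below is a hypothetical value, not one from the paper:

```python
import math

# Relative 95% CI half-width on a mean leachate concentration vs. the
# number n of replicate leaching tests (single-test RSD assumed 25%).
rsd = 0.25
t975 = {2: 12.706, 3: 4.303, 4: 3.182, 6: 2.571, 10: 2.262}  # t(0.975, n-1)

for n, t in t975.items():
    half_width = t * rsd / math.sqrt(n)
    print(f"n={n:2d}: +/-{100 * half_width:.1f} % of the mean")
```

With only two replicates the interval is enormous (the t factor for 1 degree of freedom dominates), which illustrates why conclusions drawn from very few leaching tests on heterogeneous materials are so fragile.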

  9. Robust framework for PET image reconstruction incorporating system and measurement uncertainties.

    Directory of Open Access Journals (Sweden)

    Huafeng Liu

    Full Text Available In Positron Emission Tomography (PET), an optimal estimate of the radioactivity concentration is obtained from the measured emission data under certain criteria. So far, all the well-known statistical reconstruction algorithms require an exactly known system probability matrix a priori, and the quality of such a system model largely determines the quality of the reconstructed images. In this paper, we propose an algorithm for PET image reconstruction for the real-world case where the PET system model is subject to uncertainties. The method casts PET reconstruction as a regularization problem, and the image estimation is achieved by means of an uncertainty-weighted least squares framework. The performance of our work is evaluated with the simulated Shepp-Logan phantom and real phantom data, which demonstrates significant improvements in image quality over least squares reconstruction efforts.
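
An uncertainty-weighted least squares estimate of the kind this framework builds on can be sketched in a few lines. The example uses a generic linear model, not a PET system matrix; all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic linear model y = A x + noise, with per-measurement variances
A = rng.random((50, 5))
x_true = np.array([1.0, 0.5, 2.0, 0.0, 1.5])
sigma = 0.05 + 0.2 * rng.random(50)           # per-measurement std dev
y = A @ x_true + rng.normal(0, sigma)

# Weighted least squares: weight each equation by its inverse variance,
# solving the normal equations (A^T W A) x = A^T W y.
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print(np.round(x_hat, 2))
```

Down-weighting noisy measurements is the same principle the paper applies, except that there the uncertainty also covers the system matrix itself rather than the data alone.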

  10. The GUM revision: the Bayesian view toward the expression of measurement uncertainty

    Science.gov (United States)

    Lira, I.

    2016-03-01

    The ‘Guide to the Expression of Uncertainty in Measurement’ (GUM) has been in use for more than 20 years, serving its purposes worldwide at all levels of metrology, from scientific to industrial and commercial applications. However, the GUM presents some inconsistencies, both internally and with respect to its two later Supplements. For this reason, the Joint Committee for Guides in Metrology, which is responsible for these documents, has decided that a major revision of the GUM is needed. This will be done by following the principles of Bayesian statistics, a concise summary of which is presented in this article. Those principles should be useful in physics and engineering laboratory courses that teach the fundamentals of data analysis and measurement uncertainty evaluation.
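
A minimal example of the Bayesian viewpoint summarized above: with a normal prior for the measurand and normally distributed readings of known standard uncertainty, the posterior is available in closed form (precisions add). All numbers are invented for illustration:

```python
import numpy as np

# Prior knowledge about the measurand theta: N(mu0, tau0^2)
mu0, tau0 = 10.0, 2.0
# Readings y_i ~ N(theta, sigma^2) with known standard uncertainty sigma
sigma = 0.5
y = np.array([10.42, 10.38, 10.51, 10.44])

# Conjugate normal-normal update: posterior precision = sum of precisions
n = y.size
prec_post = 1 / tau0**2 + n / sigma**2
mu_post = (mu0 / tau0**2 + y.sum() / sigma**2) / prec_post
sd_post = prec_post ** -0.5
print(f"posterior: {mu_post:.3f} +/- {sd_post:.3f}")
```

The posterior mean sits between the prior mean and the sample mean, weighted by precision, and the posterior standard deviation plays the role of the combined standard uncertainty in the Bayesian reading of the GUM.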

  11. Uncertainty of diagnostic features measured by laser vibrometry: The case of optically non-cooperative surfaces

    Science.gov (United States)

    Agostinelli, G.; Paone, N.

    2012-12-01

    This paper discusses the uncertainty in the measurement of characteristic features by laser Doppler vibrometry useful for industrial diagnostics when measuring on polished, highly reflective, low-diffusivity surfaces, such as the enamelled metal sheet of the cabinet of electrical household appliances. This case is relevant to on-line quality control applications, where it is not possible to adopt any surface treatment to improve optical scattering properties. The paper illustrates in particular the effect of drop-out noise on the measured vibration signal and develops a joint analysis of drop-out noise due to poor optical properties and its effect on the diagnostic process, presented in statistical terms. A non-dimensional quantity is introduced to describe the amplitude of the Doppler signal, and the presence of drop-out noise is shown to be correlated with its amplitude. Starting from the consideration that drop-out noise is impulsive, with a pseudo-random occurrence, this paper presents an experimental assessment of uncertainty in the measurement of some spectral features used for the diagnosis of electrical appliances on the production line. The effect of drop-out leads to an increase in scatter and to a systematic shift in the distribution of the features examined; this effect is relatively larger for features with low amplitude. Monte Carlo simulation of measurement uncertainty propagation confirms the same trend and allows statistical distributions to be obtained for the features, enabling some conclusions to be drawn regarding diagnostic errors. This study shows that, in the presence of pseudo-random drop-out noise, a diagnosis based on spectral features with low amplitude has poor reliability and false positives are highly probable. An analysis of this occurrence is made for production cases exhibiting features with different statistical distributions, and possible actions to limit this problem are highlighted.

  12. Integration of rain gauge measurement errors with the overall rainfall uncertainty estimation using kriging methods

    Science.gov (United States)

    Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei

    2016-04-01

    In many research studies rain gauges are used as reference point measurements for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data are correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best-practice knowledge can be limiting factors in correct rain gauge network management. In these cases, the accuracy of rain gauges can drop drastically and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors into the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through (1) block kriging on a single rain gauge, (2) ordinary kriging on a network of different rain gauges, and (3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km², is covered by high-quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower-quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all cases by increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher- and lower-quality rain gauges. For the kriging with
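
The core device here, inflating the nugget (equivalently, the diagonal of the kriging covariance matrix) by each gauge's error variance, can be sketched with a toy ordinary-kriging system. Coordinates, rainfall values, error variances and variogram parameters below are all hypothetical:

```python
import numpy as np

# Five gauges (coordinates in km), rainfall in mm, per-gauge error variance
xy = np.array([[0, 0], [5, 1], [2, 6], [8, 4], [4, 9]], dtype=float)
z = np.array([12.0, 10.5, 14.2, 9.8, 15.1])
err_var = np.array([0.1, 0.1, 1.5, 1.5, 1.5])   # noisy gauges get more nugget

def gamma(h, nugget=0.2, sill=4.0, rng_=10.0):
    """Exponential semivariogram."""
    return nugget + sill * (1 - np.exp(-h / rng_))

n = len(z)
d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
# Covariance form C(h) = (nugget + sill) - gamma(h); the measurement-error
# variance is added on the diagonal, i.e. the "increased nugget".
C = (0.2 + 4.0) - gamma(d)
C[np.diag_indices(n)] += err_var

target = np.array([3.0, 3.0])
c0 = (0.2 + 4.0) - gamma(np.linalg.norm(xy - target, axis=1))

# Ordinary kriging system with a Lagrange multiplier for unbiasedness
A = np.zeros((n + 1, n + 1))
A[:n, :n] = C
A[:n, n] = A[n, :n] = 1.0
b = np.append(c0, 1.0)
w = np.linalg.solve(A, b)[:n]
print(f"estimate at {target}: {w @ z:.2f} mm (weights sum {w.sum():.3f})")
```

Gauges with larger error variance receive smaller kriging weights, so poorly maintained networks contribute less to the interpolated field, exactly the behaviour the paper exploits.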

  13. Comparison of ISO-GUM and Monte Carlo methods for the evaluation of measurement uncertainty: application to direct cadmium measurement in water by GFAAS.

    Science.gov (United States)

    Theodorou, Dimitrios; Meligotsidou, Loukia; Karavoltsos, Sotirios; Burnetas, Apostolos; Dassenakis, Manos; Scoullos, Michael

    2011-02-15

    The propagation stage of uncertainty evaluation, known as the propagation of distributions, is in most cases approached by the GUM (Guide to the Expression of Uncertainty in Measurement) uncertainty framework, which is based on the law of propagation of uncertainty assigned to various input quantities and the characterization of the measurand (output quantity) by a Gaussian or a t-distribution. Recently, a Supplement to the ISO-GUM was prepared by the JCGM (Joint Committee for Guides in Metrology). This Guide gives guidance on propagating probability distributions assigned to various input quantities through a numerical simulation (Monte Carlo Method) and determining a probability distribution for the measurand. In the present work the two approaches were used to estimate the uncertainty of the direct determination of cadmium in water by graphite furnace atomic absorption spectrometry (GFAAS). The expanded uncertainty results (at the 95% confidence level) obtained with the GUM Uncertainty Framework and the Monte Carlo Method at the concentration level of 3.01 μg/L were ±0.20 μg/L and ±0.18 μg/L, respectively. Thus, the GUM Uncertainty Framework slightly overestimates the overall uncertainty by 10%. Even after taking into account additional sources of uncertainty that the GUM Uncertainty Framework considers negligible, the Monte Carlo Method again gives the same uncertainty result (±0.18 μg/L). The main source of this difference is the approximation used by the GUM Uncertainty Framework in estimating the standard uncertainty of the calibration curve produced by least squares regression. Although the GUM Uncertainty Framework proves to be adequate in this particular case, in general the Monte Carlo Method has features that avoid the assumptions and limitations of the GUM Uncertainty Framework.
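
The comparison can be reproduced in miniature for a generic measurand R = V/I (values hypothetical, unrelated to the GFAAS application): first-order GUM propagation with coverage factor k = 2 versus a Monte Carlo interval from GUM Supplement 1:

```python
import numpy as np

# Hypothetical inputs with standard uncertainties
V, uV = 5.00, 0.05       # volts
I, uI = 0.100, 0.003     # amperes

# GUM framework: first-order law of propagation for R = V / I
R = V / I
uR = R * np.hypot(uV / V, uI / I)   # relative uncertainties add in quadrature
U_gum = 2 * uR                      # expanded uncertainty, k = 2

# Monte Carlo method: propagate the full input distributions
rng = np.random.default_rng(4)
Rs = rng.normal(V, uV, 1_000_000) / rng.normal(I, uI, 1_000_000)
lo, hi = np.percentile(Rs, [2.5, 97.5])
print(f"GUM: R = {R:.1f} +/- {U_gum:.1f}")
print(f"MC : 95% interval [{lo:.1f}, {hi:.1f}] (half-width {(hi - lo) / 2:.1f})")
```

For mildly nonlinear measurands like a ratio the two agree closely; the Monte Carlo interval also captures the slight skewness that the linearized GUM treatment cannot represent.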

  14. Evaluation of the uncertainties caused by the forward scattering in turbidity measurement of the coagulation rate.

    Science.gov (United States)

    Xu, Shenghua; Sun, Zhiwei

    2010-05-18

    The forward scattering light (FSL) received by the detector can cause uncertainties in turbidity measurements of the coagulation rate of colloidal dispersions, and this effect becomes more significant for large particles. In this study, the effect of FSL is investigated on the basis of calculations using the T-matrix method, an exact technique for the computation of nonspherical scattering. The theoretical formulation and relevant numerical implementation for predicting the contribution of FSL to the turbidity measurement are presented. To quantitatively estimate the degree of the influence of FSL, an influence ratio comparing the contribution of FSL to that of the pure transmitted light in the turbidity measurement is introduced. The influence ratios evaluated under various parametric conditions, together with the relevant analyses, provide a guideline for properly choosing the particle size and measuring wavelength so as to minimize the effect of FSL in turbidity measurements of the coagulation rate.

  15. Final report on uncertainties in the detection, measurement, and analysis of selected features pertinent to deep geologic repositories

    Energy Technology Data Exchange (ETDEWEB)

    1978-07-10

    Uncertainties with regard to many facets of repository site characterization have not yet been quantified. This report summarizes the state of knowledge of uncertainties in the measurement of porosity, hydraulic conductivity, and hydraulic gradient; uncertainties associated with various geophysical field techniques; and uncertainties associated with the effects of exploration and exploitation activities in bedded salt basins. The potential for seepage through a depository in bedded salt or shale is reviewed and, based upon the available data, generic values for the hydraulic conductivity and porosity of bedded salt and shale are proposed.

  16. Measuring Student Graduateness: Reliability and Construct Validity of the Graduate Skills and Attributes Scale

    Science.gov (United States)

    Coetzee, Melinde

    2014-01-01

    This study reports the development and validation of the Graduate Skills and Attributes Scale which was initially administered to a random sample of 272 third-year-level and postgraduate-level, distance-learning higher education students. The data were analysed using exploratory factor analysis. In a second study, the scale was administered to a…

  17. MM98.52 - An industrial comparison of coordinate measuring machines in Scandinavia with focus on uncertainty statements

    DEFF Research Database (Denmark)

    Hansen, Hans Nørgaard; Chiffre, Leonardo De

    1999-01-01

    This paper describes an industrial comparison of coordinate measuring machines (CMMs) carried out in the Scandinavian countries from October 1994 to May 1996. Fifty-nine industrial companies with a total of 62 CMMs participated in the project and measured a comparison package with five items chosen… results for the majority of the participants, whereas increasing the level of difficulty from simple length measurements to more complex geometrical quantities gave severe problems for some of the companies. This occurred even though the participants measured according to prescribed procedures. … An important part of the intercomparison was to test the ability of the participants to determine measurement uncertainties. One of the uncertainties was based upon a "best guess"; nevertheless, many participants did not even report this uncertainty. Uncertainty budgeting was not used for measurements other…

  18. Influence of measurement uncertainty on classification of thermal environment in buildings according to European Standard EN 15251

    DEFF Research Database (Denmark)

    Kolarik, Jakub; Olesen, Bjarne W.

    2015-01-01

    European Standard EN 15251 in its current version does not provide any guidance on how to handle uncertainty of long-term measurements of indoor environmental parameters used for classification of buildings. The objective of the study was to analyse the uncertainty of field measurements of operative temperature and evaluate its effect on categorization of thermal environment according to EN 15251. A data-set of field measurements of operative temperature in four office buildings situated in Denmark, Italy and Spain was used. Data for each building included approx. one year of continuous measurements of operative temperature at two measuring points (south/south-west and north/north-east orientation). Results of the present study suggest that measurement uncertainty needs to be considered during assessment of thermal environment in existing buildings. When expanded standard uncertainty was taken…

  19. Research on the attribution evaluating methods of dynamic effects of various parameter uncertainties on the in-structure floor response spectra of nuclear power plant

    Science.gov (United States)

    Li, Jianbo; Lin, Gao; Liu, Jun; Li, Zhiyuan

    2017-01-01

    Consideration of the dynamic effects of site and structural parameter uncertainty is required by the standards for nuclear power plants (NPPs) in most countries. The anti-seismic standards provide two basic methods for analyzing parameter uncertainty. The first deals directly and manually with the floor response spectra (FRS) values calculated by deterministic approaches; the second performs probability statistical analysis of the FRS results on the basis of the Monte Carlo method. The two methods can only reflect the overall effects of the uncertain parameters, and the results cannot be screened for a certain parameter's influence and contribution. In this study, based on dynamic analyses of the floor response spectra of NPPs, a comprehensive index for assessing the impact of various uncertain parameters is presented and recommended, comprising the correlation coefficient, the regression slope coefficient and the Tornado swing. To compensate for the lack of guidance in the NPP seismic standards, the proposed method can effectively be used to evaluate the contributions of various parameters from the aspects of sensitivity, acuity and statistical swing correlations. Finally, examples are provided to verify the set of indicators from systematic and intuitive perspectives, such as the uncertainty of the impact of the structure parameters and the contribution to the FRS of NPPs. The index is sensitive to different types of parameters, which provides a new technique for evaluating the anti-seismic parameters required for NPPs.
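
    Two of the three indices named above, the correlation coefficient and the regression slope, can be computed directly from Monte Carlo samples of a parameter and the corresponding response values. The sketch below is a generic illustration with toy numbers, not the paper's implementation; `sensitivity_indices` and the data are hypothetical.

```python
from statistics import mean, stdev

def sensitivity_indices(samples, outputs):
    """Correlation coefficient and regression slope of one uncertain
    parameter against a response quantity (e.g. an FRS ordinate)
    across Monte Carlo runs. (The third index in the abstract, the
    Tornado swing, requires separate bounding-value runs.)"""
    mx, my = mean(samples), mean(outputs)
    cov = sum((x - mx) * (y - my)
              for x, y in zip(samples, outputs)) / (len(samples) - 1)
    slope = cov / stdev(samples) ** 2            # regression slope
    corr = cov / (stdev(samples) * stdev(outputs))  # correlation
    return corr, slope

# Toy samples: a soil stiffness multiplier vs. a spectral ordinate
x = [0.9, 1.0, 1.1, 1.2, 1.3]
y = [2.1, 2.0, 2.4, 2.3, 2.6]
corr, slope = sensitivity_indices(x, y)
```

    A parameter with correlation near zero contributes little to FRS variability and can be screened out.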

  20. Research into Uncertainty in Measurement of Seawater Chemical Oxygen Demand by Potassium Iodide-Alkaline Potassium Permanganate Determination Method.

    OpenAIRE

    Zhang, Shiqiang; Guo, Changsong

    2007-01-01

    Glucose and L-glutamic acid were used to prepare the standard substance in a 1:1 ratio, and artificial seawater and the standard substance were used to prepare a series of standard solutions. The distribution pattern of uncertainty in measurement of seawater COD was obtained from the measured results of the series of standard solutions by the potassium iodide-alkaline potassium permanganate determination method. The distribution pattern is as follows: uncertainty in measurement is…

  1. Validation of the Consumer Values versus Perceived Product Attributes Model Measuring the Purchase of Athletic Team Merchandise

    Science.gov (United States)

    Lee, Donghun; Byon, Kevin K.; Schoenstedt, Linda; Johns, Gary; Bussell, Leigh Ann; Choi, Hwansuk

    2012-01-01

    Various consumer values and perceived product attributes trigger consumptive behaviors of athletic team merchandise (Lee, Trail, Kwon, & Anderson, 2011). Likewise, using a principal component analysis technique on a student sample, a measurement scale was proposed that consisted of nine factors affecting the purchase of athletic team…

  2. Impact of atmospheric state uncertainties on retrieved XCO2 columns from laser differential absorption spectroscopy measurements

    Science.gov (United States)

    Zaccheo, T. Scott; Pernini, Timothy; Snell, Hilary E.; Browell, Edward V.

    2014-01-01

    This work assesses the impact of uncertainties in atmospheric state knowledge on retrievals of carbon dioxide column amounts (XCO2) from laser differential absorption spectroscopy (LAS) measurements. LAS estimates of XCO2 columns are normally derived not only from differential absorption observations but also from measured or prior knowledge of atmospheric state that includes temperature, moisture, and pressure along the viewing path. In the case of global space-based monitoring systems, it is often difficult if not impossible to provide collocated in situ measurements of atmospheric state for all observations, so retrievals often rely on collocated remote-sensed data or values derived from numerical weather prediction (NWP) models to describe the atmospheric state. A radiative transfer-based simulation framework, combined with representative global upper-air observations and matched NWP profiles, was used to assess the impact of model differences on estimates of column CO2 and O2 concentrations. These analyses focus on characterizing these errors for LAS measurements of CO2 in the 1.57-μm region and of O2 in the 1.27-μm region. The results provide a set of signal-to-noise metrics that characterize the errors in retrieved values associated with uncertainties in atmospheric state and provide a method for selecting optimal differential absorption line pairs to minimize the impact of these noise terms.

  3. A First Look at the Impact of NNNLO Theory Uncertainties on Top Mass Measurements at the ILC

    CERN Document Server

    Simon, Frank

    2016-01-01

    A scan of the top production threshold at a future electron-positron collider provides the possibility for a precise measurement of the top quark mass in theoretically well-defined mass schemes. With statistical uncertainties of 20 MeV or below, systematics will likely dominate the total uncertainty of the measurement. This contribution presents a first look at the impact of the renormalization scale uncertainties in recent NNNLO calculations of the top pair production cross section in the threshold region on the measurement of the top quark mass at the International Linear Collider.

  4. Solar Irradiances Measured using SPN1 Radiometers: Uncertainties and Clues for Development

    Energy Technology Data Exchange (ETDEWEB)

    Badosa, Jordi; Wood, John; Blanc, Philippe; Long, Charles N.; Vuilleumier, Laurent; Demengel, Dominique; Haeffelin, Martial

    2014-12-08

    The fast development of solar radiation and energy applications, such as photovoltaic and solar thermodynamic systems, has increased the need for solar radiation measurement and monitoring, not only for the global component but also the diffuse and direct. End users look for the best compromise between getting close to state-of-the-art measurements and keeping capital, maintenance and operating costs to a minimum. Among the existing commercial options, SPN1 is a relatively low cost solar radiometer that estimates global and diffuse solar irradiances from seven thermopile sensors under a shading mask and without moving parts. This work presents a comprehensive study of SPN1 accuracy and sources of uncertainty, which results from laboratory experiments, numerical modeling and comparison studies between measurements from this sensor and state-of-the art instruments for six diverse sites. Several clues are provided for improving the SPN1 accuracy and agreement with state-of-the-art measurements.

  5. Estimation of Uncertainty in Tracer Gas Measurement of Air Change Rates

    Directory of Open Access Journals (Sweden)

    Atsushi Iizuka

    2010-12-01

    Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, the task is complicated by the fact that many buildings are not a single fully mixed zone, which means many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of air change rates in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of…
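
    For the simplest case of a single fully mixed zone, the air change rate follows from the exponential decay of tracer concentration, C(t) = C0·exp(-N·t). A minimal sketch of that standard calculation (the function name and the numbers are illustrative, not from the study):

```python
import math

def air_change_rate(c0, ct, hours):
    """Air change rate N (1/h) for one fully mixed zone from tracer
    decay: C(t) = C0 * exp(-N * t), so N = ln(C0/Ct) / t."""
    return math.log(c0 / ct) / hours

# Example: tracer concentration falls from 50 ppm to 10 ppm in 4 h
n = air_change_rate(50.0, 10.0, 4.0)
```

    With multiple zones, inter-zonal flows enter the balance equations and a single tracer no longer determines all rates, which is the information shortage the abstract refers to.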

  6. Solar irradiances measured using SPN1 radiometers: uncertainties and clues for development

    Directory of Open Access Journals (Sweden)

    J. Badosa

    2014-08-01

    The fast development of solar radiation and energy applications, such as photovoltaic and solar thermodynamic systems, has increased the need for solar radiation measurement and monitoring, not only for the global component but also for the diffuse and direct. End users look for the best compromise between getting close to state-of-the-art measurements and keeping low capital, maintenance and operating costs. Among the existing commercial options, SPN1 is a relatively low cost solar radiometer that estimates global and diffuse solar irradiances from seven thermopile sensors under a shading mask and without moving parts. This work presents a comprehensive study of SPN1 accuracy and sources of uncertainty, which results from laboratory experiments, numerical modeling and comparison studies between measurements from this sensor and state-of-the-art instruments for six diverse sites. Several clues are provided for improving the SPN1 accuracy and agreement with state-of-the-art measurements.

  7. Study of uncertainties of height measurements of monoatomic steps on Si 5 × 5 using DFT

    Science.gov (United States)

    Charvátová Campbell, Anna; Jelínek, Pavel; Klapetek, Petr

    2017-03-01

    The development of nanotechnology gives rise to new demands on standards for dimensional measurements. Monoatomic steps on, e.g., silicon are a suitable length standard with a very low nominal value. The quantum-mechanical nature of objects consisting of only a few atomic layers in one or more dimensions can no longer be neglected and it is necessary to make a transition from the classical picture to a quantum approach in the field of uncertainty analysis. In this contribution, sources of uncertainty for height measurements using atomic force microscopy (AFM) in contact mode are discussed. Results of density functional theory (DFT) modeling of AFM scans on a monoatomic step on silicon 5 × 5 are presented. Van der Waals forces for the interaction of a spherical tip and an infinite step are calculated classically. Height measurements in constant force mode at different forces are simulated. In our approach, we model the tip apex and the monoatomic step as systems of individual atoms. As interatomic forces act on the sample and the tip of the microscope, the atoms of both relax in order to reach equilibrium positions. This leads to changes in those quantities that are finally interpreted as the resultant height of the step. The presence of van der Waals forces induces differences between the forces acting on atoms at different distances of the step. The behavior of different tips is studied along with their impact on the resulting AFM scans. Because the shape of the tip apex is usually unknown in real experiments, this variance in the height result due to different tips is interpreted as a source of uncertainty.

  8. Measuring Young's Modulus the Easy Way, and Tracing the Effects of Measurement Uncertainties

    Science.gov (United States)

    Nunn, John

    2015-01-01

    The speed of sound in a solid is determined by the density and elasticity of the material. Young's modulus can therefore be calculated once the density and the speed of sound in the solid are measured. The density can be measured relatively easily, and the speed of sound through a rod can be measured very inexpensively by setting up a longitudinal…

  9. Environmental Uncertainty, Performance Measure Variety and Perceived Performance in Icelandic Companies

    DEFF Research Database (Denmark)

    Rikhardsson, Pall; Sigurjonsson, Throstur Olaf; Arnardottir, Audur Arna

    The use of performance measures and performance measurement frameworks has increased significantly in recent years. The type and variety of performance measures in use has been researched in various countries and linked to different variables such as the external environment, performance measurement frameworks, and management characteristics. This paper reports the results of a study, carried out at year end 2013, of the use of performance measures by Icelandic companies and the links to perceived environmental uncertainty, management satisfaction with the performance measurement system, and the perceived performance of the company. The sample was the 300 largest companies in Iceland and the response rate was 27%. Compared to other studies, the majority of the respondents use a surprisingly high number of different measures, both financial and non-financial. This made testing of the three…

  10. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    Science.gov (United States)

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
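
    Approach (ii) above can be sketched as follows, using a small set of hypothetical blood-alcohol proficiency-test results. Combining the lab's bias and the spread of its deviations from the consensus means, with a coverage factor k = 2, is one common convention; the article's exact procedure may differ, and `uncertainty_from_pt` and the data are illustrative.

```python
from statistics import mean, stdev

def uncertainty_from_pt(lab_results, consensus_means, k=2.0):
    """Estimate bias and expanded uncertainty from a lab's proficiency
    test history: differences between the lab's results and the
    participant consensus means give the bias (their mean) and the
    standard uncertainty u (their standard deviation); U = k * u."""
    diffs = [x - m for x, m in zip(lab_results, consensus_means)]
    bias = mean(diffs)
    u = stdev(diffs)
    return bias, k * u

# Hypothetical PT results vs. consensus means (g/100 mL)
lab = [0.101, 0.148, 0.205, 0.082, 0.119]
ref = [0.100, 0.150, 0.200, 0.080, 0.120]
bias, U = uncertainty_from_pt(lab, ref)
```

    As the abstract notes, this requires participation in enough tests for the spread to be meaningful; with few tests, the interlaboratory precision itself is used instead.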

  11. Weak Anderson localisation in reverberation rooms and its effect on the uncertainty of sound power measurements

    DEFF Research Database (Denmark)

    Jacobsen, Finn

    2011-01-01

    The effect known as 'weak Anderson localisation', 'coherent backscattering' or 'enhanced backscattering' is a physical phenomenon that occurs in random systems, e.g. disordered media and linear wave systems, including reverberation rooms: the mean square response is increased at the drive point. In a reverberation room this means that one can expect an increase of the reverberant sound field at the position of the source that generates the sound field. This affects the sound power output of the source and is therefore of practical concern. However, because of the stronger direct sound field at the source… for the uncertainty of sound power measurements…

  12. Determination of Al in cake mix: Method validation and estimation of measurement uncertainty

    Science.gov (United States)

    Andrade, G.; Rocha, O.; Junqueira, R.

    2016-07-01

    An analytical method for the determination of Al in cake mix was developed. Acceptable values were obtained for the following parameters: linearity, detection limit (LOD, 5.00 mg kg⁻¹), quantification limit (LOQ, 12.5 mg kg⁻¹), recovery assay values (between 91 and 102%), relative standard deviation under repeatability and within-reproducibility conditions (<20.0%), and measurement uncertainty (<10.0%). The results of the validation process showed that the proposed method is fit for purpose.

  13. Measurement of sub-picoampere direct currents with uncertainties below ten attoamperes

    Science.gov (United States)

    Krause, C.; Drung, D.; Scherer, H.

    2017-02-01

    A new type of the ultrastable low-noise current amplifier (ULCA) is presented. It involves thick-film resistors to achieve a high feedback resistance of 185 GΩ at the input amplifier. An improved noise level of 0.4 fA/√Hz with a 1/f corner of about 30 μHz and an effective input bias current well below 100 aA are demonstrated. For small direct currents, measurement uncertainties below 10 aA are achievable even without current reversal or on/off switching. Above about 1 pA, the stability of the ULCA's resistor network limits the relative measurement uncertainty to about 10 parts per million. The new setup is used to characterize and optimize the noise in the wiring installed on a dilution refrigerator for current measurements on single-electron transport pumps. In a test configuration connected to the wiring in a pulse tube refrigerator, a total noise floor of 0.44 fA/√Hz was achieved including the contributions of amplifier and cryogenic wiring.

  14. Adaptive Particle Filter for Nonparametric Estimation with Measurement Uncertainty in Wireless Sensor Networks.

    Science.gov (United States)

    Li, Xiaofan; Zhao, Yubin; Zhang, Sha; Fan, Xiaopeng

    2016-05-30

    Particle filters (PFs) are widely used for nonlinear signal processing in wireless sensor networks (WSNs). However, measurement uncertainty makes the WSN observations deviate from the actual case and also degrades the estimation accuracy of the PFs. Beyond algorithm design, few works focus on improving the likelihood calculation method, since it can be pre-assumed by a given distribution model. In this paper, we propose a novel PF method, based on a new likelihood fusion method for WSNs, that can further improve the estimation performance. We first use a dynamic Gaussian model to describe the nonparametric features of the measurement uncertainty. Then, we propose a likelihood adaptation method that employs the prior information and a belief factor to reduce the measurement noise. The optimal belief factor is attained by deriving the minimum Kullback-Leibler divergence. The likelihood adaptation method can be integrated into any PF, and we use our method to develop three versions of adaptive PFs for a target tracking system using a wireless sensor network. The simulation and experimental results demonstrate that our likelihood adaptation method greatly improves the estimation performance of PFs in a high-noise environment. In addition, the adaptive PFs are highly adaptable to the environment without imposing computational complexity.
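
    A minimal bootstrap particle filter with an inflated-variance likelihood conveys the flavor of the approach. Here the belief factor is a fixed guess rather than the KL-optimal value derived in the paper, and the model, parameter values and function name are all illustrative.

```python
import math, random

random.seed(1)

def adaptive_pf(observations, n_particles=500, q=0.5, r=1.0, belief=1.5):
    """Bootstrap particle filter for a 1-D random-walk state. The
    measurement-noise variance r is inflated by a 'belief' factor to
    down-weight unreliable observations (a stand-in for the paper's
    KL-divergence-optimal factor)."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # propagate through the random-walk motion model
        particles = [p + random.gauss(0.0, q) for p in particles]
        # weight with the inflated-variance Gaussian likelihood
        var = belief * r
        w = [math.exp(-(z - p) ** 2 / (2 * var)) for p in particles]
        s = sum(w) or 1.0
        w = [wi / s for wi in w]
        estimates.append(sum(wi * p for wi, p in zip(w, particles)))
        # multinomial resampling
        particles = random.choices(particles, weights=w, k=n_particles)
    return estimates

est = adaptive_pf([0.2, 0.5, 0.9, 1.4, 1.6])
```

    Raising the belief factor flattens the likelihood, trading responsiveness for robustness against noisy sensors.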

  15. [Validation of measurement methods and estimation of uncertainty of measurement of chemical agents in the air at workstations].

    Science.gov (United States)

    Dobecki, Marek

    2012-01-01

    This paper reviews the requirements for measurement methods of chemical agents in the air at workstations. European standards, which have the status of Polish standards, comprise requirements and information on sampling strategy, measuring techniques, type of samplers, sampling pumps and methods of occupational exposure evaluation for a given technological process. Measurement methods, including air sampling and the analytical procedure in a laboratory, should be appropriately validated before intended use. In the validation process, selected methods are tested and a budget of uncertainty is set up. The validation procedure that should be implemented in the laboratory, together with suitable statistical tools and the major components of uncertainty to be taken into consideration, is presented in this paper. Methods of quality control, including sampling and laboratory analyses, are discussed. The relative expanded uncertainty of each measurement, expressed as a percentage, should not exceed limit values set depending on the type of occupational exposure (short-term or long-term) and the magnitude of exposure to chemical agents in the work environment.

  16. Constraining uncertainty in the prediction of pollutant transport in rivers allowing for measurement error.

    Science.gov (United States)

    Smith, P.; Beven, K.; Blazkova, S.; Merta, L.

    2003-04-01

    This poster outlines a methodology for the estimation of parameters in an Aggregated Dead Zone (ADZ) model of pollutant transport, by use of an example reach of the River Elbe. Both tracer and continuous water quality measurements are analysed to investigate the relationship between discharge and advective time delay. This includes a study of the effects of different error distributions being applied to the measurement of both variables using Monte-Carlo Markov Chain (MCMC) techniques. The derived relationships between discharge and advective time delay can then be incorporated into the formulation of the ADZ model to allow prediction of pollutant transport given uncertainty in the parameter values. The calibration is demonstrated in a hierarchical framework, giving the potential for the selection of appropriate model structures for the change in transport characteristics with discharge in the river. The value of different types and numbers of measurements are assessed within this framework.

  17. Bayesian Mass Estimates of the Milky Way: Including Measurement Uncertainties with Hierarchical Bayes

    Science.gov (United States)

    Eadie, Gwendolyn M.; Springford, Aaron; Harris, William E.

    2017-02-01

    We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie et al. and Eadie and Harris and builds upon the preliminary reports by Eadie et al. The method uses a distribution function f(E, L) to model the Galaxy and kinematic data from satellite objects, such as globular clusters (GCs), to trace the Galaxy's gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie and Harris, and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and incorporate all possible GC data, finding a cumulative mass profile with Bayesian credible regions. This profile implies a mass within 125 kpc of 4.8 × 10^11 M⊙ with a 95% Bayesian credible region of (4.0-5.8) × 10^11 M⊙. Our results also provide estimates of the true specific energies of all the GCs. By comparing these estimated energies to the measured energies of GCs with complete velocity measurements, we observe that (the few) remote tracers with complete measurements may play a large role in determining a total mass estimate of the Galaxy. Thus, our study stresses the need for more remote tracers with complete velocity measurements.

  18. The role of the uncertainty of measurement of serum creatinine concentrations in the diagnosis of acute kidney injury.

    Science.gov (United States)

    Kin Tekce, Buket; Tekce, Hikmet; Aktas, Gulali; Uyeturk, Ugur

    2016-01-01

    Uncertainty of measurement is the numeric expression of the errors associated with all measurements taken in clinical laboratories. Serum creatinine concentration is the most common diagnostic marker for acute kidney injury. The goal of this study was to determine the effect of the uncertainty of measurement of serum creatinine concentrations on the diagnosis of acute kidney injury. We calculated the uncertainty of measurement of serum creatinine according to the Nordtest Guide. Retrospectively, we identified 289 patients who were evaluated for acute kidney injury. Of the total patient pool, 233 were diagnosed with acute kidney injury using the AKIN classification scheme and then were compared using statistical analysis. We determined nine probabilities of the uncertainty of measurement of serum creatinine concentrations. There was a statistically significant difference in the number of patients diagnosed with acute kidney injury when uncertainty of measurement was taken into consideration (first probability compared to the fifth p = 0.023 and first probability compared to the ninth p = 0.012). We found that the uncertainty of measurement for serum creatinine concentrations was an important factor for correctly diagnosing acute kidney injury. In addition, based on the AKIN classification scheme, minimizing the total allowable error levels for serum creatinine concentrations is necessary for the accurate diagnosis of acute kidney injury by clinicians.
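
    The effect described can be illustrated by applying the AKIN stage-1 rise criterion (an increase of at least 0.3 mg/dL) with and without an allowance for the expanded uncertainty U on each creatinine result. The function and the numbers below are illustrative, not the study's data or its exact calculation.

```python
def aki_stage1(baseline, current, U):
    """Evaluate the AKIN stage-1 rise criterion (>= 0.3 mg/dL) under
    an expanded measurement uncertainty U (mg/dL) on each result.
    The worst case for the difference of two results is 2*U."""
    rise = current - baseline
    return {
        "nominal": rise >= 0.3,
        "lower_bound": (rise - 2 * U) >= 0.3,  # errors mask the rise
        "upper_bound": (rise + 2 * U) >= 0.3,  # errors mimic a rise
    }

verdicts = aki_stage1(baseline=1.0, current=1.32, U=0.06)
# The nominal rise of 0.32 mg/dL meets the criterion, but the
# lower bound does not: the diagnosis flips within the uncertainty.
```

    Cases near the threshold are exactly where tightening the total allowable error for creatinine, as the abstract recommends, matters most.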

  19. Intercomparison of Climate Data Sets as a Measure of Observational Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Covey, C; Achuta Rao, K M; Fiorino, M; Gleckler, P J; Taylor, K E; Wehner, M F

    2002-02-22

    Uncertainties in climate observations are revealed when alternate observationally based data sets are compared. General circulation model-based "reanalyses" of meteorological observations will yield different results from different models, even if identical sets of raw unanalyzed data form their starting points. We have examined 25 longitude-latitude fields (including selected levels for three-dimensional quantities) encompassing atmospheric climate variables for which the PCMDI observational database contains two or more high-quality sources. For the most part we compare ECMWF with NCEP reanalysis. In some cases, we compare in situ and/or satellite-derived data with reanalysis. To obtain an overview of the differences for all 25 fields, we use a graphical technique developed for climate model diagnosis: a "portrait diagram" displaying root-mean-square differences between the alternate data sources. With a few exceptions (arising from the requirement that RMS differences be normalized to accommodate different units of variables) the portrait diagrams indicate areas of agreement and disagreement that can be confirmed by examining traditional graphics such as zonal mean plots. In accord with conventional wisdom, the greatest agreement between alternate data sets, and hence the smallest implied observational uncertainty, occurs for upper tropospheric zonal wind. We also find fairly good agreement between reanalysis and more direct measures of precipitation, suggesting that modern observational systems are resolving some long-standing problems with its measurement.
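
    The normalized RMS difference underlying such a portrait diagram is straightforward to compute. A sketch with toy one-dimensional "fields" (real portrait diagrams use area-weighted longitude-latitude grids, and the normalization convention here is one plausible choice, not necessarily the authors'):

```python
from math import sqrt

def normalized_rms_diff(field_a, field_b):
    """Root-mean-square difference between two gridded fields,
    normalized by the spatial standard deviation of the first, so
    that fields with different units can share one color scale."""
    n = len(field_a)
    rms = sqrt(sum((a - b) ** 2 for a, b in zip(field_a, field_b)) / n)
    mean_a = sum(field_a) / n
    std_a = sqrt(sum((a - mean_a) ** 2 for a in field_a) / n)
    return rms / std_a

# Two toy 'zonal wind' profiles from alternate reanalyses
ecmwf = [10.0, 20.0, 30.0, 20.0, 10.0]
ncep  = [11.0, 19.0, 31.0, 21.0,  9.0]
score = normalized_rms_diff(ecmwf, ncep)
```

    Small scores (here about 0.13) indicate close agreement between the alternate data sources for that variable.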

  20. False positive and false negative radon measurement results due to uncertainties in seasonal correction factors

    Energy Technology Data Exchange (ETDEWEB)

    Cliff, K.D.; Miles, J.C.H.; Naismith, S.P. [National Radiological Protection Board, Chilton (United Kingdom)

    1994-12-31

    Data from the UK national survey of radon in 2300 homes have been re-analysed to determine the uncertainty in seasonal correction factors applied to measurements of less than 1 year. The required correction factor for each six-month result was calculated from the known annual average for the appropriate home. The seasonal correction factors derived for each month were found to be approximately log-normally distributed, with an average geometric standard deviation of 1.36. Following this initial survey, radon measurements have been made in more than 80,000 homes in southwest England to determine whether they are above the UK radon Action Level of 200 Bq m⁻³. The measurements were carried out over three months in each case using etched track detectors in two locations in each home, and the results were corrected for the average seasonal variation found in the original UK study of radon in homes. Because of the uncertainty in the seasonal correction factors, households with between 130 and 300 Bq m⁻³ were advised to have a second three-month measurement in a different season before deciding whether or not to take remedial action. More than 7000 homes were remonitored for this purpose. The results are analysed to show the number of false positive and false negative results that would have been reported if advice had been based solely on the initial measurement. It is shown that the present scheme results in extremely small numbers of false positive and false negative results. (author)
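
    A rough Monte Carlo sketch of the mechanism: assume a season-corrected short-term result is log-normally distributed about the true annual mean with the geometric standard deviation of 1.36 reported above, and compare it against the 200 Bq/m³ action level. The function name, the single-measurement simplification and the example concentration are illustrative.

```python
import math, random

random.seed(0)

def misclassification_rates(true_annual, action_level=200.0,
                            gsd=1.36, n=100_000):
    """Monte Carlo estimate of false positive / false negative rates
    when one season-corrected measurement, log-normal about the true
    annual mean with geometric standard deviation gsd, is compared
    against the action level."""
    sigma = math.log(gsd)
    above_truth = true_annual >= action_level
    fp = fn = 0
    for _ in range(n):
        measured = true_annual * math.exp(random.gauss(0.0, sigma))
        if measured >= action_level and not above_truth:
            fp += 1
        elif measured < action_level and above_truth:
            fn += 1
    return fp / n, fn / n

# A home truly at 150 Bq/m3: chance that a single corrected
# three-month result wrongly exceeds the action level
fp_rate, _ = misclassification_rates(150.0)
```

    Rates of this size for homes in the 130-300 Bq/m³ band are what motivates the second seasonal measurement before advising remediation.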

  1. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    Science.gov (United States)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.

  2. Changes in Handset Performance Measures due to Spherical Radiation Pattern Measurement Uncertainty

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ødum; Pedersen, Gert Frølund

    An important characteristic of a mobile handset is its ability to receive and transmit power. One way to characterize the performance of a handset in this respect is to use measurements of the spherical radiation pattern, from which the total radiated power (TRP), total isotropic sensitivity (TIS), and mean effective gain (MEG) can be computed. Often this kind of measurement is made with a phantom head next to the handset in order to simulate the influence of a real user. The measured radiation patterns are only expected to be repeatable if the same setup is used, i.e., the same phantom and the same mounting of the handset on the phantom. In this work the influence of mounting errors on the TRP, TIS, and MEG is investigated. Knowledge about the error due to incorrect mounting is necessary in determining requirements for both the mounting accuracy as well as for other parts of the measurement…

  3. Quantifying uncertainty in the measurement of arsenic in suspended particulate matter by Atomic Absorption Spectrometry with hydride generator

    Directory of Open Access Journals (Sweden)

    Ahuja Tarushee

    2011-04-01

    Full Text Available Abstract Arsenic is a toxic element that creates several problems in human beings, especially when inhaled through air. Accurate and precise measurement of arsenic in suspended particulate matter (SPM) is therefore of prime importance, as it gives information about the level of toxicity in the environment so that preventive measures can be taken in the affected areas. Quality assurance is equally important in the measurement of arsenic in SPM samples before making any decision. The quality and reliability of data on such volatile elements depend upon the measurement of the uncertainty of each step involved, from sampling to analysis. Analytical results quantifying uncertainty give a measure of the confidence level of the laboratory concerned. The main objective of this study was therefore to determine the arsenic content in SPM samples with an uncertainty budget and to identify the various potential sources of uncertainty that affect the results. With these facts in mind, we selected seven diverse sites in Delhi (the national capital of India) for quantification of the arsenic content in SPM samples with an uncertainty budget, covering sampling by high-volume sampler (HVS) through analysis by Atomic Absorption Spectrometry with hydride generation (AAS-HG). Many steps are involved from sampling to the final result, and we have considered the various potential sources of uncertainty. The calculation of uncertainty is based on the ISO/IEC 17025:2005 document and the EURACHEM guideline. It was found that the final results depend mainly on the uncertainties due to repeatability, the final volume prepared for analysis, the weighing balance, and sampling by HVS. After analysis of the data from the seven diverse sites of Delhi, it was concluded that during the period from 31 January 2008 to 7 February 2008 the arsenic concentration varied from 1.44 ± 0.25 to 5.58 ± 0.55 ng/m3 at the 95% confidence level (k = 2).
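
    The uncertainty budget described above follows the standard GUM approach: independent relative standard uncertainties combine in quadrature, and a coverage factor k = 2 gives the ~95% expanded uncertainty. A minimal sketch with illustrative component values (not the paper's actual budget):

```python
import math

# Relative standard uncertainties from the main sources named in the abstract.
# The numbers are illustrative assumptions, not values from the study.
components = {
    "repeatability": 0.040,
    "final_volume": 0.015,
    "weighing_balance": 0.010,
    "hvs_sampling": 0.030,
}

# Combined standard uncertainty: root-sum-of-squares of independent components.
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty at ~95% confidence (coverage factor k = 2).
k = 2
U_expanded = k * u_combined

result = 5.58   # ng/m3, example concentration from the abstract
print(f"{result} ± {U_expanded * result:.2f} ng/m3 (k = 2)")
```

    The dominant component is easy to read off such a budget: it is the one whose square contributes most to `u_combined`.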

  4. CFCl3 (CFC-11): UV Absorption Spectrum Temperature Dependence Measurements and the Impact on Atmospheric Lifetime and Uncertainty

    Science.gov (United States)

    Mcgillen, Max R.; Fleming, Eric L.; Jackman, Charles H.; Burkholder, James B.

    2014-01-01

    CFCl3 (CFC-11) is both an atmospheric ozone-depleting substance and a potent greenhouse gas that is removed primarily via stratospheric UV photolysis. Uncertainty in the temperature dependence of its UV absorption spectrum is a significant contributor to the overall uncertainty in its global lifetime and, thus, to model calculations of stratospheric ozone recovery and climate change. In this work, the CFC-11 UV absorption spectrum was measured over a range of wavelengths (184.95–230 nm) and temperatures (216–296 K). We report a spectrum temperature dependence that is weaker than currently recommended for use in atmospheric models. The impact on its atmospheric lifetime was quantified using a 2-D model and the spectrum parameterization developed in this work. The obtained globally and annually averaged lifetime was 58.1 ± 0.7 years (2-sigma uncertainty due solely to the spectrum uncertainty). The lifetime is slightly reduced, and its uncertainty significantly reduced, relative to that obtained using current spectrum recommendations.

  5. A Brief Discussion of China's Fair Value Measurement Attribute

    Institute of Scientific and Technical Information of China (English)

    吴微

    2015-01-01

    With the rapid development of the economy, the historical cost measurement attribute has become increasingly unable to meet the information needs of report users in China. Fair value measurement can provide investors with information highly relevant to current conditions and thus help them make correct decisions; it has become one of the accounting measurement attributes recognized in developed countries. The system of enterprise accounting standards promulgated by China in February 2006 extensively introduced the fair value measurement attribute, and its introduction has had a certain influence on China's current economy.

  6. Roundhouse (RND) Mountain Top Research Site: Measurements and Uncertainties for Winter Alpine Weather Conditions

    Science.gov (United States)

    Gultepe, I.; Isaac, G. A.; Joe, P.; Kucera, P. A.; Theriault, J. M.; Fisico, T.

    2014-01-01

    The objective of this work is to better understand and summarize the mountain meteorological observations collected during the Science of Nowcasting Winter Weather for the Vancouver 2010 Olympics and Paralympics (SNOW-V10) project, which was supported by the Fog Remote Sensing and Modeling (FRAM) project. The Roundhouse (RND) meteorological station was located 1,856 m above sea level and is subject to extreme winter weather conditions. Below this site there were three additional observation sites, at 1,640, 1,320, and 774 m. These four stations provided some or all of the following measurements at 1-min resolution: precipitation rate (PR) and amount, cloud/fog microphysics, 3D wind speed (horizontal wind speed, Uh; vertical air velocity, wa), visibility (Vis), infrared (IR) and shortwave (SW) radiative fluxes, temperature (T), relative humidity with respect to water (RHw), and aerosol observations. In this work, comparisons are made to assess the uncertainties and variability in the measurements of Vis, RHw, T, PR, and wind for various winter weather conditions. Ground-based cloud imaging probe (GCIP) measurements of snow particles, together with profiling microwave radiometer (PMWR) data, have also been used to assess icing conditions. Overall, the conclusions suggest that uncertainties in the measurements of Vis, PR, T, and RH can be as large as 50%, >60%, 50%, and >20%, respectively, and these numbers may increase depending on Uh, T, Vis, and PR magnitude. Variability of observations along the Whistler Mountain slope (~500 m) suggested that, to verify the models, model spatial resolution should be better than 100 m and time scales better than 1 min. It is also concluded that differences between observed and model-based parameters are strongly related to a model's capability to accurately predict liquid water content (LWC), PR, and RHw over complex topography.

  7. Influence of Spherical Radiation Pattern Measurement Uncertainty on Handset Performance Measures

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ødum; Pedersen, Gert Frølund

    2005-01-01

    An important characteristic of a mobile handset is its ability to receive and transmit power. One way to characterize the performance of a handset in this respect is to use measurements of the spherical radiation pattern, from which the total radiated power (TRP) and total isotropic sensitivity (TIS) … in the performance measures are investigated for both the GSM-900 and the GSM-1800 bands. Despite the deliberately large deviations from the reference position, the changes in TRP and TIS are generally within ±0.5 dB, with a maximum of about 1.4 dB. For the MEG values the results depend on the orientation…

  8. Sources of uncertainty in eddy covariance ozone flux measurements made by dry chemiluminescence fast response analysers

    Directory of Open Access Journals (Sweden)

    J. B. A. Muller

    2009-09-01

    Full Text Available Eddy covariance ozone flux measurements are the most direct way to estimate ozone removal near the surface. Over vegetated surfaces, high-quality ozone fluxes are required to probe the underlying processes, for which it is necessary to separate the flux into its stomatal and non-stomatal deposition components. Detailed knowledge of the processes that control non-stomatal deposition is limited, and more accurate ozone flux measurements are needed to quantify this component of the deposited flux. We present a systematic intercomparison study of eddy covariance ozone flux measurements made using two fast-response dry chemiluminescence analysers. Ozone deposition was measured over a well-characterised managed grassland near Edinburgh, Scotland, during August 2007. A data quality control procedure specific to these analysers is introduced. Absolute ozone fluxes were calculated from the relative signals of the dry chemiluminescence analysers using three different calibration methods, and the results are compared for both analysers. It is shown that the error in the fitted parameters required for the flux calculations is a substantial source of uncertainty in the fluxes. The choice of calculation method itself can also constitute an uncertainty in the flux, as the fluxes calculated by the three methods do not agree within error at all times. This finding highlights the need for a consistent and rigorous approach for comparable datasets, such as in flux networks. Ozone fluxes calculated by one of the methods were then used to compare the two analysers in more detail. This systematic analyser comparison reveals half-hourly flux values differing by up to a factor of two at times, with the difference in mean hourly flux ranging from 0 to 23% and an error in the mean daily flux of ±12%. The comparison of analysers shows that the agreement in fluxes is excellent for some days but that there is an underlying uncertainty as a result of…
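
    For context, the eddy covariance flux itself is the mean covariance of the fluctuations of vertical wind speed and concentration over an averaging period. A minimal sketch on synthetic data (real processing adds despiking, detrending, coordinate rotation, lag correction, and the absolute calibration discussed above):

```python
import numpy as np

# Synthetic 30-min record at 20 Hz. The ozone signal is made anticorrelated
# with vertical wind to mimic deposition; all numbers are illustrative.
rng = np.random.default_rng(1)
n = 36000
w = rng.normal(0.0, 0.3, n)                      # vertical wind (m/s)
c = 60.0 - 5.0 * w + rng.normal(0.0, 2.0, n)     # ozone (ug/m3)

# Eddy covariance flux: mean product of the fluctuations (primes).
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)                # ug m-2 s-1; negative => deposition
```

    With a dry chemiluminescence analyser, `c` is only a relative signal, which is why the calibration step the abstract discusses dominates the uncertainty of the absolute flux.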

  9. Evaluation of Uncertainties in Measuring Particulate Matter Emission Factors from Atmospheric Fugitive Sources Using Optical Remote Sensing

    Science.gov (United States)

    Yuen, W.; Ma, Q.; Du, K.; Koloutsou-Vakakis, S.; Rood, M. J.

    2015-12-01

    Measurements of particulate matter (PM) emissions generated from fugitive sources are of interest in air pollution studies, since such emissions vary widely both spatially and temporally. This research focuses on determining the uncertainties in quantifying fugitive PM emission factors (EFs) generated by mobile vehicles using a vertical scanning micro-pulse lidar (MPL). The goal of this research is to identify the greatest sources of uncertainty in the applied lidar technique for determining fugitive PM EFs, and to recommend methods to reduce the uncertainties in this measurement. The MPL detects the PM plumes generated by mobile fugitive sources that are carried downwind to the MPL's vertical scanning plane. Range-resolved MPL signals are measured, corrected, and converted to light extinction coefficients through inversion of the lidar equation and calculation of the lidar ratio. In this research, both the near-end and far-end lidar equation inversion methods are considered. Range-resolved PM mass concentrations are then determined from the extinction coefficient measurements using the measured mass extinction efficiency (MEE) value, which is an intensive PM property. MEE is determined from collocated PM mass concentration and light extinction measurements, provided respectively by a DustTrak and an open-path laser transmissometer. These PM mass concentrations are then integrated with wind information, the duration of the plume event, and the vehicle distance travelled to obtain fugitive PM EFs. To obtain the uncertainty of the PM EFs, uncertainties in the MPL signals, lidar ratio, MEE, and wind variation are considered. The error propagation method is applied to each of the above intermediate steps to aggregate the uncertainty sources. Results include determination of the uncertainties in each intermediate step, and a comparison of uncertainties between the near-end and far-end lidar equation inversion methods.
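
    The error-propagation step can be sketched for the multiplicative part of such a retrieval, where the relative uncertainties of independent factors add in quadrature. The component names and values below are illustrative assumptions, not the study's results:

```python
import math

# Illustrative relative standard uncertainties of independent factors entering
# a multiplicative EF retrieval (signal -> extinction -> mass -> EF).
rel_u = {
    "mpl_signal": 0.08,
    "lidar_ratio": 0.15,
    "mee": 0.10,
    "wind": 0.12,
}

# First-order error propagation for a product/quotient model:
# relative uncertainties combine in quadrature.
rel_u_ef = math.sqrt(sum(r ** 2 for r in rel_u.values()))
print(f"relative uncertainty of EF ~ {rel_u_ef:.1%}")
```

    Ranking the squared terms immediately shows which intermediate step (here, the assumed lidar ratio) dominates and is worth improving first.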

  10. Welfare and Market Impacts of Food Safety Measures in China: Results from Urban Consumers’ Valuation of Product Attributes

    Institute of Scientific and Technical Information of China (English)

    David L.Ortega; H.Holly Wang; Nicole J.Olynk Widmar

    2014-01-01

    This study provides an economic assessment of various food safety measures in China. A choice experiment approach is used to elicit Chinese consumer preferences for various food safety attributes using data from a 2008 urban consumer survey. An alternative welfare calculation is used to model the aggregate market impacts of select food safety measures. Our results show that the largest welfare gains are found in the current government-run certification program. The implementation of a third-party certification system, a traceability network, and a product label would generate significant value and would help reduce current system inefficiencies in China. This study builds on previous research and provides an alternative approach for calculating consumer valuation of safety and quality attributes that can be used to estimate aggregate economic and welfare impacts.

  11. Evaluating the capabilities and uncertainties of droplet measurements for the fog droplet spectrometer (FM-100

    Directory of Open Access Journals (Sweden)

    J. K. Spiegel

    2012-05-01

    Full Text Available Droplet size spectra measurements are crucial to obtain a quantitative microphysical description of clouds and fog. However, cloud droplet size measurements are subject to various uncertainties. This work focuses on the evaluation of two key measurement uncertainties arising during cloud droplet size measurements with a conventional droplet size spectrometer (FM-100): first, we addressed the precision with which droplets can be sized with the FM-100 on the basis of Mie theory. We deduced error assumptions and proposed how to correct measured size distributions for these errors by redistributing the measured droplet size distribution using a stochastic approach. Second, based on a literature study, we derived corrections for particle losses during sampling with the FM-100. We applied both corrections to cloud droplet size spectra measured at the high-alpine site Jungfraujoch for a temperature range from 0 °C to 11 °C. We show that Mie scattering led to spikes in the droplet size distributions when the default sizing procedure was used, while the stochastic approach reproduced the ambient size distribution adequately. A detailed analysis of the FM-100 sampling efficiency revealed that particle losses were typically below 10% for droplet diameters up to 10 μm. For larger droplets, particle losses can increase up to 90% for the largest droplets of 50 μm at ambient wind speeds below 4.4 m s−1, and even to >90% for larger angles between the instrument orientation and the wind vector (sampling angle) at higher wind speeds. Comparisons of the FM-100 to other reference instruments revealed that the total liquid water content (LWC) measured by the FM-100 was more sensitive to particle losses than to re-sizing based on Mie scattering, while the total number concentration was only marginally influenced by particle losses. As a consequence, for further LWC measurements with the FM-100 we strongly recommend considering (1) the error arising due to Mie…

  12. Evaluating the capabilities and uncertainties of droplet measurements for the fog droplet spectrometer (FM-100

    Directory of Open Access Journals (Sweden)

    J. K. Spiegel

    2012-09-01

    Full Text Available Droplet size spectra measurements are crucial to obtain a quantitative microphysical description of clouds and fog. However, cloud droplet size measurements are subject to various uncertainties. This work focuses on the error analysis of two key measurement uncertainties arising during cloud droplet size measurements with a conventional droplet size spectrometer (FM-100): first, we addressed the precision with which droplets can be sized with the FM-100 on the basis of Mie theory. We deduced error assumptions and proposed a new method for correcting measured size distributions for these errors by redistributing the measured droplet size distribution using a stochastic approach. Second, based on a literature study, we summarized corrections for particle losses during sampling with the FM-100. We applied both corrections to cloud droplet size spectra measured at the high-alpine site Jungfraujoch for a temperature range from 0 °C to 11 °C. We showed that Mie scattering led to spikes in the droplet size distributions when the default sizing procedure was used, while the new stochastic approach reproduced the ambient size distribution adequately. A detailed analysis of the FM-100 sampling efficiency revealed that particle losses were typically below 10% for droplet diameters up to 10 μm. For larger droplets, particle losses can increase up to 90% for the largest droplets of 50 μm at ambient wind speeds below 4.4 m s−1, and even to >90% for larger angles between the instrument orientation and the wind vector (sampling angle) at higher wind speeds. Comparisons of the FM-100 to other reference instruments revealed that the total liquid water content (LWC) measured by the FM-100 was more sensitive to particle losses than to re-sizing based on Mie scattering, while the total number concentration was only marginally influenced by particle losses. Consequently, for further LWC measurements with the FM-100 we strongly recommend considering (1) the…

  13. Analysis of Uncertainty in a Middle-Cost Device for 3D Measurements in BIM Perspective.

    Science.gov (United States)

    Sánchez, Alonso; Naranjo, José-Manuel; Jiménez, Antonio; González, Alfonso

    2016-09-22

    Medium-cost devices equipped with sensors are being developed to obtain 3D measurements. Some allow for generating geometric models and point clouds. Nevertheless, the accuracy of these measurements should be evaluated, taking into account the requirements of the Building Information Model (BIM). This paper analyzes the uncertainty in outdoor/indoor three-dimensional coordinate measurements and point clouds (using Spherical Accuracy Standard (SAS) methods) for Eyes Map, a medium-cost tablet manufactured by the e-Capture Research & Development Company, Mérida, Spain. To achieve this, in outdoor tests the coordinates of targets were measured with this device from 1 to 6 m, and point clouds were obtained. Subsequently, these were compared to the coordinates of the same targets measured by a Total Station. The average Euclidean distance error was 0.005-0.027 m for measurements by Photogrammetry and 0.013-0.021 m for the point clouds. All of them satisfy the tolerance for point cloud acquisition (0.051 m) according to the BIM Guide for 3D Imaging (General Services Administration); similar results were obtained in the indoor tests, with values of 0.022 m. In this paper, we establish the optimal distances for observations in both the Photogrammetry and 3D Photomodeling modes (outdoor) and point out some working conditions to avoid in indoor environments. Finally, the authors discuss some recommendations for improving the performance and working methods of the device.
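
    The accuracy check described above reduces to comparing Euclidean distances between device-measured and Total Station reference coordinates against the BIM Guide tolerance. A minimal sketch with made-up coordinates:

```python
import numpy as np

# Hypothetical target coordinates (x, y, z) in metres: device vs. Total Station.
device = np.array([
    [1.002, 2.001, 0.499],
    [3.010, 1.495, 0.752],
])
reference = np.array([
    [1.000, 2.000, 0.500],
    [3.000, 1.500, 0.750],
])

# Per-target Euclidean distance errors and their mean.
errors = np.linalg.norm(device - reference, axis=1)
mean_error = errors.mean()

# GSA BIM Guide for 3D Imaging tolerance for point cloud acquisition (m).
within_tolerance = mean_error <= 0.051
```

    The same comparison, repeated at each observation distance, yields the distance-dependent error ranges the abstract reports.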

  14. Analysis of Uncertainty in a Middle-Cost Device for 3D Measurements in BIM Perspective

    Science.gov (United States)

    Sánchez, Alonso; Naranjo, José-Manuel; Jiménez, Antonio; González, Alfonso

    2016-01-01

    Medium-cost devices equipped with sensors are being developed to obtain 3D measurements. Some allow for generating geometric models and point clouds. Nevertheless, the accuracy of these measurements should be evaluated, taking into account the requirements of the Building Information Model (BIM). This paper analyzes the uncertainty in outdoor/indoor three-dimensional coordinate measurements and point clouds (using Spherical Accuracy Standard (SAS) methods) for Eyes Map, a medium-cost tablet manufactured by the e-Capture Research & Development Company, Mérida, Spain. To achieve this, in outdoor tests the coordinates of targets were measured with this device from 1 to 6 m, and point clouds were obtained. Subsequently, these were compared to the coordinates of the same targets measured by a Total Station. The average Euclidean distance error was 0.005–0.027 m for measurements by Photogrammetry and 0.013–0.021 m for the point clouds. All of them satisfy the tolerance for point cloud acquisition (0.051 m) according to the BIM Guide for 3D Imaging (General Services Administration); similar results were obtained in the indoor tests, with values of 0.022 m. In this paper, we establish the optimal distances for observations in both the Photogrammetry and 3D Photomodeling modes (outdoor) and point out some working conditions to avoid in indoor environments. Finally, the authors discuss some recommendations for improving the performance and working methods of the device. PMID:27669245

  15. Thermal inactivation of human norovirus surrogates in spinach and measurement of its uncertainty.

    Science.gov (United States)

    Bozkurt, Hayriye; D'souza, Doris H; Davidson, P Michael

    2014-02-01

    Leafy greens, including spinach, have potential for human norovirus transmission through improper handling and/or contact with contaminated water. Inactivation of norovirus prior to consumption is essential to protect public health. Because of the inability to propagate human noroviruses in vitro, murine norovirus (MNV-1) and feline calicivirus (FCV-F9) have been used as surrogates to model human norovirus behavior under laboratory conditions. The objectives of this study were to determine the thermal inactivation kinetics of MNV-1 and FCV-F9 in spinach, compare first-order and Weibull models, and measure the uncertainty associated with the process. D-values were determined for the viruses at 50, 56, 60, 65, and 72 °C in 2-ml vials. The D-values calculated from the first-order model (50 to 72 °C) ranged from 0.16 to 14.57 min for MNV-1 and 0.15 to 17.39 min for FCV-F9. Using the Weibull model, the time for MNV-1 and FCV-F9 to undergo a 1-log reduction (tD ≈ D) at the same temperatures ranged from 0.22 to 15.26 and 0.27 to 20.71 min, respectively. The z-values determined for MNV-1 were 11.66 ± 0.42 °C using the Weibull model and 10.98 ± 0.58 °C using the first-order model; for FCV-F9 they were 10.85 ± 0.67 °C and 9.89 ± 0.79 °C, respectively. There was no difference in D- or z-value between the two models (P > 0.05). The relative uncertainties for the dilution factor, personal counting, and test volume were 0.005, 0.0004, and ca. 0.84%, respectively. The major contribution to the total uncertainty came from the model selected. The total uncertainties for FCV-F9 with the Weibull and first-order models were 3.53 to 7.56% and 11.99 to 21.01%, respectively, and for MNV-1, 3.10 to 7.01% and 13.14 to 16.94%, respectively. Novel and precise information on the thermal inactivation of human norovirus surrogates in spinach was generated, enabling more reliable thermal process calculations to control noroviruses. The results of this study may be useful to the frozen food industry in designing blanching processes for…
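
    The D- and z-value calculations follow the standard log-linear definitions: D is the negative reciprocal slope of log10 survivors versus time at one temperature, and z is the negative reciprocal slope of log10 D versus temperature. A sketch with idealized data (not the study's measurements):

```python
import numpy as np

# Survivor curve at one temperature (idealized: exactly 1 log per 2 min).
time_min = np.array([0.0, 2.0, 4.0, 6.0])
log10_survivors = np.array([6.0, 5.0, 4.0, 3.0])
slope, _ = np.polyfit(time_min, log10_survivors, 1)
D_value = -1.0 / slope                                # min per 1-log reduction

# Thermal death time curve: log10(D) versus temperature across treatments.
temps_C = np.array([50.0, 56.0, 60.0, 65.0, 72.0])
log10_D = np.array([1.16, 0.62, 0.25, -0.20, -0.80])  # illustrative values
z_slope, _ = np.polyfit(temps_C, log10_D, 1)
z_value = -1.0 / z_slope                              # deg C per 10-fold change in D
```

    For the Weibull model, the analogous quantity is the time to a 1-log reduction computed from the fitted scale and shape parameters rather than from a straight-line slope.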

  16. Rational development and validation of a new microbiological assay for linezolid and its measurement uncertainty.

    Science.gov (United States)

    Saviano, Alessandro Morais; Francisco, Fabiane Lacerda; Lourenço, Felipe Rebello

    2014-09-01

    The aim of this work was to develop and validate a new microbiological assay to determine the potency of linezolid in injectable solution. A 2^4 factorial design and a central composite design were used to optimize the microbiological assay conditions. In addition, we estimated the measurement uncertainty based on the residual error of the analysis of variance of inhibition zone diameters. The optimized conditions employed 4 mL of antibiotic No. 1 medium inoculated with 1% of a Staphylococcus aureus suspension, and linezolid in concentrations from 25 to 100 µg mL(-1). The method was specific, linear (Y = 10.03X + 5.00 and Y = 9.20X + 6.53, r(2) = 0.9950 and 0.9987, for the standard and sample curves, respectively), accurate (mean recovery = 102.7%), precise (repeatability = 2.0%; intermediate precision = 1.9%), and robust. The microbiological assay's overall uncertainty (3.1%) was comparable to those obtained for other microbiological assays (1.7-7.1%) and for the determination of linezolid by spectrophotometry (2.1%) and reversed-phase ultra-performance liquid chromatography (RP-UPLC) (2.5%). Therefore, it is an acceptable alternative method for the routine quality control of linezolid in injectable solution.
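
    Potency in an agar-diffusion assay is typically read off the standard curve by inverse prediction. Assuming Y is the inhibition-zone diameter and X is log10 concentration (a common convention, not stated explicitly in the abstract), the reported standard-curve slope and intercept can be used as follows; the sample diameter is hypothetical:

```python
import numpy as np

# Idealized standard responses generated from the reported curve Y = 10.03X + 5.00,
# with X = log10(concentration in ug/mL) over the stated 25-100 ug/mL range.
log_conc = np.log10([25.0, 50.0, 100.0])
diameters = 10.03 * log_conc + 5.00

# Fit the standard curve (recovers slope ~10.03 and intercept ~5.00 here).
a, b = np.polyfit(log_conc, diameters, 1)

# Inverse prediction: read a hypothetical sample's potency off the line.
sample_diameter = 22.0                               # mm, made-up measurement
sample_potency = 10 ** ((sample_diameter - b) / a)   # ug/mL
```

    The uncertainty estimate in the paper then follows from the residual error of the ANOVA on these zone diameters, propagated through this inverse prediction.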

  17. Aggregate Measures of Watershed Health from Reconstructed Water Quality Data with Uncertainty.

    Science.gov (United States)

    Hoque, Yamen M; Tripathi, Shivam; Hantush, Mohamed M; Govindaraju, Rao S

    2016-03-01

    Risk-based measures such as reliability, resilience, and vulnerability (R-R-V) have the potential to serve as watershed health assessment tools. Recent research has demonstrated the applicability of such indices for water quality (WQ) constituents such as total suspended solids and nutrients on an individual basis. However, the calculations can become tedious when time-series data for several WQ constituents have to be evaluated individually. Also, comparisons between locations with different sets of constituent data can prove difficult. In this study, data reconstruction using a relevance vector machine algorithm was combined with dimensionality reduction via variational Bayesian noisy principal component analysis to reconstruct and condense sparse multidimensional WQ data sets into a single time series. The methodology allows incorporation of uncertainty in both the reconstruction and dimensionality-reduction steps. The R-R-V values were calculated using the aggregate time series at multiple locations within two Indiana watersheds. Results showed that uncertainty present in the reconstructed WQ data set propagates to the aggregate time series and subsequently to the aggregate R-R-V values as well. This data-driven approach to calculating aggregate R-R-V values was found to be useful for providing a composite picture of watershed health. Aggregate R-R-V values also enabled comparison between locations with different types of WQ data.
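
    The R-R-V indices named above have simple time-series definitions in the common Hashimoto-style formulation: reliability is the fraction of time in compliance, resilience is the probability of recovering at the next step after a failure, and vulnerability is the mean exceedance magnitude during failures. A sketch on a synthetic aggregate series:

```python
import numpy as np

# Synthetic aggregate WQ time series and a compliance threshold (illustrative).
series = np.array([3.0, 4.0, 6.0, 7.0, 4.0, 8.0, 9.0, 5.0, 4.0, 3.0])
threshold = 5.0                       # compliance means series <= threshold

fail = series > threshold

# Reliability: fraction of time steps in compliance.
reliability = 1.0 - fail.mean()

# Resilience: probability that a failure step is followed by a compliant step.
recoveries = np.sum(fail[:-1] & ~fail[1:])
resilience = recoveries / fail.sum() if fail.any() else 1.0

# Vulnerability: mean exceedance magnitude over the failure steps.
vulnerability = (series[fail] - threshold).mean() if fail.any() else 0.0
```

    In the study, this calculation runs on an ensemble of reconstructed aggregate series, so each index comes with an uncertainty band rather than a single value.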

  18. Quantum Measurements, Stochastic Networks, the Uncertainty Principle, and the Not So Strange “Weak Values”

    Directory of Open Access Journals (Sweden)

    Dmitri Sokolovski

    2016-09-01

    Full Text Available Suppose we make a series of measurements on a chosen quantum system. The outcomes of the measurements form a sequence of random events, which occur in a particular order. The system, together with a meter or meters, can be seen as following the paths of a stochastic network connecting all possible outcomes. The paths are shaped from the virtual paths of the system, and the corresponding probabilities are determined by the measuring devices employed. If the measurements are highly accurate, the virtual paths become “real”, and the mean values of a quantity (a functional) are directly related to the frequencies with which the paths are traveled. If the measurements are highly inaccurate, the mean (weak) values are expressed in terms of the relative probability amplitudes. For pre- and post-selected systems they are bound to take arbitrary values, depending on the chosen transition. This is a direct consequence of the uncertainty principle, which forbids one from distinguishing between interfering alternatives while leaving the interference between them intact.

  19. Single hadron response measurement and calorimeter jet energy scale uncertainty with the ATLAS detector at the LHC

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdelalim, Ahmed Ali; et al. (ATLAS Collaboration)
Giovanni; Crupi, Roberto; Crépé-Renaudin, Sabine; Cuciuc, Constantin-Mihai; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Curatolo, Maria; Curtis, Chris; Cuthbert, Cameron; Cwetanski, Peter; Czirr, Hendrik; Czodrowski, Patrick; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Silva, Paulo Vitor; Da Via, Cinzia; Dabrowski, Wladyslaw; Dai, Tiesheng; Dallapiccola, Carlo; Dam, Mogens; Dameri, Mauro; Damiani, Daniel; Danielsson, Hans Olof; Dannheim, Dominik; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Davey, Will; Davidek, Tomas; Davidson, Nadia; Davidson, Ruth; Davies, Eleanor; Davies, Merlin; Davison, Adam; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Dawson, John; Daya, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Castro Faria Salgado, Pedro; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De La Taille, Christophe; De la Torre, Hector; De Lotto, Barbara; de Mora, Lee; De Nooij, Lucie; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dean, Simon; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dedovich, Dmitri; Degenhardt, James; Dehchar, Mohamed; Del Papa, Carlo; Del Peso, Jose; Del Prete, Tarcisio; Delemontex, Thomas; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delruelle, Nicolas; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demirkoz, Bilge; Deng, Jianrong; Denisov, Sergey; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Devetak, Erik; Deviveiros, Pier-Olivier; Dewhurst, Alastair; DeWilde, Burton; Dhaliwal, Saminder; Dhullipudi, Ramasudhakar; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Luise, Silvestro; Di Mattia, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, 
Riccardo; Diaz, Marco Aurelio; Diblen, Faruk; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dindar Yagci, Kamile; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; do Vale, Maria Aline Barros; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobbs, Matt; Dobinson, Robert; Dobos, Daniel; Dobson, Ellie; Dobson, Marc; Dodd, Jeremy; Doglioni, Caterina; Doherty, Tom; Doi, Yoshikuni; Dolejsi, Jiri; Dolenc, Irena; Dolezal, Zdenek; Dolgoshein, Boris; Dohmae, Takeshi; Donadelli, Marisilvia; Donega, Mauro; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dos Anjos, Andre; Dosil, Mireia; Dotti, Andrea; Dova, Maria-Teresa; Dowell, John; Doxiadis, Alexander; Doyle, Tony; Drasal, Zbynek; Drees, Jürgen; Dressnandt, Nandor; Drevermann, Hans; Driouichi, Chafik; Dris, Manolis; Dubbert, Jörg; Dube, Sourabh; Duchovni, Ehud; Duckeck, Guenter; Dudarev, Alexey; Dudziak, Fanny; Dührssen, Michael; Duerdoth, Ian; Duflot, Laurent; Dufour, Marc-Andre; Dunford, Monica; Duran Yildiz, Hatice; Duxfield, Robert; Dwuznik, Michal; Dydak, Friedrich; Düren, Michael; Ebenstein, William; Ebke, Johannes; Eckweiler, Sebastian; Edmonds, Keith; Edwards, Clive; Edwards, Nicholas Charles; Ehrenfeld, Wolfgang; Ehrich, Thies; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Eisenhandler, Eric; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Katherine; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Engelmann, Roderich; Engl, Albert; Epp, Brigitte; Eppig, Andrew; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Escobar, Carlos; Espinal Curull, Xavier; Esposito, Bellisario; Etienne, Francois; Etienvre, Anne-Isabelle; Etzion, Erez; Evangelakou, Despoina; Evans, Hal; Fabbri, Laura; Fabre, Caroline; Fakhrutdinov, Rinat; Falciano, Speranza; Fang, Yaquan; Fanti, 
Marcello; Farbin, Amir; Farilla, Addolorata; Farley, Jason; Farooque, Trisha; Farrington, Sinead; Farthouat, Philippe; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Favareto, Andrea; Fayard, Louis; Fazio, Salvatore; Febbraro, Renato; Federic, Pavol; Fedin, Oleg; Fedorko, Woiciech; Fehling-Kaschek, Mirjam; Feligioni, Lorenzo; Fellmann, Denis; Feng, Cunfeng; Feng, Eric; Fenyuk, Alexander; Ferencei, Jozef; Ferland, Jonathan; Fernando, Waruna; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrer, Maria Lorenza; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filippas, Anastasios; Filthaut, Frank; Fincke-Keeler, Margret; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Gordon; Fischer, Peter; Fisher, Matthew; Flechl, Martin; Fleck, Ivor; Fleckner, Johanna; Fleischmann, Philipp; Fleischmann, Sebastian; Flick, Tobias; Floderus, Anders; Flores Castillo, Luis; Flowerdew, Michael; Fokitis, Manolis; Fonseca Martin, Teresa; Forbush, David Alan; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Foster, Joe; Fournier, Daniel; Foussat, Arnaud; Fowler, Andrew; Fowler, Ken; Fox, Harald; Francavilla, Paolo; Franchino, Silvia; Francis, David; Frank, Tal; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; French, Sky; Friedrich, Felix; Froeschl, Robert; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Gallas, Elizabeth; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Gan, KK; Gao, Yongsheng; Gapienko, Vladimir; Gaponenko, Andrei; Garberson, Ford; Garcia-Sciveres, Maurice; García, Carmen; García Navarro, José Enrique; Gardner, Robert; Garelli, Nicoletta; Garitaonandia, Hegoi; Garonne, 
Vincent; Garvey, John; Gatti, Claudio; Gaudio, Gabriella; Gaur, Bakul; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gayde, Jean-Christophe; Gazis, Evangelos; Ge, Peng; Gee, Norman; Geerts, Daniël Alphonsus Adrianus; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Gemmell, Alistair; Genest, Marie-Hélène; Gentile, Simonetta; George, Matthias; George, Simon; Gerlach, Peter; Gershon, Avi; Geweniger, Christoph; Ghazlane, Hamid; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giakoumopoulou, Victoria; Giangiobbe, Vincent; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Adam; Gibson, Stephen; Gilbert, Laura; Gilewsky, Valentin; Gillberg, Dag; Gillman, Tony; Gingrich, Douglas; Ginzburg, Jonatan; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Giugni, Danilo; Giunta, Michele; Giusti, Paolo; Gjelsten, Børge Kile; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glazov, Alexandre; Glitza, Karl-Walter; Glonti, George; Goddard, Jack Robert; Godfrey, Jennifer; Godlewski, Jan; Goebel, Martin; Göpfert, Thomas; Goeringer, Christian; Gössling, Claus; Göttfert, Tobias; Goldfarb, Steven; Golling, Tobias; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; Gonidec, Allain; Gonzalez, Saul; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goodson, Jeremiah Jet; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorfine, Grant; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Gorokhov, Serguei; Goryachev, Vladimir; Gosdzik, Bjoern; Gosselink, Martijn; Gostkin, Mikhail Ivanovitch; Gough Eschrich, Ivo; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Gozpinar, Serdar; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Grancagnolo, Francesco; Grancagnolo, 
Sergio; Grassi, Valerio; Gratchev, Vadim; Grau, Nathan; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Grebenyuk, Oleg; Greenshaw, Timothy; Greenwood, Zeno Dixon; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Griffiths, Justin; Grigalashvili, Nugzar; Grillo, Alexander; Grinstein, Sebastian; Grishkevich, Yaroslav; Grivaz, Jean-Francois; Groh, Manfred; Gross, Eilam; Grosse-Knetter, Joern; Groth-Jensen, Jacob; Grybel, Kai; Guarino, Victor; Guest, Daniel; Guicheney, Christophe; Guida, Angelo; Guindon, Stefan; Guler, Hulya; Gunther, Jaroslav; Guo, Bin; Guo, Jun; Gupta, Ambreesh; Gusakov, Yury; Gushchin, Vladimir; Gutierrez, Phillip; Guttman, Nir; Gutzwiller, Olivier; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hackenburg, Robert; Hadavand, Haleh Khani; Hadley, David; Haefner, Petra; Hahn, Ferdinand; Haider, Stefan; Hajduk, Zbigniew; Hakobyan, Hrachya; Hall, David; Haller, Johannes; Hamacher, Klaus; Hamal, Petr; Hamer, Matthias; Hamilton, Andrew; Hamilton, Samuel; Han, Hongguang; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Handel, Carsten; Hanke, Paul; Hansen, John Renner; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Peter Henrik; Hansson, Per; Hara, Kazuhiko; Hare, Gabriel; Harenberg, Torsten; Harkusha, Siarhei; Harper, Devin; Harrington, Robert; Harris, Orin; Harrison, Karl; Hartert, Jochen; Hartjes, Fred; Haruyama, Tomiyoshi; Harvey, Alex; Hasegawa, Satoshi; Hasegawa, Yoji; Hassani, Samira; Hatch, Mark; Hauff, Dieter; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawes, Brian; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Anthony David; Hawkins, Donovan; Hayakawa, Takashi; Hayashi, Takayasu; Hayden, Daniel; Hayward, Helen; Haywood, Stephen; Hazen, Eric; He, Mao; Head, Simon; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heinemann, Beate; Heisterkamp, Simon; Helary, Louis; Heller, Claudio; Heller, Matthieu; Hellman, Sten; Hellmich, Dennis; Helsens, 
Clement; Henderson, Robert; Henke, Michael; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Henry-Couannier, Frédéric; Hensel, Carsten; Henß, Tobias; Hernandez, Carlos Medina; Hernández Jiménez, Yesenia; Herrberg, Ruth; Hershenhorn, Alon David; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Higón-Rodriguez, Emilio; Hill, Daniel; Hill, John; Hill, Norman; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirsch, Florian; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hohlfeld, Marc; Holder, Martin; Holmgren, Sven-Olof; Holy, Tomas; Holzbauer, Jenny; Homma, Yasuhiro; Hong, Tae Min; Hooft van Huysduynen, Loek; Horazdovsky, Tomas; Horn, Claus; Horner, Stephan; Hostachy, Jean-Yves; Hou, Suen; Houlden, Michael; Hoummada, Abdeslam; Howarth, James; Howell, David; Hristova, Ivana; Hrivnac, Julius; Hruska, Ivan; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Huang, Guang Shun; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huettmann, Antje; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Hughes-Jones, Richard; Huhtinen, Mika; Hurst, Peter; Hurwitz, Martina; Husemann, Ulrich; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibbotson, Michael; Ibragimov, Iskander; Ichimiya, Ryo; Iconomidou-Fayard, Lydia; Idarraga, John; Iengo, Paolo; Igonkina, Olga; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Yuri; Iliadis, Dimitrios; Ilic, Nikolina; Imori, Masatoshi; Ince, Tayfun; Inigo-Golfin, Joaquin; Ioannou, Pavlos; Iodice, Mauro; Ippolito, Valerio; Irles Quiles, Adrian; Isaksson, Charlie; Ishikawa, Akimasa; Ishino, Masaya; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Paul; Jaekel, 
Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, Sune; Jakubek, Jan; Jana, Dilip; Jankowski, Ernest; Jansen, Eric; Jansen, Hendrik; Jantsch, Andreas; Janus, Michel; Jarlskog, Göran; Jeanty, Laura; Jelen, Kazimierz; Jen-La Plante, Imai; Jenni, Peter; Jeremie, Andrea; Jež, Pavel; Jézéquel, Stéphane; Jha, Manoj Kumar; Ji, Haoshuang; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Ge; Jin, Shan; Jinnouchi, Osamu; Joergensen, Morten Dam; Joffe, David; Johansen, Lars; Johansen, Marianne; Johansson, Erik; Johansson, Per; Johnert, Sebastian; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tegid; Jones, Tim; Jonsson, Ove; Joram, Christian; Jorge, Pedro; Joseph, John; Jovicevic, Jelena; Jovin, Tatjana; Ju, Xiangyang; Jung, Christian; Jungst, Ralph Markus; Juranek, Vojtech; Jussel, Patrick; Juste Rozas, Aurelio; Kabachenko, Vasily; Kabana, Sonja; Kaci, Mohammed; Kaczmarska, Anna; Kadlecik, Peter; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kaiser, Steffen; Kajomovitz, Enrique; Kalinin, Sergey; Kalinovskaya, Lidia; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kaneti, Steven; Kanno, Takayuki; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kaplon, Jan; Kar, Deepak; Karagoz, Muge; Karnevskiy, Mikhail; Karr, Kristo; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasieczka, Gregor; Kasmi, Azzedine; Kass, Richard; Kastanas, Alex; Kataoka, Mayuko; Kataoka, Yousuke; Katsoufis, Elias; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kayl, Manuel; Kazanin, Vassili; Kazarinov, Makhail; Keeler, Richard; Kehoe, Robert; Keil, Markus; Kekelidze, George; Kennedy, John; Kenney, Christopher John; Kenyon, Mike; Kepka, Oldrich; Kerschen, Nicolas; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Keung, Justin; Khakzad, Mohsen; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Kholodenko, Anatoli; Khomich, Andrei; Khoo, Teng 
Jian; Khoriauli, Gia; Khoroshilov, Andrey; Khovanskiy, Nikolai; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hyeon Jin; Kim, Min Suk; Kim, Shinhong; Kimura, Naoki; Kind, Oliver; King, Barry; King, Matthew; King, Robert Steven Beaufoy; Kirk, Julie; Kirsch, Lawrence; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kittelmann, Thomas; Kiver, Andrey; Kladiva, Eduard; Klaiber-Lodewigs, Jonas; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klemetti, Miika; Klier, Amit; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klinger, Joel Alexander; Klinkby, Esben; Klioutchnikova, Tatiana; Klok, Peter; Klous, Sander; Kluge, Eike-Erik; Kluge, Thomas; Kluit, Peter; Kluth, Stefan; Knecht, Neil; Kneringer, Emmerich; Knobloch, Juergen; Knoops, Edith; Knue, Andrea; Ko, Byeong Rok; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Köneke, Karsten; König, Adriaan; Koenig, Sebastian; Köpke, Lutz; Koetsveld, Folkert; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kogan, Lucy Anne; Kohn, Fabian; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kokott, Thomas; Kolachev, Guennady; Kolanoski, Hermann; Kolesnikov, Vladimir; Koletsou, Iro; Koll, James; Kollefrath, Michael; Kolya, Scott; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Kono, Takanori; Kononov, Anatoly; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kootz, Andreas; Koperny, Stefan; Korcyl, Krzysztof; Kordas, Kostantinos; Koreshev, Victor; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostyukhin, Vadim; Kotamäki, Miikka Juhani; Kotov, Sergey; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, James; Kraus, Jana; Kreisel, Arik; Krejci, Frantisek; Kretzschmar, Jan; Krieger, Nina; 
Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Kruker, Tobias; Krumnack, Nils; Krumshteyn, Zinovii; Kruth, Andre; Kubota, Takashi; Kuday, Sinan; Kuehn, Susanne; Kugel, Andreas; Kuhl, Thorsten; Kuhn, Dietmar; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kummer, Christian; Kuna, Marine; Kundu, Nikhil; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurata, Masakazu; Kurochkin, Yurii; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; Kwee, Regina; La Rosa, Alessandro; La Rotonda, Laura; Labarga, Luis; Labbe, Julien; Lablak, Said; Lacasta, Carlos; Lacava, Francesco; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Laisne, Emmanuel; Lamanna, Massimo; Lampen, Caleb; Lampl, Walter; Lancon, Eric; Landgraf, Ulrich; Landon, Murrough; Lane, Jenna; Lange, Clemens; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larionov, Anatoly; Larner, Aimee; Lasseur, Christian; Lassnig, Mario; Laurelli, Paolo; Lavorini, Vincenzo; Lavrijsen, Wim; Laycock, Paul; Lazarev, Alexandre; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Maner, Christophe; Le Menedeu, Eve; Lebel, Céline; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Michel; Legendre, Marie; Leger, Annie; LeGeyt, Benjamin; Legger, Federica; Leggett, Charles; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Leltchouk, Mikhail; Lemmer, Boris; Lendermann, Victor; Leney, Katharine; Lenz, Tatiana; Lenzen, Georg; Lenzi, Bruno; Leonhardt, Kathrin; Leontsinis, Stefanos; Leroy, Claude; Lessard, Jean-Raphael; Lesser, Jonas; Lester, Christopher; Leung Fook Cheong, Annabelle; Levêque, Jessica; Levin, Daniel; 
Levinson, Lorne; Levitski, Mikhail; Lewis, Adrian; Lewis, George; Leyko, Agnieszka; Leyton, Michael; Li, Bo; Li, Haifeng; Li, Shu; Li, Xuefei; Liang, Zhijun; Liao, Hongbo; Liberti, Barbara; Lichard, Peter; Lichtnecker, Markus; Lie, Ki; Liebig, Wolfgang; Lifshitz, Ronen; Lilley, Joseph; Limbach, Christian; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linde, Frank; Linnemann, James; Lipeles, Elliot; Lipinsky, Lukas; Lipniacka, Anna; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Chuanlei; Liu, Dong; Liu, Hao; Liu, Jianbei; Liu, Minghui; Liu, Yanwen; Livan, Michele; Livermore, Sarah; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loddenkoetter, Thomas; Loebinger, Fred; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Loken, James; Lombardo, Vincenzo Paolo; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Loscutoff, Peter; Lo Sterzo, Francesco; Losty, Michael; Lou, Xinchou; Lounis, Abdenour; Loureiro, Karina; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Ludwig, Andreas; Ludwig, Dörthe; Ludwig, Inga; Ludwig, Jens; Luehring, Frederick; Luijckx, Guy; Lumb, Debra; Luminari, Lamberto; Lund, Esben; Lund-Jensen, Bengt; Lundberg, Björn; Lundberg, Johan; Lundquist, Johan; Lungwitz, Matthias; Lutz, Gerhard; Lynn, David; Lys, Jeremy; Lytken, Else; Ma, Hong; Ma, Lian Liang; Macana Goia, Jorge Andres; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Mackeprang, Rasmus; Madaras, Ronald; Mader, Wolfgang; Maenner, Reinhard; Maeno, Tadashi; Mättig, Peter; Mättig, Stefan; Magnoni, Luca; Magradze, Erekle; Mahalalel, Yair; Mahboubi, Kambiz; Mahout, Gilles; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Mal, Prolay; Malaescu, Bogdan; Malecki, Pawel; Malecki, Piotr; 
Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mameghani, Raphael; Mamuzic, Judita; Manabe, Atsushi; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Mangeard, Pierre-Simon; Manhaes de Andrade Filho, Luciano; Manjavidze, Ioseb; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Manz, Andreas; Mapelli, Alessandro; Mapelli, Livio; March, Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marin, Alexandru; Marino, Christopher; Marroquim, Fernando; Marshall, Robin; Marshall, Zach; Martens, Kalen; Marti-Garcia, Salvador; Martin, Andrew; Martin, Brian; Martin, Brian; Martin, Franck Francois; Martin, Jean-Pierre; Martin, Philippe; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martin-Haugh, Stewart; Martinez, Mario; Martinez Outschoorn, Verena; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massaro, Graziano; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mathes, Markus; Matricon, Pierre; Matsumoto, Hiroshi; Matsunaga, Hiroyuki; Matsushita, Takashi; Mattravers, Carly; Maugain, Jean-Marie; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; May, Edward; Mayne, Anna; Mazini, Rachid; Mazur, Michael; Mazzanti, Marcello; Mazzoni, Enrico; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McCubbin, Norman; McFarlane, Kenneth; Mcfayden, Josh; McGlone, Helen; Mchedlidze, Gvantsa; McLaren, Robert Andrew; Mclaughlan, Tom; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Mechtel, Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehdiyev, Rashid; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Mendoza 
Navas, Luis; Meng, Zhaoxia; Mengarelli, Alberto; Menke, Sven; Menot, Claude; Meoni, Evelin; Mercurio, Kevin Michael; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Merritt, Hayes; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Meyer, W Thomas; Miao, Jiayuan; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Miller, Robert; Mills, Bill; Mills, Corrinne; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano Moya, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Miralles Verge, Lluis; Misiejuk, Andrzej; Mitrevski, Jovan; Mitrofanov, Gennady; Mitsou, Vasiliki A; Mitsui, Shingo; Miyagawa, Paul; Miyazaki, Kazuki; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mockett, Paul; Moed, Shulamit; Moeller, Victoria; Mönig, Klaus; Möser, Nicolas; Mohapatra, Soumya; Mohr, Wolfgang; Mohrdieck-Möck, Susanne; Moisseev, Artemy; Moles-Valls, Regina; Molina-Perez, Jorge; Monk, James; Monnier, Emmanuel; Montesano, Simone; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Moorhead, Gareth; Mora Herrera, Clemencia; Moraes, Arthur; Morange, Nicolas; Morel, Julien; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morgenstern, Marcus; Morii, Masahiro; Morin, Jerome; Morley, Anthony Keith; Mornacchi, Giuseppe; Morozov, Sergey; Morris, John; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Mudrinic, Mihajlo; Mueller, Felix; Mueller, James; Mueller, Klemens; Müller, Thomas; Mueller, Timo; Muenstermann, Daniel; Muir, Alex; Munwes, Yonathan; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Nadal, Jordi; Nagai, Koichi; Nagano, Kunihiro; 
Nagarkar, Advait; Nagasaka, Yasushi; Nagel, Martin; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Nanava, Gizo; Napier, Austin; Narayan, Rohin; Nash, Michael; Nation, Nigel; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Neal, Homer; Nebot, Eduardo; Nechaeva, Polina; Neep, Thomas James; Negri, Andrea; Negri, Guido; Nektarijevic, Snezana; Nelson, Andrew; Nelson, Silke; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen Thi Hong, Van; Nickerson, Richard; Nicolaidou, Rosy; Nicolas, Ludovic; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Niinikoski, Tapio; Nikiforou, Nikiforos; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolaev, Kirill; Nikolic-Audit, Irena; Nikolics, Katalin; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nishiyama, Tomonori; Nisius, Richard; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nordberg, Markus; Nordkvist, Bjoern; Norton, Peter; Novakova, Jana; Nozaki, Mitsuaki; Nozka, Libor; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; O'Brien, Brendan Joseph; O'Neale, Steve; O'Neil, Dugan; O'Shea, Val; Oakes, Louise Beth; Oakham, Gerald; Oberlack, Horst; Ocariz, Jose; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Ohsugi, Takashi; Okada, Shogo; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Olcese, Marco; Olchevski, Alexander; Olivares Pino, Sebastian Andres; Oliveira, Miguel Alfonso; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Omachi, Chihiro; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlov, 
Iliya; Oropeza Barrera, Cristina; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Otero y Garzon, Gustavo; Ottersbach, John; Ouchrif, Mohamed; Ouellette, Eric; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Ovcharova, Ana; Owen, Mark; Owen, Simon; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagan Griso, Simone; Paganis, Efstathios; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Paleari, Chiara; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panes, Boris; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Panuskova, Monika; Paolone, Vittorio; Papadelis, Aras; Papadopoulou, Theodora; Paramonov, Alexander; Park, Woochun; Parker, Andy; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passaggio, Stefano; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pecsy, Martin; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Peng, Haiping; Pengo, Ruggero; Penning, Bjoern; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Perez Cavalcanti, Tiago; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Perrodo, Pascal; Persembe, Seda; Perus, Antoine; Peshekhonov, Vladimir; Peters, Krisztian; Petersen, Brian; Petersen, Jorgen; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Peter William; Piacquadio, Giacinto; Piccaro, Elisa; Piccinini, Maurizio; Piec, Sebastian Marcin; Piegaia, Ricardo; Pignotti, David; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Ping, Jialun; Pinto, Belmiro; Pirotte, Olivier; Pizio, Caterina; Placakyte, 
Ringaile; Plamondon, Mathieu; Pleier, Marc-Andre; Pleskach, Anatoly; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poggioli, Luc; Poghosyan, Tatevik; Pohl, Martin; Polci, Francesco; Polesello, Giacomo; Policicchio, Antonio; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomarede, Daniel Marc; Pomeroy, Daniel; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Portell Bueso, Xavier; Posch, Christoph; Pospelov, Guennady; Pospisil, Stanislav; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Pozdnyakov, Valery; Prabhu, Robindra; Pralavorio, Pascal; Pranko, Aliaksandr; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Pretzl, Klaus Peter; Pribyl, Lukas; Price, Darren; Price, Joe; Price, Lawrence; Price, Michael John; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przybycien, Mariusz; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Pueschel, Elisa; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qian, Jianming; Qian, Zuxuan; Qin, Zhonghua; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Radescu, Voica; Radics, Balint; Radloff, Peter; Rador, Tonguc; Ragusa, Francesco; Rahal, Ghita; Rahimi, Amir; Rahm, David; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Randle-Conde, Aidan Sean; Randrianarivony, Koloina; Ratoff, Peter; Rauscher, Felix; Rave, Tobias Christian; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reichold, Armin; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Rembser, Christoph; Ren, Zhongliang; Renaud, Adrien; Renkel, Peter; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; Rezvani, Reyhaneh; Richards, Alexander; Richter, Robert; Richter-Was, Elzbieta; Ridel, 
Melissa; Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rivoltella, Giancesare; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robinson, Mary; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodriguez, Diego; Roe, Adam; Roe, Shaun; Røhne, Ole; Rojo, Victoria; Rolli, Simona; Romaniouk, Anatoli; Romano, Marino; Romanov, Victor; Romeo, Gaston; Romero Adam, Elena; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Anthony; Rose, Matthew; Rosenbaum, Gabriel; Rosenberg, Eli; Rosendahl, Peter Lundgaard; Rosenthal, Oliver; Rosselet, Laurent; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Rubinskiy, Igor; Ruckert, Benjamin; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Christian; Rudolph, Gerald; Rühr, Frederik; Ruggieri, Federico; Ruiz-Martinez, Aranzazu; Rumiantsev, Viktor; Rumyantsev, Leonid; Runge, Kay; Rurikova, Zuzana; Rusakovich, Nikolai; Rust, Dave; Rutherfoord, John; Ruwiedel, Christoph; Ruzicka, Pavel; Ryabov, Yury; Ryadovikov, Vasily; Ryan, Patrick; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Rzaeva, Sevda; Saavedra, Aldo; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Björn Hallvard; Sanchez, Arturo; Sanchez Martinez, Victoria; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Tanya; Sandoval, Carlos; Sandstroem, Rikard; Sandvoss, Stephan; Sankey, Dave; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; 
Santonico, Rinaldo; Santos, Helena; Saraiva, João; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Takashi; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Sauvan, Emmanuel; Sauvan, Jean-Baptiste; Savard, Pierre; Savinov, Vladimir; Savu, Dan Octavian; Sawyer, Lee; Saxon, David; Says, Louis-Pierre; Sbarra, Carla; Sbrizzi, Antonio; Scallon, Olivia; Scannicchio, Diana; Scarcella, Mark; Schaarschmidt, Jana; Schacht, Peter; Schäfer, Uli; Schaepe, Steffen; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R~Dean; Schamov, Andrey; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schlereth, James; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitt, Sebastian; Schmitz, Martin; Schöning, André; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schroeder, Christian; Schroer, Nicolai; Schuh, Silvia; Schuler, Georges; Schultens, Martin Johannes; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Jan; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Schwindt, Thomas; Schwoerer, Maud; Scott, Bill; Searcy, Jacob; Sedov, George; Sedykh, Evgeny; Segura, Ester; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Selbach, Karoline Elfriede; Seliverstov, Dmitry; Sellden, Bjoern; Sellers, Graham; Seman, Michal; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Seuster, Rolf; Severini, Horst; Sevior, Martin; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaver, Leif; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shibata, Akira; Shichi, Hideharu; Shimizu, Shima; 
Shimojima, Makoto; Shin, Taeksu; Shiyakova, Maria; Shmeleva, Alevtina; Shochet, Mel; Short, Daniel; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silbert, Ohad; Silva, José; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simmons, Brinick; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sircar, Anirvan; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skinnari, Louise Anastasia; Skottowe, Hugh Philip; Skovpen, Kirill; Skubic, Patrick; Skvorodnev, Nikolai; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Sloper, John erik; Smakhtin, Vladimir; Smart, Ben; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; Snuverink, Jochem; Snyder, Scott; Soares, Mara; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Soni, Nitesh; Sopko, Vit; Sopko, Bruno; Sosebee, Mark; Soualah, Rachik; Soukharev, Andrey; Spagnolo, Stefania; Spanò, Francesco; Spighi, Roberto; Spigo, Giancarlo; Spila, Federico; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St Denis, Richard Dante; Stahlman, Jonathan; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staude, Arnold; Stavina, Pavel; Stavropoulos, Georgios; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stern, Sebastian; Stevenson, Kyle; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, 
Stefan; Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strang, Michael; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Strong, John; Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Stumer, Iuliu; Stupak, John; Sturm, Philipp; Styles, Nicholas Adam; Soh, Dart-yin; Su, Dong; Subramania, Halasya Siva; Succurro, Antonella; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suita, Koichi; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Yu; Suzuki, Yuta; Svatos, Michal; Sviridov, Yuri; Swedish, Stephen; Sykora, Ivan; Sykora, Tomas; Szeless, Balazs; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tanaka, Yoshito; Tanasijczuk, Andres Jorge; Tani, Kazutoshi; Tannoury, Nancy; Tappern, Geoffrey; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Tayalati, Yahya; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teinturier, Marthe; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; Terashi, Koji; Terron, Juan; Testa, Marianna; Teuscher, Richard; Thadome, Jocelyn; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Thun, Rudolf; Tian, Feng; Tibbetts, Mark James; Tic, Tomáš; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, 
Sergey; Tipton, Paul; Tique Aires Viegas, Florbela De Jes; Tisserant, Sylvain; Tobias, Jürgen; Toczek, Barbara; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokunaga, Kaoru; Tokushuku, Katsuo; Tollefson, Kirsten; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Guoliang; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torchiani, Ingo; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Trischuk, William; Trivedi, Arjun; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trzebinski, Maciej; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tudorache, Alexandra; Tudorache, Valentina; Tuggle, Joseph; Turala, Michal; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Turra, Ruggero; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Tzanakos, George; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Underwood, David; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valenta, Jan; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; van der Graaf, Harry; van der Kraaij, Erik; Van Der Leeuw, Robin; van der Poel, Egge; van der Ster, Daniel; van Eldik, Niels; van Gemmeren, Peter; van Kesteren, Zdenko; van Vulpen, Ivo; Vanadia, Marco; Vandelli, Wainer; Vandoni, Giovanna; Vaniachine, Alexandre; Vankov, Peter; 
Vannucci, Francois; Varela Rodriguez, Fernando; Vari, Riccardo; Varnes, Erich; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vassilakopoulos, Vassilios; Vazeille, Francois; Vegni, Guido; Veillet, Jean-Jacques; Vellidis, Constantine; Veloso, Filipe; Veness, Raymond; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Virchaux, Marc; Virzi, Joseph; Vitells, Ofer; Viti, Michele; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vladoiu, Dan; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; Volpini, Giovanni; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorobiev, Alexander; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Wolfgang; Wagner, Peter; Wahlen, Helmut; Wakabayashi, Jun; Walbersloh, Jorg; Walch, Shannon; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Wang, Chiho; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Joshua C; Wang, Rui; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Watkins, Peter; Watson, Alan; Watson, Ian; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Marc; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weigell, Philipp; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wen, Mei; Wenaus, Torre; Wendland, Dennis; Wendler, Shanti; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wermes, 
Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Wessels, Martin; Weydert, Carole; Whalen, Kathleen; Wheeler-Ellis, Sarah Jane; Whitaker, Scott; White, Andrew; White, Martin; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilhelm, Ivan; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Eric; Williams, Hugh; Willis, William; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wong, Wei-Cheng; Wooden, Gemma; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wraight, Kenneth; Wright, Catherine; Wright, Michael; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wunstorf, Renate; Wynne, Benjamin; Xella, Stefania; Xiao, Meng; Xie, Song; Xie, Yigang; Xu, Chao; Xu, Da; Xu, Guofa; Yabsley, Bruce; Yacoob, Sahal; Yamada, Miho; Yamaguchi, Hiroshi; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamanaka, Takashi; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Yi; Yang, Yi; Yang, Zhaoyu; Yanush, Serguei; Yao, Yushu; Yasu, Yoshiji; Ybeles Smit, Gabriel Valentijn; Ye, Jingbo; Ye, Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Young, Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Zabinski, Bartlomiej; Zaets, Vassilli; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zanello, Lucia; Zarzhitsky, Pavel; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zeman, Martin; Zemla, Andrzej; Zendler, Carolin; Zenin, Oleg; Ženiš, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhan, Zhichao; Zhang, 
Dongliang; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zheng, Shuchen; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zieminska, Daria; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Zitoun, Robert; Živković, Lidija; Zmouchko, Viatcheslav; Zobernig, Georg; Zoccoli, Antonio; Zolnierowski, Yves; Zsenei, Andras; zur Nedden, Martin; Zutshi, Vishnu; Zwalinski, Lukasz

    2013-01-01

The uncertainty on the calorimeter energy response to jets of particles is derived for the ATLAS experiment at the Large Hadron Collider (LHC). First, the calorimeter response to single isolated charged hadrons is measured and compared to the Monte Carlo simulation using proton-proton collisions at centre-of-mass energies of $\sqrt{s}$ = 900 GeV and 7 TeV collected during 2009 and 2010. Then, using the decay of K_s and Lambda particles, the calorimeter response to specific types of particles (positively and negatively charged pions, protons, and anti-protons) is measured and compared to the Monte Carlo predictions. Finally, the jet energy scale uncertainty is determined by propagating the response uncertainty for single charged and neutral particles to jets. The response uncertainty is 2-5% for central isolated hadrons and 1-3% for the final calorimeter jet energy scale.

  20. Uncertainty from sampling in measurements of aflatoxins in animal feedingstuffs: application of the Eurachem/CITAC guidelines.

    Science.gov (United States)

    Reiter, Elisabeth Viktoria; Dutton, Mike Francis; Agus, Ali; Nordkvist, Erik; Mwanza, Mulunda Feza; Njobeh, Patrick Berka; Prawano, Deni; Häggblom, Per; Razzazi-Fazeli, Ebrahim; Zentek, Jürgen; Andersson, Mats Gunnar

    2011-10-07

The duplicate method for estimating uncertainty from measurement including sampling is presented in the Eurachem/CITAC guide. The applicability of this method as a tool for verifying sampling plans for mycotoxins was assessed in three case studies with aflatoxin B1 in animal feedingstuffs. Aspects considered included strategies for obtaining samples from contaminated lots, assumptions about distributions, approaches for statistical analysis, log10-transformation of test data and applicability of uncertainty estimates. The results showed that when duplicate aggregate samples are formed by interpenetrating sampling, repeated measurements from a lot can be assumed to approximately follow a normal or lognormal distribution. Due to the large variation in toxin concentration between sampling targets and the sometimes very large uncertainty arising from sampling and sample preparation (U_rel ≥ 50%), estimation of uncertainty from log10-transformed data was found to be a more generally applicable approach than application of robust ANOVA.
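The duplicate-method estimate on log10-transformed data can be sketched in a few lines. This is a minimal illustration of the design described above, assuming duplicate samples with a single analysis each; the concentrations are invented for the example:

```python
import math

# Hypothetical duplicate measurements (aflatoxin B1, µg/kg) from 8
# sampling targets; values are illustrative, not from the study.
duplicates = [
    (2.1, 2.9), (5.4, 3.8), (1.2, 1.6), (9.5, 12.0),
    (0.8, 1.1), (3.3, 2.5), (6.7, 8.9), (4.4, 5.0),
]

def duplicate_method_log10(pairs):
    """Estimate the measurement (sampling + analysis) standard
    uncertainty on the log10 scale from duplicate samples, following
    the Eurachem/CITAC duplicate-method design with one analysis per
    sample."""
    logs = [(math.log10(a), math.log10(b)) for a, b in pairs]
    # paired-difference estimate: s^2 = sum(d_i^2) / (2 n)
    s2 = sum((a - b) ** 2 for a, b in logs) / (2 * len(logs))
    s = math.sqrt(s2)
    # expanded uncertainty (k = 2) expressed as a multiplicative factor
    factor = 10 ** (2 * s)
    return s, factor

s_log, factor = duplicate_method_log10(duplicates)
print(f"s(log10) = {s_log:.3f}, expanded factor = {factor:.2f}")
```

On the log10 scale the expanded uncertainty becomes a multiplicative factor, which is why the log-transformed analysis copes better with the skewed, high-variation data described above.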

  1. Uncertainties in turbidity-based measurements of suspended sediment load used to quantify the sediment budget on the catchment scale

    Science.gov (United States)

    de Hipt, Felix Op; Diekkrüger, Bernd; Steup, Gero; Rode, Michael

    2016-04-01

Water-driven soil erosion, transport and deposition take place on different spatial and temporal scales. Therefore, related measurements are complex and require process understanding and a multi-method approach combining different measurement methods with soil erosion modeling. Turbidity as a surrogate measurement for suspended sediment concentration (SSC) in rivers is frequently used to overcome the disadvantages of conventional sediment measurement techniques regarding temporal resolution and continuity. The use of turbidity measurements requires a close correlation between turbidity and SSC. Depending on the number of samples collected, the measured range and the variations in the measurements, SSC-turbidity curves are subject to uncertainty. This uncertainty has to be determined in order to assess the reliability of measurements used to quantify catchment sediment yields and to calibrate soil erosion models. This study presents the calibration results from a sub-humid catchment in south-western Burkina Faso and investigates the related uncertainties. Daily in situ measurements of SSC manually collected at one turbidity station and the corresponding turbidity readings are used to obtain the site-specific calibration curve. The discharge is calculated based on an empirical water level-discharge relationship. The derived regression equations are used to define prediction intervals for SSC and discharge. The uncertainty of the suspended sediment load time series is influenced by the corresponding uncertainties of SSC and discharge. This study shows that the determination of uncertainty is relevant when turbidity-based measurements of suspended sediment loads are used to quantify catchment erosion and to calibrate erosion models.
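A turbidity-SSC rating curve and its prediction interval, of the kind used here to bound SSC uncertainty, can be sketched as follows. The calibration pairs and the t-value are illustrative assumptions, not data from the Burkina Faso station:

```python
import numpy as np

# Illustrative turbidity (NTU) and SSC (mg/L) calibration pairs
turbidity = np.array([10, 25, 50, 80, 120, 200, 310, 450], dtype=float)
ssc = np.array([18, 40, 85, 130, 210, 330, 520, 760], dtype=float)

# Ordinary least-squares fit: SSC = a + b * turbidity
X = np.column_stack([np.ones_like(turbidity), turbidity])
beta, *_ = np.linalg.lstsq(X, ssc, rcond=None)
resid = ssc - X @ beta
n, p = X.shape
s2 = resid @ resid / (n - p)              # residual variance

def prediction_interval(x_new, t=2.447):  # t for 95 %, 6 d.o.f.
    """95 % prediction interval for one new SSC value at x_new."""
    x_vec = np.array([1.0, x_new])
    leverage = x_vec @ np.linalg.inv(X.T @ X) @ x_vec
    half = t * np.sqrt(s2 * (1.0 + leverage))
    y_hat = x_vec @ beta
    return y_hat - half, y_hat + half

lo, hi = prediction_interval(150.0)
print(f"predicted SSC at 150 NTU: [{lo:.0f}, {hi:.0f}] mg/L")
```

Multiplying the SSC interval by the (similarly uncertain) discharge then propagates into the sediment-load uncertainty discussed in the abstract.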

  2. Measurement uncertainty analysis in incoherent Doppler lidars by a new scattering approach.

    Science.gov (United States)

    Belmonte, Aniceto; Lázaro, Antonio

    2006-08-21

We examine the uncertainty added to the Doppler measurement of atmospheric wind speeds by a practical incoherent-detection lidar. For this application, the multibeam Fizeau wedge has the advantage over the Fabry-Perot interferometer of defining linear fringe patterns. Unfortunately, the convenience of using the transfer function for angular spectrum transmission has not been available because of the nonparallel mirror geometry of Fizeau wedges. In this paper, we extend the spatial-frequency arguments used in Fabry-Perot etalons to the Fizeau geometry by using a generalized scattering matrix method based on the propagation of optical vortices. Our technique opens the door to considering complex, realistic configurations for any Fizeau-based instrument.

  3. Comment on ‘A low-uncertainty measurement of the Boltzmann constant’

    Science.gov (United States)

    Macnaughton, Donald B.

    2016-02-01

    The International Committee for Weights and Measures has projected a major revision of the International System of Units in which all the base units will be defined by fixing the values of certain fundamental constants of nature. To assist, de Podesta et al recently experimentally obtained a precise new estimate of the Boltzmann constant. This estimate is proposed as a basis for the redefinition of the unit of temperature, the kelvin. The present paper reports a reanalysis of de Podesta et al’s data that reveals systematic non-random patterns in the residuals of the key fitted model equation. These patterns violate the assumptions underlying the analysis and thus they raise questions about the validity of de Podesta et al’s estimate of the Boltzmann constant. An approach is discussed to address these issues, which should lead to an accurate estimate of the Boltzmann constant with a lower uncertainty.
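One simple way to detect the kind of systematic, non-random residual pattern reported in this reanalysis is a Wald-Wolfowitz runs test on the signs of the residuals. The sketch below is a generic illustration, not the method used in the paper; the residual series is invented to show long same-sign runs:

```python
import math

def runs_test_z(residuals):
    """Wald-Wolfowitz runs test on residual signs: a large |z|
    indicates fewer (or more) sign runs than expected under
    randomness, i.e. systematic structure in the residuals."""
    signs = [r > 0 for r in residuals if r != 0]
    n_pos = sum(signs)
    n_neg = len(signs) - n_pos
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n = n_pos + n_neg
    mu = 2 * n_pos * n_neg / n + 1                # expected runs
    var = (mu - 1) * (mu - 2) / (n - 1)           # variance of runs
    return (runs - mu) / math.sqrt(var)

# A residual pattern with long same-sign runs (systematic curvature):
systematic = [-3, -2, -1, -0.5, 0.5, 1, 2, 3, 2, 1, 0.5,
              -0.5, -1, -2, -3, -2]
z = runs_test_z(systematic)
print(f"z = {z:.2f}")
```

A |z| well above 2 rejects randomness of the residual signs, which is the kind of violation of model assumptions the comment raises.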

  4. Monte Carlo Method for Calculating Oxygen Abundances and Their Uncertainties from Strong-Line Flux Measurements

    CERN Document Server

    Bianco, Federica B; Oh, Seung Man; Fierroz, David; Liu, Yuqian; Kewley, Lisa; Graur, Or

    2015-01-01

    We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity scales, based on the original IDL code of Kewley & Dopita (2002) with updates from Kewley & Ellison (2008), and expanded to include more recently developed scales. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo (MC) sampling, better characterizes the statistical reddening-corrected oxygen abundance confidence region. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 66% confidence regions. In additi...
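The MC strategy pyMCZ uses can be illustrated on a single scale. The sketch below perturbs two line fluxes with Gaussian noise and propagates them through the Pettini & Pagel (2004) N2 calibration, one of the standard strong-line scales; the fluxes and uncertainties are invented, and reddening correction is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative line fluxes and 1-sigma uncertainties (arbitrary units)
f_nii, sig_nii = 120.0, 12.0     # [N II] 6584
f_ha, sig_ha = 1000.0, 50.0      # H-alpha

n_mc = 20000
nii = rng.normal(f_nii, sig_nii, n_mc)
ha = rng.normal(f_ha, sig_ha, n_mc)
ok = (nii > 0) & (ha > 0)                 # discard unphysical draws
n2 = np.log10(nii[ok] / ha[ok])
oh = 8.90 + 0.57 * n2                     # 12 + log(O/H), PP04 N2 scale

median = np.median(oh)
lo, hi = np.percentile(oh, [17, 83])      # central 66 % region
print(f"12+log(O/H) = {median:.3f} (+{hi - median:.3f}/-{median - lo:.3f})")
```

Because the calibration is nonlinear in the fluxes, the sampled abundance distribution is asymmetric, which is why the code reports median and percentile regions rather than a Gaussian error bar.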

  5. A Monte-Carlo investigation of the uncertainty of acoustic decay measurements

    DEFF Research Database (Denmark)

    Cabo, David Pérez; Seoane, Manuel A. Sobreira; Jacobsen, Finn

    2012-01-01

…taking into account the influence of the magnitude response and the phase distortion. It will be shown how the error depends not only on the filter but also on the modal density and the position of the resonances of the system under test within the frequency band. A Monte-Carlo computer simulation has been set up: the model function is a model of the acoustic decays, where the modal density, the resonances of the system, and the amplitude and phase of the normal modes may be considered as random variables. Once the random input variables and the model function are defined, the uncertainty of acoustic decay measurements can be estimated. Different filters will be analysed: linear-phase FIR and IIR filters, both in their direct and time-reversed versions. © European Acoustics Association.
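A stripped-down version of such a Monte-Carlo experiment, with random modal frequencies and phases as the input variables and the decay time estimated by Schroeder backward integration, might look as follows. All parameters are illustrative, and the band-pass filtering stage studied in the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
T60_true = 0.5                          # decay time of every mode, s
decay = np.log(10 ** 6) / T60_true / 2  # amplitude decay constant

def estimate_t60():
    """One Monte-Carlo draw: random modal frequencies and phases in a
    band, Schroeder backward integration, linear fit of the decay."""
    freqs = rng.uniform(400, 500, 5)    # 5 modes in the band
    phases = rng.uniform(0, 2 * np.pi, 5)
    h = np.exp(-decay * t) * sum(
        np.cos(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
    edc = np.cumsum(h[::-1] ** 2)[::-1]         # backward integration
    edc_db = 10 * np.log10(edc / edc[0])
    mask = (edc_db <= -5) & (edc_db >= -25)     # fit over -5..-25 dB
    slope = np.polyfit(t[mask], edc_db[mask], 1)[0]
    return -60.0 / slope

t60s = [estimate_t60() for _ in range(200)]
mean_t60 = np.mean(t60s)
sd_t60 = np.std(t60s)
print(f"T60 = {mean_t60:.3f} +/- {sd_t60:.3f} s")
```

The spread of the estimates over the random modal configurations is exactly the uncertainty the study quantifies; adding the filter's magnitude and phase response to `h` would reproduce its filter-dependent component.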

  6. Uncertainty calculations for the measurement of in vivo bone lead by x-ray fluorescence.

    Science.gov (United States)

    O'Meara, J M; Fleming, D E B

    2009-04-21

In order to quantify the bone lead concentration from an in vivo x-ray fluorescence measurement, typically two estimates of the lead concentration are determined by comparing the normalized x-ray peak amplitudes from the Kα1 and Kβ1 features to those of the calibration phantoms. In each case, the normalization consists of taking the ratio of the x-ray peak amplitude to the amplitude of the coherently scattered photon peak in the spectrum. These two Pb concentration estimates are then used to determine the weighted mean lead concentration of that sample. In calculating the uncertainties of these measurements, it is important to include any covariance terms where appropriate. When determining the uncertainty of the lead concentrations from each x-ray peak, the standard approach does not include covariance between the x-ray peaks and the coherently scattered feature. These spectral features originate from two distinct physical processes, and therefore no covariance between these features can exist. Through experimental and simulated data, we confirm that there is no observed covariance between the detected Pb x-ray peaks and the coherently scattered photon signal, as expected. This is in direct contrast to recent work published by Brito (2006 Phys. Med. Biol. 51 6125-39). There is, however, covariance introduced in the calculation of the weighted mean lead concentration due to the common coherent normalization. This must be accounted for in calculating the uncertainty of the weighted mean lead concentration, as is currently the case. We propose here an alternative approach to calculating the weighted mean lead concentration in such a way as to eliminate the covariance introduced by the common coherent normalization.
It should be emphasized that this alternative approach will only apply in situations in which the calibration line intercept is not included in the calculation of the Pb concentration from the spectral data: when the source of the intercept is
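The effect of the covariance term introduced by a common normalization on the weighted mean can be shown numerically. The concentrations, uncertainties, and covariance below are invented for illustration:

```python
# Hypothetical Pb-concentration estimates (µg Pb per g bone mineral)
# from the two x-ray peaks; numbers are illustrative only.
c_a, u_a = 12.0, 3.0
c_b, u_b = 15.0, 4.0
cov_ab = 2.0   # covariance from the common coherent normalization

w_a, w_b = 1 / u_a**2, 1 / u_b**2
mean = (w_a * c_a + w_b * c_b) / (w_a + w_b)   # inverse-variance mean

# Standard inverse-variance result (ignores covariance) ...
var_no_cov = 1 / (w_a + w_b)
# ... versus full propagation including the covariance term:
var_full = (w_a**2 * u_a**2 + w_b**2 * u_b**2
            + 2 * w_a * w_b * cov_ab) / (w_a + w_b) ** 2

print(f"mean = {mean:.2f}, u = {var_no_cov**0.5:.2f} (no cov) "
      f"vs {var_full**0.5:.2f} (with cov)")
```

With a positive covariance the full propagation gives a larger uncertainty than the naive inverse-variance formula, which is why neglecting the common-normalization term understates the uncertainty of the weighted mean.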

  7. The association between the negative effects attributed to ecstasy use and measures of cognition and mood among users.

    Science.gov (United States)

    Fisk, John E; Montgomery, Catharine; Murphy, Philip N

    2009-10-01

In self reports, abstinent ecstasy/polydrug users claim that they experience certain ongoing affective and psychological changes including elevated anxiety, arousal, and depression. In addition, various aspects of cognition (e.g., everyday memory, reasoning, executive functioning) appear to be affected. The present paper investigated the link between these two psychological sequelae. Ninety-five ecstasy/polydrug users completed tests of reasoning, intelligence, information processing speed, executive functioning, and everyday memory. Affect was measured via a mood adjective checklist. Adverse effects attributed to ecstasy were measured via responses to adjectives reflecting changes in users since they started using the drug. In addition, indicators of sleep quality and daytime sleepiness were obtained. Users attributed a number of adverse effects to ecstasy, namely heightened irritability, depression, paranoia, and deteriorating health. Adverse effects were significantly and negatively correlated with aspects of intelligence, everyday memory, and sleep quality. Length of ecstasy use was positively correlated with adverse effects. While many users attribute a number of adverse effects to their use of ecstasy, it remains unclear whether these self-perceptions are a corollary of the psychopharmacological effects of the drug or reflect factors which in fact predate its use.

  8. Spatial resolution and measurement uncertainty of strains in bone and bone-cement interface using digital volume correlation.

    Science.gov (United States)

    Zhu, Ming-Liang; Zhang, Qing-Hang; Lupton, Colin; Tong, Jie

    2016-04-01

The measurement uncertainty of strains has been assessed in a bone analogue (sawbone), bovine trabecular bone and bone-cement interface specimens under zero load using the Digital Volume Correlation (DVC) method. The effects of sub-volume size, sample constraint and preload on the measured strain uncertainty have been examined. There is generally a trade-off between the measurement uncertainty and the spatial resolution. Suitable sub-volume sizes have been selected based on a compromise between the measurement uncertainty and the spatial resolution of the cases considered. A ratio of sub-volume size to a microstructure characteristic (Tb.Sp) was introduced to reflect a suitable spatial resolution, and the associated measurement uncertainty was assessed. Specifically, ratios between 1.6 and 4 appear to give rise to standard deviations in the measured strains between 166 and 620 με in all the cases considered, which would seem to suffice for strain analysis in pre- as well as post-yield loading regimes. A microscale finite element (μFE) model was built from the CT images of the sawbone, and the results from the μFE model and a continuum FE model were compared with those from the DVC. The strain results were found to differ significantly between the two methods at tissue level, consistent in trend with the results found in human bones, indicating mainly a limitation of the current DVC method in mapping strains at this level.
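The trade-off between sub-volume size and strain uncertainty under zero load can be caricatured in one dimension: larger fitting windows average more displacement noise and therefore yield smaller spurious strains, at the cost of spatial resolution. Everything below is an illustrative sketch, not the DVC algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-load analogue: a 1-D displacement profile that is pure
# measurement noise (registration error), so any non-zero strain
# recovered from it is measurement uncertainty.
n_voxels, noise_sd = 512, 0.05       # displacement noise in voxels

def strain_sd(sub_volume):
    """Std of strains from least-squares gradients over windows of
    `sub_volume` voxels -- a 1-D caricature of the DVC trade-off."""
    u = rng.normal(0.0, noise_sd, n_voxels)
    x = np.arange(sub_volume, dtype=float)
    strains = []
    for start in range(0, n_voxels - sub_volume, sub_volume):
        window = u[start:start + sub_volume]
        strains.append(np.polyfit(x, window, 1)[0])  # slope = strain
    return np.std(strains)

small, large = strain_sd(8), strain_sd(64)
print(f"strain sd: {small:.2e} (8-voxel) vs {large:.2e} (64-voxel)")
```

The smaller window gives markedly noisier strains, mirroring the compromise between uncertainty and spatial resolution that the sub-volume-to-Tb.Sp ratio formalizes.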

  9. New Measurement Method and Uncertainty Estimation for Plate Dimensions and Surface Quality

    Directory of Open Access Journals (Sweden)

    Salah H. R. Ali

    2013-01-01

Dimensional and surface quality control for plate production faces difficult engineering challenges. One of these challenges is that plates in large-scale mass production have geometrically uneven surfaces. A traditional measurement method is used to assess tile plate dimensions and surface quality based on the standard specifications ISO 10545-2:1995, EOS 3168-2:2007, and TIS 2398-2:2008. A proposed measurement method for the dimensions and surface quality of ceramic oblong large-scale tile plates has been developed and compared to the traditional method. The strategy of the new method is based on a CMM straightness measurement strategy instead of the centre-point strategy of the traditional method. Expanded uncertainty budgets for the measurements of each method have been estimated in detail. Accurate estimates of the centre of curvature (CC), centre of edge (CE), warpage (W), and edge crack defect parameters have been achieved according to the standards. Moreover, the obtained results not only demonstrated the better accuracy of the new method but also significantly improved the quality of the plate products.
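The straightness-based evaluation can be sketched as a least-squares reference line fitted to CMM probe points, with the peak-to-valley deviation and a centre-versus-corners warpage proxy read off from it. The probe heights and the simple warpage definition below are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

# Hypothetical CMM probe heights (mm) along one edge of a tile plate,
# at positions x (mm).  Values are illustrative only.
x = np.array([0, 50, 100, 150, 200, 250, 300], dtype=float)
z = np.array([0.00, 0.04, 0.09, 0.11, 0.08, 0.05, 0.02])

# Least-squares reference line, as in a CMM straightness strategy
slope, intercept = np.polyfit(x, z, 1)
dev = z - (slope * x + intercept)
straightness = dev.max() - dev.min()            # peak-to-valley
warpage = z[len(z) // 2] - (z[0] + z[-1]) / 2   # centre vs corners

print(f"straightness = {straightness * 1000:.1f} um, "
      f"warpage = {warpage * 1000:.1f} um")
```

Sampling the whole line rather than a single centre point is what lets the CMM strategy capture uneven surfaces that the traditional centre-point method misses.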

  10. Prior information: how to beat the standard joint-measurement uncertainty relation

    CERN Document Server

    Hall, M J W

    2003-01-01

The canonical joint measurement of position X and momentum P corresponds to measuring the commuting operators X_J = X + X', P_J = P - P', where the primed variables refer to an auxiliary system in a minimum-uncertainty state. It is well known that Delta X_J Delta P_J >= hbar. Here it is shown that given the _same_ physical experimental setup, and information about the system _prior_ to measurement, one can make improved joint estimates X_est and P_est of X and P. These improved estimates are not only statistically closer to X and P: they satisfy Delta X_est Delta P_est >= hbar/4, where equality can be achieved in certain cases. Thus one can do up to four times better than the standard lower bound (where the latter corresponds to the limit of _no_ prior information). A formula is given for the optimal estimate of any observable, based on arbitrary measurement data and prior information about the state of the system, which generalises and provides a more robust interpretation of previous formulas for `local expectations' ...

  11. Improved evaluation of measurement uncertainty from sampling by inclusion of between-sampler bias using sampling proficiency testing.

    Science.gov (United States)

    Ramsey, Michael H; Geelhoed, Bastiaan; Wood, Roger; Damant, Andrew P

    2011-04-01

    A realistic estimate of the uncertainty of a measurement result is essential for its reliable interpretation. Recent methods for such estimation include the contribution to uncertainty from the sampling process, but they only include the random and not the systematic effects. Sampling Proficiency Tests (SPTs) have been used previously to assess the performance of samplers, but the results can also be used to evaluate measurement uncertainty, including the systematic effects. A new SPT conducted on the determination of moisture in fresh butter is used to exemplify how SPT results can be used not only to score samplers but also to estimate uncertainty. The comparison between uncertainty evaluated within- and between-samplers is used to demonstrate that sampling bias is causing the estimates of expanded relative uncertainty to rise by over a factor of two (from 0.39% to 0.87%) in this case. General criteria are given for the experimental design and the sampling target that are required to apply this approach to measurements on any material.
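The within- versus between-sampler comparison can be sketched from an SPT-style table of duplicate results. The moisture values below are invented; the between-sampler spread is constructed to exceed the duplicate repeatability, mimicking the sampling bias reported above:

```python
import numpy as np

# Hypothetical SPT results: each of 6 samplers took duplicate samples
# of the same butter target; moisture in %.  Values are illustrative.
data = np.array([
    [15.92, 15.98],
    [16.10, 16.04],
    [15.85, 15.91],
    [16.20, 16.26],
    [15.95, 15.89],
    [16.15, 16.09],
])

grand = data.mean()
# within-sampler (random) component from the duplicate differences
s_within = np.sqrt(((data[:, 0] - data[:, 1]) ** 2).mean() / 2)
# between-sampler spread of sampler means includes sampling bias
s_between = data.mean(axis=1).std(ddof=1)

U_within = 2 * s_within / grand * 100    # expanded relative, k = 2
U_between = 2 * s_between / grand * 100
print(f"U(within) = {U_within:.2f} %, U(between) = {U_between:.2f} %")
```

When the between-sampler estimate substantially exceeds the within-sampler one, as here, sampling bias is inflating the expanded uncertainty, which is the systematic effect the SPT design makes visible.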

  12. Analysis of the Intrinsic Uncertainties in the Laser-Driven Iron Hugoniot Experiment Based on the Measurement of Velocities

    Institute of Scientific and Technical Information of China (English)

    Huan Zhang; Xiao-Xi Duan; Chen Zhang; Hao Liu; Hui-Ge Zhang; Quan-Xi Xue; Qing Ye

    2016-01-01

    One of the most challenging tasks in the laser-driven Hugoniot experiment is how to increase the reproducibility and precision of the experimental data to meet the stringent requirements for validating equation-of-state models. In such cases, the contribution of intrinsic uncertainty becomes important and cannot be ignored. A detailed analysis of the intrinsic uncertainty of the aluminum-iron impedance-match experiment based on the measurement of velocities is presented. The influence of the mirror-reflection approximation on the shocked pressure of Fe and the intrinsic uncertainties arising from the equation-of-state uncertainty of the standard material are quantified. Furthermore, a comparison of the intrinsic uncertainties of four different experimental approaches is presented. It is shown that, compared with other approaches, including the most widely used approach relying on the measurements of the shock velocities of Al and Fe, the approach relying on the measurement of the particle velocity of Al and the shock velocity of Fe has the smallest intrinsic uncertainty, which should significantly improve the diagnostic precision achievable with such an approach.

  13. Time-Resolved Particle Image Velocimetry Measurements with Wall Shear Stress and Uncertainty Quantification for the FDA Nozzle Model.

    Science.gov (United States)

    Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P

    2016-03-01

    We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was greatly reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. 
This study provides advancements in the PIV processing methodologies over ...
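
    The Monte Carlo propagation of velocity uncertainty into wall shear stress described above can be sketched as follows. All numerical values (viscosity, wall distance, near-wall velocity and its standard uncertainty) are hypothetical placeholders, not data from the study, and the one-point gradient estimate tau = mu*u/dy is a simplification of the radial-basis-function approach the authors use:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical inputs: near-wall velocity u at wall distance dy,
    # dynamic viscosity mu, and PIV velocity standard uncertainty sigma_u
    # (e.g. from the correlation-plane uncertainty estimate).
    mu = 3.5e-3      # Pa*s
    dy = 1.0e-4      # m
    u = 0.05         # m/s
    sigma_u = 0.002  # m/s

    # Monte Carlo propagation: perturb the velocity sample and recompute
    # the wall shear stress for each draw.
    samples = mu * rng.normal(u, sigma_u, 100_000) / dy
    print(f"tau = {samples.mean():.3f} +/- {samples.std():.3f} Pa")
    ```
    
    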

  14. Estimation of the measurement uncertainty by the bottom-up approach for the determination of methamphetamine and amphetamine in urine.

    Science.gov (United States)

    Lee, Sooyeun; Choi, Hyeyoung; Kim, Eunmi; Choi, Hwakyung; Chung, Heesun; Chung, Kyu Hyuck

    2010-05-01

    The measurement uncertainty (MU) of methamphetamine (MA) and amphetamine (AP) was estimated in an authentic urine sample with a relatively low concentration of MA and AP using the bottom-up approach. A cause-and-effect diagram was deduced; the amount of MA or AP in the sample, the volume of the sample, method precision, and sample effect were considered uncertainty sources. The concentrations of MA and AP in the urine sample, with their expanded uncertainties, were 340.5 +/- 33.2 ng/mL and 113.4 +/- 15.4 ng/mL, respectively, corresponding to relative expanded uncertainties of 9.7% and 13.6%. The largest uncertainty contributions originated from the sample effect for MA and from method precision for AP, while the uncertainty from the sample volume was minimal for both. The MU needs to be determined during the method validation process to assess test reliability. Moreover, the identification of the largest and/or smallest uncertainty source can help improve experimental protocols.
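
    The bottom-up combination underlying the quoted figures can be sketched as follows. The four relative components are hypothetical illustrations (the abstract names the sources but not their individual values); k = 2 gives the expanded uncertainty at approximately 95% confidence:

    ```python
    import math

    def expanded_uncertainty(value, rel_components, k=2):
        """Combine independent relative standard uncertainties in
        quadrature, then expand with coverage factor k."""
        u_rel = math.sqrt(sum(c * c for c in rel_components))
        return k * u_rel * value

    # Hypothetical relative standard uncertainties for the four sources
    # named in the abstract: amount, sample volume, method precision,
    # sample effect.
    U = expanded_uncertainty(340.5, [0.030, 0.020, 0.025, 0.015])
    print(f"340.5 +/- {U:.1f} ng/mL")
    ```
    
    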

  15. Sensitivity of Large-Aperture Scintillometer Measurements of Area-Average Heat Fluxes to Uncertainties in Topographic Heights

    CERN Document Server

    Gruber, Matthew A; Hartogensis, Oscar K

    2013-01-01

    Scintillometers measure $C_n^2$ over large areas of turbulence in the atmospheric surface layer. Turbulent fluxes of heat and momentum are inferred through coupled sets of equations derived from the Monin-Obukhov similarity hypothesis. One-dimensional sensitivity functions have been produced which relate the sensitivity of heat fluxes to uncertainties in single values of beam height over homogeneous and flat terrain. Real field sites include variable topography and heterogeneous surface properties such as roughness length. We develop here the first analysis of the sensitivity of scintillometer-derived sensible heat fluxes to uncertainties in spatially distributed topographic measurements. For large-aperture scintillometers and independent $u_\\star$ measurements, sensitivity is shown to be concentrated in areas near the center of the beam and where the underlying topography is closest to the beam height. Uncertainty may be greatly reduced by focusing precise topographic measurements in these areas. The new two...

  16. Reliable and valid NEWS for Chinese seniors: measuring perceived neighborhood attributes related to walking

    Directory of Open Access Journals (Sweden)

    Lee Lok-chun

    2010-11-01

    Abstract Background The effects of the built environment on walking in seniors have not been studied in an Asian context. To examine these effects, valid and reliable measures are needed. The aim of this study was to develop and validate a questionnaire of perceived neighborhood characteristics related to walking appropriate for Chinese seniors (Neighborhood Environment Walkability Scale for Chinese Seniors, NEWS-CS). It was based on the Neighborhood Environment Walkability Scale - Abbreviated (NEWS-A), a validated measure of the perceived built environment developed in the USA for adults. A secondary study aim was to establish the generalizability of the NEWS-A to an Asian high-density urban context and a different age group. Methods A multidisciplinary panel of experts adapted the original NEWS-A to reflect the built environment of Hong Kong and the needs of seniors. The translated instrument was pre-tested on a sample of 50 Chinese-speaking senior residents (65+ years). The final version of the NEWS-CS was interviewer-administered to 484 seniors residing in four selected Hong Kong districts varying in walkability and socio-economic status. Ninety-two participants completed the questionnaire on two separate occasions, 2-3 weeks apart. Test-retest reliability indices were estimated for each item and subscale of the NEWS-CS. Confirmatory factor analysis was used to develop the measurement model of the NEWS-CS and cross-validate that of the NEWS-A. Results The final version of the NEWS-CS consisted of 14 subscales and four single items (76 items). Test-retest reliability was moderate to good (ICC > 0.50 or % agreement > 60%) except for four items measuring distance to destinations. The originally-proposed measurement models of the NEWS-A and NEWS-CS required 2-3 theoretically-justifiable modifications to fit the data well. Conclusions The NEWS-CS possesses sufficient levels of reliability and factorial validity to be used for measuring perceived neighborhood

  17. Measuring the Uncertainty of Probabilistic Maps Representing Human Motion for Indoor Navigation

    Directory of Open Access Journals (Sweden)

    Susanna Kaiser

    2016-01-01

    Indoor navigation and mapping have recently become an important field of interest for researchers because global navigation satellite systems (GNSS) are very often unavailable inside buildings. FootSLAM, a SLAM (Simultaneous Localization and Mapping) algorithm for pedestrians based on step measurements, addresses the indoor mapping and positioning problem and can provide accurate positioning in many structured indoor environments. In this paper, we investigate how to compare FootSLAM maps via two entropy metrics. Since collaborative FootSLAM requires the alignment and combination of several individual FootSLAM maps, we also investigate measures that help to align maps that partially overlap. We distinguish between the map entropy conditioned on the sequence of pedestrian’s poses, which is a measure of the uncertainty of the estimated map, and the entropy rate of the pedestrian’s steps conditioned on the history of poses and conditioned on the estimated map. Because FootSLAM maps are built on a hexagon grid, the entropy and relative entropy metrics are derived for the special case of hexagonal transition maps. The entropy gives us a new insight on the performance of FootSLAM’s map estimation process.
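
    A toy illustration of the entropy idea above, assuming a per-hexagon transition distribution over the six neighbouring hexagons (numbers hypothetical): a learned map that constrains pedestrian motion has lower entropy than the uninformative uniform map.

    ```python
    import math

    def entropy(probs):
        """Shannon entropy in bits of a discrete distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Transition probabilities from one hexagon to its 6 neighbours.
    uniform = [1 / 6] * 6                        # no information
    learned = [0.55, 0.25, 0.10, 0.05, 0.03, 0.02]  # hypothetical estimate

    print(entropy(uniform))  # log2(6) bits: maximally uncertain map
    print(entropy(learned))  # lower: the map constrains motion
    ```
    
    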

  18. International target values 2010 for achievable measurement uncertainties in nuclear material accountancy

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Fabio C., E-mail: fabio@ird.gov.b [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Almeida, Silvio G. de; Renha Junior, Geraldo, E-mail: silvio@abacc.org.b, E-mail: grenha@abacc.org.b [Agencia Brasileiro-Argentina de Contabilidade e Controle de Materiais Nucleares (ABACC), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    The International Target Values (ITVs) are reasonable uncertainty estimates that can be used in judging the reliability of measurement techniques applied to industrial nuclear and fissile materials subject to accountancy and/or safeguards verification. In the absence of relevant experimental estimates, ITVs can also be used to select measurement techniques and calculate sample populations during the planning phase of verification activities. It is important to note that ITVs represent estimates of the 'state-of-the-practice', which should be achievable under routine measurement conditions affecting both facility operators and safeguards inspectors, not only in the field, but also in the laboratory. Tabulated values cover measurement methods used for the determination of the volume or mass of the nuclear material, for its elemental and isotopic assays, and for its sampling. The 2010 edition represents the sixth revision of the International Target Values (ITVs), issued by the International Atomic Energy Agency (IAEA) as a Safeguards Technical Report (STR-368). The first version was released as 'Target Values' in 1979 by the Working Group on Techniques and Standards for Destructive Analysis (WGDA) of the European Safeguards Research and Development Association (ESARDA) and focused on destructive analytical methods. In the latest 2010 revision, international standards in estimating and expressing uncertainties have been considered while maintaining a format that allows comparison with the previous editions of the ITVs. Those standards have usually been applied in QC/QA programmes, as well as in the qualification of methods, techniques and instruments. Representatives of the Brazilian Nuclear Energy Commission (CNEN) and the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials (ABACC) have participated in the Consultants Group Meetings since the one convened to establish the first list of ITVs released in 1993, and in subsequent revisions.

  19. Determination of boron in uranium aluminum silicon alloy by spectrophotometry and estimation of expanded uncertainty in measurement

    Science.gov (United States)

    Ramanjaneyulu, P. S.; Sayi, Y. S.; Ramakumar, K. L.

    2008-08-01

    Quantification of boron in diverse materials of relevance to nuclear technology is essential in view of its high thermal neutron absorption cross section. A simple and sensitive method has been developed for the determination of boron in uranium-aluminum-silicon alloy, based on leaching of boron with 6 M HCl and H2O2, its selective separation by solvent extraction with 2-ethyl-1,3-hexanediol, and quantification by spectrophotometry using curcumin. The method has been evaluated by the standard addition method and validated by inductively coupled plasma-atomic emission spectroscopy. The relative standard deviation and absolute detection limit of the method are 3.0% (at the 1 σ level) and 12 ng, respectively. All possible sources of uncertainty in the methodology have been individually assessed, following the International Organization for Standardization guidelines. The combined uncertainty is calculated employing uncertainty propagation formulae. The expanded uncertainty in the measurement at the 95% confidence level (coverage factor 2) is 8.840%.

  20. Minimizing measurement uncertainties of coniferous needle-leaf optical properties, part II: experimental set-up and error analysis

    NARCIS (Netherlands)

    Yanez Rausell, L.; Malenovsky, Z.; Clevers, J.G.P.W.; Schaepman, M.E.

    2014-01-01

    We present uncertainties associated with the measurement of coniferous needle-leaf optical properties (OPs) with an integrating sphere using an optimized gap-fraction (GF) correction method, where GF refers to the air gaps appearing between the needles of a measured sample. We used an optically stab

  1. Conversion factor and uncertainty estimation for quantification of towed gamma-ray detector measurements in Tohoku coastal waters

    Science.gov (United States)

    Ohnishi, S.; Thornton, B.; Kamada, S.; Hirao, Y.; Ura, T.; Odano, N.

    2016-05-01

    Factors to convert the count rate of a NaI(Tl) scintillation detector to the concentration of radioactive cesium in marine sediments are estimated for a towed gamma-ray detector system. The response of the detector to a unit concentration of radioactive cesium is calculated by Monte Carlo radiation transport simulation, considering the vertical profile of radioactive material measured in core samples. The conversion factors are acquired by integrating the contribution of each layer and are normalized by the concentration in the surface sediment layer. At the same time, the uncertainty of the conversion factors is formulated and estimated. The combined standard uncertainty of the radioactive cesium concentration from the towed gamma-ray detector is around 25 percent. The discrepancies, often referred to as relative root-mean-square errors in other works, between sediment core sampling measurements and towed detector measurements were 16 percent in the investigation made near the Abukuma River mouth and 5.2 percent in Sendai Bay. Most of the uncertainty is due to interpolation of the conversion factors between core samples and to uncertainty in the detector's burial depth. The results of the towed measurements agree well with laboratory-analysed sediment samples. Also, the concentrations of radioactive cesium at the intersections of the survey lines are consistent. The consistency with sampling results and between different lines' transects demonstrates the applicability and reproducibility of the towed gamma-ray detector system.

  2. Construction of measurement uncertainty profiles for quantitative analysis of genetically modified organisms based on interlaboratory validation data.

    Science.gov (United States)

    Macarthur, Roy; Feinberg, Max; Bertheau, Yves

    2010-01-01

    A method is presented for estimating the size of the uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called the accuracy profile, developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and to the assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods, collected through a number of interlaboratory studies, followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with the European Network of GM Laboratories and European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness-for-purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be greater than 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.
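
    The decision rule implied by the 50-200% expected range can be sketched as follows. This is a simplified reading of the abstract (threshold times the upper uncertainty factor of 2), not the EU regulatory text:

    ```python
    def noncompliant(result_pct, threshold_pct=0.9, expansion=2.0):
        """A result demonstrates noncompliance only when it exceeds the
        labeling threshold by the measurement-uncertainty factor.  With
        the ~50-200% expected range from the uncertainty profile, the
        factor is 2, so the decision limit for a 0.9% threshold is 1.8%."""
        return result_pct > threshold_pct * expansion

    print(noncompliant(1.5))   # below the 1.8% decision limit
    print(noncompliant(2.0))   # above it
    ```
    
    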

  3. Suspended matter concentrations in coastal waters: Methodological improvements to quantify individual measurement uncertainty

    Science.gov (United States)

    Röttgers, Rüdiger; Heymann, Kerstin; Krasemann, Hajo

    2014-12-01

    Measurements of total suspended matter (TSM) concentration and the discrimination of the particulate inorganic (PIM) and organic matter fractions by the loss-on-ignition method are susceptible to significant and contradictory bias errors from: (a) retention of sea salt in the filter (despite washing with deionized water), and (b) filter material loss during washing and combustion procedures. Several methodological procedures are described to avoid or correct errors associated with these biases, but no analysis of the final uncertainty for the overall mass concentration determination has yet been performed. Typically, the exact values of these errors are unknown and can only be estimated. Measurements were performed in coastal and estuarine waters of the German Bight that allowed the individual error for each sample to be determined with respect to a systematic mass offset. This was achieved by using different volumes of the sample and analyzing the mass-over-volume relationship by linear regression. The results showed that the variation in the mass offset is much larger than expected (mean mass offset: 0.85 ± 0.84 mg, range: -2.4 to 7.5 mg) and that it often leads to rather large relative errors even when TSM concentrations were high. Similarly large variations were found for the mass offset in PIM measurements. Correction with a mean offset determined from procedural control filters reduced the maximum error substantially, and when the offset was determined individually for each sample, errors of only a few percent were obtained. The approach proposed here can determine the individual determination error for each sample, is independent of bias errors, can be used for TSM and PIM determination, and allows individual quality control for samples from coastal and estuarine waters. It should be possible to use the approach in oceanic or fresh water environments as well. The possibility of individual quality control will allow mass-specific optical properties to be determined with
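
    The mass-over-volume regression described above can be sketched with synthetic data (all values hypothetical): the intercept recovers the sample-specific mass offset, and the slope recovers the TSM concentration.

    ```python
    import numpy as np

    # Filter mass gain (mg) versus filtered sample volume (L).
    volume = np.array([0.25, 0.5, 1.0, 1.5, 2.0])   # L
    tsm_true, offset_true = 12.0, 0.85              # hypothetical values
    rng = np.random.default_rng(1)
    mass = tsm_true * volume + offset_true + rng.normal(0, 0.05, volume.size)

    # Linear regression: slope = TSM concentration, intercept = mass offset.
    slope, intercept = np.polyfit(volume, mass, 1)
    print(f"TSM = {slope:.2f} mg/L, mass offset = {intercept:.2f} mg")
    ```
    
    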

  4. Fission Meter Information Barrier Attribute Measurement System: Task 1 Report: Document existing Fission Meter neutron IB system

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, P. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-28

    An SNM attribute Information Barrier (IB) system was developed for a 2011 US/UK Exercise. The system was modified and extensively tested in a 2013-2014 US-UK Measurement Campaign. This work demonstrated rapid deployment of an IB system for potential treaty use. The system utilizes an Ortec Fission Meter neutron multiplicity counter and custom computer code. The system demonstrates a proof-of-principle automated Pu-240 mass determination with an information barrier. After a software start command is issued, the system automatically acquires and downloads data, performs an analysis, and displays the results. The system conveys the results of Pu mass threshold measurements in a way that does not reveal sensitive information. In full IB mode, only red/green ‘lights’ are displayed in the software. In test mode, more detailed information is displayed. The code can also read in, analyze, and display results from previously acquired or simulated data. Because the equipment is commercial-off-the-shelf (COTS), the system demonstrates a low-cost, short-lead-time technology for treaty SNM attribute measurements. A deployed system will likely require integration of additional authentication and tamper-indicating technologies. This will be discussed for the project in this and future progress reports.

  5. On challenges in the uncertainty evaluation for time-dependent measurements

    Science.gov (United States)

    Eichstädt, S.; Wilkens, V.; Dienstfrey, A.; Hale, P.; Hughes, B.; Jarvis, C.

    2016-08-01

    The measurement of quantities with time-dependent values is a common task in many areas of metrology. Although well established techniques are available for the analysis of such measurements, serious scientific challenges remain to be solved to enable their routine use in metrology. In this paper we focus on the challenge of estimating a time-dependent measurand when the relationship between the value of the measurand and the indication is modeled by a convolution. Mathematically, deconvolution is an ill-posed inverse problem, requiring regularization to stabilize the inversion in the presence of noise. We present and discuss deconvolution in three practical applications: thrust-balance, ultra-fast sampling oscilloscopes and hydrophones. Each case study takes a different approach to modeling the convolution process and regularizing its inversion. Critically, all three examples lack the assignment of an uncertainty to the influence of the regularization on the estimation accuracy. This is a grand challenge for dynamic metrology, for which to date no generic solution exists. The case studies presented here cover a wide range of time scales and prior knowledge about the measurand, and they can thus serve as starting points for future developments in metrology. The aim of this work is to present the case studies and demonstrate the challenges they pose for metrology.
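
    A minimal sketch of one common regularized-deconvolution scheme (Tikhonov regularization in the frequency domain). The case studies in the paper use their own model-specific approaches; all values below are hypothetical, and the regularization parameter `lam` is exactly the kind of choice whose uncertainty contribution the abstract identifies as an open problem.

    ```python
    import numpy as np

    def tikhonov_deconvolve(y, h, lam):
        """Frequency-domain deconvolution:
        X(f) = conj(H(f)) Y(f) / (|H(f)|^2 + lam).
        lam suppresses noise amplification where |H| is small,
        at the cost of bias in the estimate."""
        H = np.fft.rfft(h)
        X = np.conj(H) * np.fft.rfft(y) / (np.abs(H) ** 2 + lam)
        return np.fft.irfft(X, n=len(y))

    # Toy example: a step-like measurand blurred by an exponential
    # impulse response, with additive measurement noise.
    rng = np.random.default_rng(0)
    n = 256
    t = np.arange(n)
    x = (t > 64).astype(float)                   # true measurand
    h = np.exp(-t / 10.0); h /= h.sum()          # instrument response
    y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n=n)
    y += rng.normal(0, 0.002, n)                 # noisy indication
    x_hat = tikhonov_deconvolve(y, h, lam=1e-3)
    print(np.abs(x_hat - x).mean())              # reconstruction error
    ```

    Deconvolution sharpens the lagged step back toward the true measurand; raising `lam` trades residual blur for noise suppression.
    
    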

  6. Reporting unit size and measurement uncertainty: current Australian practice in clinical chemistry and haematology.

    Science.gov (United States)

    Hawkins, Robert C; Badrick, Tony

    2015-08-01

    In this study we aimed to compare the reporting unit size used by Australian laboratories for routine chemistry and haematology tests to the unit size used by learned authorities and in standard laboratory textbooks, and to the justified unit size based on measurement uncertainty (MU) estimates from quality assurance program data. MU was determined from Royal College of Pathologists of Australasia (RCPA) - Australasian Association of Clinical Biochemists (AACB) and RCPA Haematology Quality Assurance Program survey reports. The reporting unit size implicitly suggested in authoritative textbooks, the RCPA Manual, and the General Serum Chemistry program itself was noted. We also used published data on Australian laboratory practices. The best-performing laboratories could justify their chemistry unit size for 55% of analytes, while the comparable figures for the 50% and 90% laboratories were 14% and 8%, respectively. Reporting unit size was justifiable for all laboratories for red cell count, >50% for haemoglobin, but only the top 10% for haematocrit. Few, if any, could justify their mean cell volume (MCV) and mean cell haemoglobin concentration (MCHC) reporting unit sizes. The reporting unit size used by many laboratories is not justified by present analytical performance. Using MU estimates to determine the reporting interval for quantitative laboratory results ensures reporting practices match local analytical performance and recognises the inherent error of the measurement process.
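
    One way to derive a justified reporting interval from an MU estimate can be sketched as follows. The power-of-ten convention here is an illustrative assumption, not the rule applied in the study:

    ```python
    import math

    def justified_reporting_interval(u_std):
        """Illustrative convention: report to the smallest power of ten
        that is not below the standard uncertainty, so reported digits
        stay within the measurement uncertainty."""
        return 10.0 ** math.ceil(math.log10(u_std))

    # A result with u = 0.8 mmol/L justifies reporting to 1 mmol/L,
    # not to 0.1 mmol/L:
    print(justified_reporting_interval(0.8))   # 1.0
    print(justified_reporting_interval(0.04))  # 0.1
    ```
    
    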

  7. Measurement uncertainty in anti-doping quantitative analysis for prohibited threshold substances.

    Science.gov (United States)

    Barroso, Osquel; Miller, John; Squirrell, Alan; Westwood, Steven

    2012-07-01

    The standards of laboratory performance of the World Anti-Doping Agency (WADA)-accredited laboratories are defined in the WADA International Standard for Laboratories and its associated Technical Documents. These sets of rules aim to harmonize the production of valid laboratory test results and evidentiary data as well as the reporting of laboratory analytical findings. The determination of anti-doping rule violations in sport made on the basis of analytical quantitative confirmatory analyses for the presence of prohibited threshold substances, in particular, requires the application of specific compliance decision rules, which are established in the WADA Technical Document on Decision Limits. In this article, the use of measurement uncertainty information in the establishment of compliance Decision Limits and in evaluating the performance of a laboratory's quantitative analytical procedures over time and in relation to other laboratories through WADA's External Quality Assessment Scheme program is reviewed and discussed. Furthermore, a perspective is provided on the emerging challenges associated with the harmonization of the quantitative measurement of large-molecular weight biomolecules.

  8. Uncertainties of retrospective radon concentration measurements by multilayer surface trap detector

    Energy Technology Data Exchange (ETDEWEB)

    Bastrikov, V.; Kruzhalov, A. [Ural State Technical Univ., Yekaterinburg (Russian Federation); Zhukovsky, M. [Institute of Industrial Ecology UB RAS, Yekaterinburg (Russian Federation)

    2006-07-01

    A detector for retrospective radon exposure measurements has been developed. The detector consists of a multilayer package of solid-state nuclear track detectors of the LR-115 type. The nitrocellulose films work both as {alpha}-particle detectors and as absorbers decreasing the energy of the {alpha}-particles. The uncertainties of implanted {sup 210}Pb measurements by two- and three-layer detectors are assessed as functions of the surface {sup 210}Po activity and the gross background activity of the glass. A generalized compartment behavior model of radon decay products in the room atmosphere was developed and verified. It is shown that the parameters most influencing the value of the conversion coefficient from {sup 210}Po surface activity to average radon concentration are the aerosol particle concentration, the deposition velocity of unattached {sup 218}Po and the air exchange rate. It is demonstrated that with the use of additional information on the surface-to-volume room ratio, air exchange rate and aerosol particle concentration, the systematic bias of the conversion coefficient between the surface activity of {sup 210}Po and the average radon concentration can be decreased by up to 30%. (N.C.)

  9. Sampled-data based average consensus with measurement noises:convergence analysis and uncertainty principle

    Institute of Scientific and Technical Information of China (English)

    LI Tao; ZHANG JiFeng

    2009-01-01

    In this paper, sampled-data based average-consensus control is considered for networks consisting of continuous-time first-order integrator agents in a noisy distributed communication environment. The impact of the sampling size and the number of network nodes on the system performance is analyzed. The control input of each agent can only use information measured at the sampling instants from its neighborhood rather than the complete continuous process, and the measurements of its neighbors' states are corrupted by random noises. By probability limit theory and the properties of the graph Laplacian matrix, it is shown that for a connected network, the static mean square error between the individual states and the average of the initial states of all agents can be made arbitrarily small, provided the sampling size is sufficiently small. Furthermore, by properly choosing the consensus gains, almost sure consensus can be achieved. It is worth pointing out that an uncertainty principle of Gaussian networks is obtained, which implies that in the case of white Gaussian noises, no matter what the sampling size is, the product of the steady-state and transient performance indices is always equal to or larger than a constant depending on the noise intensity, network topology and the number of network nodes.
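
    A toy simulation of this setting, assuming a ring network, additive white measurement noise, and a decaying consensus gain (all parameters hypothetical, and the agent's own state is also taken as noisy here for simplicity). Because each column of the Laplacian sums to zero, the network average is invariant under the update, while the decaying gain suppresses the injected noise:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N, sigma = 10, 0.1           # agents, measurement-noise std
    x0 = rng.normal(0, 1, N)     # initial states
    target = x0.mean()           # consensus value to be reached

    # Laplacian of a ring graph.
    I = np.eye(N)
    L = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)

    x = x0.copy()
    for k in range(2000):
        noisy = x + rng.normal(0, sigma, N)    # noisy measured states
        x = x - (0.5 / (k + 2)) * (L @ noisy)  # decaying consensus gain

    print(f"spread: {np.ptp(x0):.3f} -> {np.ptp(x):.3f}")
    print(f"average drift: {abs(x.mean() - target):.2e}")
    ```
    
    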

  10. Measuring attributes of success of college students in nursing programs: a psychometric analysis.

    Science.gov (United States)

    Seago, Jean Ann; Wong, Sabrina T; Keane, Dennis; Grumbach, Kevin

    2008-01-01

    Because of the most recent nurse shortage it has become important to determine retention factors of nursing students in the context of various aspects of college nursing programs and institutional systems. The purpose of this article is to describe the psychometric properties of a new measure that could be useful in examining nursing student retention related to the educational institution characteristics, educational processes, and individual student characteristics. The measurement instrument was conceptually designed around 4 constructs and was administered to a test group and a validation group. The dispositional construct loaded differently for each group (test group: math and science ability, confidence in the future, and confidence in ability; validation group: math and science ability, confidence in the future, self-expectation, and confidence in ability). The situational construct factored on 4 subscales (financial issues, social support, missed classes, and work issues); the institutional construct on 4 factors (peer, overall experience, diversity, and faculty); the career values construct on 5 factors (job characteristics, autonomy, caring, flexibility, and work style). Based on the results of the factor analyses and alpha reliability, evidence supported using the dispositional subscales of math and science ability, the career values subscales of job characteristics and work style, the situational subscales of work issues and financial issues, and the institutional subscales of diversity and faculty. The other potential subscales need further refinement and testing.

  11. Development of the multi-attribute Adolescent Health Utility Measure (AHUM)

    Directory of Open Access Journals (Sweden)

    Beusterien Kathleen M

    2012-08-01

    Abstract Objective Obtain utilities (preferences) for a generalizable set of health states experienced by older children and adolescents who receive therapy for chronic health conditions. Methods A health state classification system, the Adolescent Health Utility Measure (AHUM), was developed based on generic health status measures and input from children with Hunter syndrome and their caregivers. The AHUM contains six dimensions with 4-7 severity levels: self-care, pain, mobility, strenuous activities, self-image, and health perceptions. Using the time trade-off (TTO) approach, a UK population sample provided utilities for 62 of the 16,800 AHUM states. A mixed effects model was used to estimate utilities for the AHUM states. The AHUM was applied to trial NCT00069641 of idursulfase for Hunter syndrome and its extension (NCT00630747). Results Observations (i.e., utilities) totaled 3,744 (12 x 312 participants), with between 43 and 60 observations for each health state except for the best and worst states, which had 312 observations each. The mean utilities for the best and worst AHUM states were 0.99 and 0.41, respectively. The random effects model was statistically significant. Discussion The AHUM health state classification system may be used in future research to enable calculation of quality-adjusted life expectancy for applicable health conditions.

  12. Towards a quantitative, measurement-based estimate of the uncertainty in photon mass attenuation coefficients at radiation therapy energies.

    Science.gov (United States)

    Ali, E S M; Spencer, B; McEwen, M R; Rogers, D W O

    2015-02-21

    In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy-i.e. 100 keV (orthovoltage) to 25 MeV-using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ∼0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative 'envelope of uncertainty' of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol 44 R1-22).
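
The optimum energy-independent scaling factor described above can be illustrated as a weighted least-squares fit: choose the single factor s that minimises the chi-square misfit between measured and calculated values. A sketch with assumed toy data (not the authors' code or values):

```python
import numpy as np

def optimal_scale(measured, calculated, sigma):
    """Energy-independent scaling factor s minimising
    chi2(s) = sum(((measured - s * calculated) / sigma)**2);
    this is the closed-form weighted least-squares solution."""
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    m = np.asarray(measured, dtype=float)
    c = np.asarray(calculated, dtype=float)
    return float(np.sum(w * m * c) / np.sum(w * c * c))

# Toy data: three measurement/calculation ratios with 0.5% uncertainty each.
measured = np.array([1.002, 0.998, 1.005])
calculated = np.ones(3)
s = optimal_scale(measured, calculated, np.full(3, 0.005))
```

With equal weights and unit calculated values, s collapses to the mean of the measured ratios, which is a useful sanity check.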

  13. Evaluation of the measurement uncertainty of special-purpose glass measuring instruments

    Institute of Scientific and Technical Information of China (English)

    谭雯

    2016-01-01

    This article evaluates the measurement uncertainty of special-purpose glass measuring instruments: it analyzes the sources of uncertainty present in the measurement process, establishes a measurement model, quantifies the uncertainty components, and derives the combined uncertainty and the expanded uncertainty.
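
The combined and expanded uncertainties mentioned in the record follow the standard GUM recipe: independent standard-uncertainty components add in quadrature, and the expanded uncertainty multiplies the result by a coverage factor. A minimal sketch (component values are assumptions for illustration):

```python
import math

def combined_uncertainty(components):
    """Combined standard uncertainty of independent components (GUM):
    u_c = sqrt(sum(u_i**2))."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(u_c, k=2.0):
    """Expanded uncertainty U = k * u_c; k = 2 gives ~95% coverage."""
    return k * u_c

# Two illustrative components, e.g. repeatability and calibration:
u_c = combined_uncertainty([0.03, 0.04])  # -> 0.05
U = expanded_uncertainty(u_c)             # -> 0.10
```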

  14. Reducing measurement uncertainty drives the use of multiple technologies for supporting metrology

    Science.gov (United States)

    Banke, Bill, Jr.; Archie, Charles N.; Sendelbach, Matthew; Robert, Jim; Slinkman, James A.; Kaszuba, Phil; Kontra, Rick; DeVries, Mick; Solecky, Eric P.

    2004-05-01

    Perhaps never before in semiconductor microlithography has there been such an interest in the accuracy of measurement. This interest places new demands on our in-line metrology systems as well as on the supporting metrology for verification. It also puts a burden on the users and suppliers of new measurement tools, which both challenge and complement existing manufacturing metrology. The metrology community needs to respond to these challenges by using new methods to assess the fab metrologies. An important part of this assessment process is the ability to obtain accepted reference measurements as a way of determining the accuracy and Total Measurement Uncertainty (TMU) of an in-line critical dimension (CD). In this paper, CD can mean any critical dimension, including, for example, such measures as feature height or sidewall angle. This paper describes the trade-offs of in-line metrology systems as well as the limitations of Reference Measurement Systems (RMS). Many factors influence each application, such as feature shape, material properties, proximity, sampling, and critical dimension. These factors, along with the metrology probe size, interaction volume, and probe type such as e-beam, optical beam, and mechanical probe, are considered. As the size of features shrinks below 100 nm, some of the stalwarts of reference metrology come into question, such as the electrically determined transistor gate length. The concept of the RMS is expanded to show how multiple metrologies are needed to achieve the right balance of accuracy and sampling. This is also demonstrated for manufacturing metrology. Various comparisons of CDSEM, scatterometry, AFM, cross-section SEM, electrically determined CDs, and TEM are shown. An example is given which demonstrates the importance of obtaining TMU by balancing accuracy and precision when selecting a manufacturing measurement strategy and optimizing manufacturing metrology. It is also demonstrated how the necessary supporting metrology will
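
Total Measurement Uncertainty (TMU) as used in this literature combines a tool's precision with its accuracy relative to the reference measurement system. A hedged sketch of one common quadrature formulation (the paper's exact definition, e.g. its treatment of regression against the RMS, may differ in detail):

```python
import math

def total_measurement_uncertainty(precision, accuracy):
    """One common TMU formulation: precision and accuracy (both at the
    same coverage, e.g. 3-sigma) combined in quadrature. Sketch only."""
    return math.sqrt(precision ** 2 + accuracy ** 2)

tmu = total_measurement_uncertainty(1.5, 2.0)  # nm -> 2.5 nm
```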

  15. Bernoulli particle filter with observer altitude for maritime radiation source tracking in the presence of measurement uncertainty

    Institute of Scientific and Technical Information of China (English)

    Luo Xiaobo; Fan Hongqi; Song Zhiyong; Fu Qiang

    2013-01-01

    For maritime radiation source target tracking in a particular electronic countermeasures (ECM) environment, there exist two main problems which can deteriorate the tracking performance of traditional approaches. The first problem is the poor observability of the radiation source. The second is the measurement uncertainty, which includes the uncertainty of the target appearing/disappearing and the detection uncertainty (false and missed detections). A novel approach is proposed in this paper for tracking a maritime radiation source in the presence of measurement uncertainty. To address the poor observability of the maritime radiation source target, the observer altitude information is incorporated into the bearings-only tracking (BOT) method, using the radiation source motion restriction, to obtain a unique target localization. The two uncertainties in the ECM environment are then modeled by random finite set (RFS) theory, and the Bernoulli filtering method with observer altitude is adopted to solve the tracking problem of a maritime radiation source in this context. Simulation experiments verify the validity of the proposed approach and demonstrate its superiority over the traditional integrated probabilistic data association (IPDA) method. The tracking performance under different conditions, particularly those involving different durations of the radiation source switching on and off, indicates that the method is robust and effective.
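
The Bernoulli filter referenced above propagates a single probability of target existence alongside the state density. A clutter-free sketch of the existence-probability prediction and missed-detection update (parameter values are illustrative assumptions; the full filter also handles received detections, clutter, and the spatial density):

```python
def predict_existence(q, p_birth=0.01, p_survive=0.99):
    """Bernoulli-filter prediction of the target-existence probability."""
    return p_birth * (1.0 - q) + p_survive * q

def update_existence_miss(q_pred, p_detect=0.9):
    """Existence update when no detection arrives (clutter-free sketch):
    a miss is evidence against existence, so q decreases."""
    return q_pred * (1.0 - p_detect) / (1.0 - q_pred * p_detect)

q = 0.5
q = predict_existence(q)       # 0.01*0.5 + 0.99*0.5 = 0.5
q = update_existence_miss(q)   # 0.5*0.1 / (1 - 0.45) ~ 0.091
```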

  16. Evaluation of uncertainty in the inductive measurement of critical current densities of superconducting films using third-harmonic voltages

    Science.gov (United States)

    Yamasaki, H.; Mawatari, Y.; Nakagawa, Y.; Yamada, H.

    2012-10-01

    Several techniques can be used for inductive measurement of the critical current density Jc of large-area superconducting thin films used in microwave devices and fault-current limiters. The most popular of these methods employs the third-harmonic voltages V3. We have proposed a standard method using V3 for determining Jc under a criterion of electric field E. Here, the uncertainty in the standard method is evaluated. Since the measured Jc is directly proportional to the magnetic field at the upper surface of the superconducting film, the most significant systematic effect is the deviation of the coil-to-film distance Z1 from the prescribed value. The principal origins of this deviation of Z1 are (1) inadequate pressing of the coil onto the film and (2) ice layers occasionally forming between the coil and the protective polyimide film. If these effects are eliminated, uncertainty of Jc originates mainly from (a) uncertainty of the experimental coil coefficient k', which is dominated by uncertainty of the transport Jc, and (b) underestimation of the induced electric field E when using a simple Bean model. For a typical DyBa2Cu3O7 film specimen, the relative combined standard uncertainty in the standard method was evaluated as ˜5%. The effect of the film edge on Jc measurements is also described.

  17. Ground and aircraft-based methane measurements in Siberia: source attribution using tracers and models

    Science.gov (United States)

    Arzoumanian, E.; Paris, J. D.; Pruvost, A.; Peng, S.; Turquety, S.; Berchet, A.; Pison, I.; Helle, J.; Arshinov, M.; Belan, B. D.

    2015-12-01

    Methane (CH4) is the second most important anthropogenic greenhouse gas. It is also naturally emitted by a number of processes, including microbial activity in wetlands, permafrost degradation, and wildfires. Our current understanding of the extent and amplitude of its natural sources, as well as of the large-scale driving factors, remains highly uncertain (Kirschke et al., Nature Geosci., 2013). Furthermore, high-latitude regions are large natural sources of CH4 to the atmosphere. Observing boreal/Arctic CH4 variability and understanding its main driving processes using atmospheric measurements and transport models is the task of this work. YAK-AEROSIB atmospheric airborne campaigns (flights in the tropospheric layer up to 9 km connecting the two cities of Novosibirsk and Yakutsk) and continuous measurements at Fonovaya Observatory (60 km west of Tomsk - 56° 25'07"N, 84° 04'27"E) have been performed in order to provide observational data on the composition of Siberian air. The study is focused on 2012, during which a strong heat wave impacted Siberia, leading to the highest mean daily temperature values on record since the beginning of the 20th century. This abnormal drought led to numerous large forest fires. A chemistry-transport model (CHIMERE), combined with datasets for anthropogenic emissions (EDGAR) and models for wetlands (ORCHIDEE) and wildfires (APIFLAME), is used to determine the contributions of CH4 sources in the region. Recent results concerning CH4 fluxes and their atmospheric variability in the Siberian territory derived from a model-based analysis will be shown and discussed. This work was funded by CNRS (France), the French Ministry of Foreign Affairs, CEA (France), Presidium of RAS (Program No. 4), Branch of Geology, Geophysics and Mining Sciences of RAS (Program No. 5), Interdisciplinary integration projects of Siberian Branch of RAS (No. 35, No. 70, No. 131), Russian Foundation for Basic Research (grants No 14-05-00526, 14-05-00590). Kirschke, S

  18. CFCl3 (CFC-11): UV Absorption Spectrum Temperature Dependence Measurements and the Impact on Atmospheric Lifetime Uncertainty

    Science.gov (United States)

    McGillen, M.; Fleming, E. L.; Jackman, C. H.; Burkholder, J. B.

    2013-12-01

    CFCl3 (CFC-11) is both a major ozone-depleting substance and a potent greenhouse gas that is removed primarily via stratospheric UV photolysis. Uncertainty in the temperature dependence of its UV absorption spectrum is a significant contributing factor to the overall uncertainty in its global lifetime and, thus, in model calculations of stratospheric ozone recovery and climate change. In this work, the CFC-11 UV absorption spectrum was measured over a range of wavelengths (184.95-230 nm) and temperatures (216-296 K). We report a spectrum temperature dependence that is weaker than currently recommended for use in atmospheric models. The impact on its atmospheric lifetime was quantified using the NASA Goddard Space Flight Center 2-D coupled chemistry-radiation-dynamics model and the spectrum parameterization developed in this work. The modeled global annually averaged lifetime was 58.1 ± 0.7 years (2σ uncertainty due solely to the spectrum uncertainty). The lifetime is slightly reduced, and the uncertainty significantly reduced, from that obtained using current UV spectrum recommendations. [Figure captions: CFCl3 (CFC-11) 2-D model results. Left: global annually averaged loss rate coefficient (local lifetime) and the photolysis and reaction contributions. Middle: molecular loss rate and uncertainty limits; the slow and fast profiles were calculated using the 2σ uncertainty estimates in the CFC-11 UV absorption spectrum from this work. Right: CFC-11 concentration profile. Second figure: CFC-11 loss-process contributions to the overall local lifetime uncertainty (2σ) calculated using the 2-D model. Left: results from this work. Right: results using model input from Sander et al. [2011] and updates in SPARC [2013].]

  19. Automatic measurement of compression wood cell attributes in fluorescence microscopy images.

    Science.gov (United States)

    Selig, B; Luengo Hendriks, C L; Bardage, S; Daniel, G; Borgefors, G

    2012-06-01

    This paper presents a new automated method for analyzing compression wood fibers in fluorescence microscopy. Abnormal wood known as compression wood is present in almost every softwood tree harvested. Compression wood fibers show a different cell wall morphology and chemistry compared to normal wood fibers, and their mechanical and physical characteristics are considered detrimental for both construction wood and pulp and paper purposes. There is currently a need for improved methodologies for characterizing lignin distribution in wood cell walls, such as those of compression wood fibers, that will allow for a better understanding of fiber mechanical properties. Traditionally, analysis of fluorescence microscopy images of fiber cross-sections has been done manually, which is time consuming and subjective. Here, we present an automatic method, using digital image analysis, that detects and delineates softwood fibers in fluorescence microscopy images, dividing them into cell lumen, normal, and highly lignified areas. It also quantifies the different areas and measures cell wall thickness. The method is evaluated by comparing the automatic with a manual delineation. While the boundaries between the various fiber wall regions are detected by the automatic method with precision similar to inter- and intra-expert variability, the position of the boundary between lumen and cell wall has a systematic shift that can be corrected. Our method allows for transverse structural characterization of compression wood fibers, which may allow for improved understanding of the micro-mechanical modeling of wood and pulp fibers.

  20. From digital earth to digital neighbourhood: A study of subjective measures of walkability attributes in objectively assessed digital neighbourhood

    Science.gov (United States)

    Qureshi, S.; Ho, C. S.

    2014-02-01

    According to an IEA report (2011), about 23% of the world's CO2 emissions result from transport, and this is one of the few areas where emissions are still rapidly increasing. The use of private vehicles is one of the principal contributors to greenhouse gas emissions from the transport sector. This paper therefore focuses on the shift to more sustainable, low-carbon forms of transportation such as walking. Neighbourhood built-environment attributes may influence walkability. For this study, the authors used a modified version of the "Neighbourhood Environment Walkability Scale" to compare respondents' perceptions regarding attributes of two neighbourhoods of Putrajaya. The 21st century really needs planners to use the Digital Earth concept to go from global to regional to national to very local issues, using integrated, advanced technologies such as earth observation, GIS, virtual reality, etc. For this research, two neighbourhoods of different densities (high and low density) were selected. A total sample of 381 (195 and 186) participants aged between 7 and 65 years was selected. For the subjective measures we used a 54-question questionnaire survey, whereas for the objective measures we used the desktop 9.3 version of the ArcGIS software. Our results show that respondents who reside in the high-walkable neighbourhood, precinct 9 in Putrajaya, rated factors such as residential density, land use mix, proximity to destinations, and street connectivity consistently higher than did respondents of the low-walkable neighbourhood, precinct 8 in Putrajaya.

  1. Different collector types for sampling deposition of polycyclic aromatic hydrocarbons--comparison of measurement results and their uncertainty.

    Science.gov (United States)

    Gladtke, Dieter; Bakker, Frits; Biaudet, Hugues; Brennfleck, Alexandra; Coleman, Peter; Creutznacher, Harald; Van Egmond, Ben F; Hafkenscheid, Theo; Hahne, Frank; Houtzager, Marc M; Leoz-Garziandia, Eva; Menichini, Edoardo; Olschewski, Anja; Remesch, Thomas

    2012-08-01

    Different collector types, sample workup procedures, and analysis methods to measure the deposition of polycyclic aromatic hydrocarbons (PAH) were tested and compared. While the sample workup and analysis methods did not influence the results of PAH deposition measurements, using different collector types changed the measured PAH deposition rates significantly. The results obtained with a funnel-bottle collector showed the highest deposition rates and a low measurement uncertainty. The deposition rates obtained with the wet-only collectors were the lowest at industrial sites and under dry weather conditions. For the open-jar collectors the measurement uncertainty was high. Only at an industrial site with extremely high PAH deposition rates were the results of open-jar collectors comparable to those obtained with funnel-bottle collectors. Thus, if bulk deposition of PAH has to be measured, funnel-bottle combinations proved to be the collectors of choice. These collectors were the only ones that always fulfilled the requirements of European legislation.
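
A deposition rate from a funnel-bottle collector is, in essence, collected mass divided by collector area and exposure time. An illustrative calculation (all parameter values are assumptions, not figures from the paper):

```python
import math

def deposition_rate(mass_ng, funnel_diameter_m, exposure_days):
    """Deposition rate (ng m^-2 day^-1) = collected mass / (area * time)."""
    area_m2 = math.pi * (funnel_diameter_m / 2.0) ** 2
    return mass_ng / (area_m2 * exposure_days)

# Assumed example: 3000 ng of a PAH caught by a 20 cm funnel over 30 days.
r = deposition_rate(3000.0, 0.2, 30.0)  # ~3183 ng m^-2 day^-1
```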

  2. Uncertainty analysis of numerical model simulations and HFR measurements during high energy events

    Science.gov (United States)

    Donncha, Fearghal O.; Ragnoli, Emanuele; Suits, Frank; Updyke, Teresa; Roarty, Hugh

    2013-04-01

    The identification and decomposition of sensor and model shortcomings is a fundamental component of any coastal monitoring and predictive system. In this research, numerical model simulations are combined with high-frequency radar (HFR) measurements to provide insights into the statistical accuracy of the remote sensing unit. A combination of classical tidal analysis and quantitative measures of correlation evaluates the performance of both across the bay. A network of high-frequency radars is deployed within the Chesapeake study site, on the East coast of the United States, as a backbone component of the Integrated Ocean Observing System (IOOS). This system provides real-time synoptic measurements of surface currents in the zonal and meridional directions at hourly intervals in areas where at least two stations overlap, and radial components elsewhere. In conjunction with this, numerical simulations using EFDC (Environmental Fluid Dynamics Code), an advanced three-dimensional model, provide additional details on flows, encompassing both surface dynamics and volumetric transports, while eliminating certain fundamental errors inherent in the HFR system such as geometric dilution of precision (GDOP) and range dependencies. The aim of this research is an uncertainty estimate of both these datasets, allowing for a degree of inaccuracy in both. The analysis focuses on comparisons between both the vector and radial components of flows returned by the HFR relative to numerical predictions, and provides insight into the reported accuracy of both the raw radial data and the post-processed vector current data computed by combining the radial data. Of interest is any loss of accuracy due to this post-processing. Linear regression techniques decompose the surface currents based on the dominant flow processes (tide and wind); statistical analysis and cross-correlation techniques measure agreement between the processed signal and the dominant forcing parameters. The tidal signal

  3. Flow Rates Measurement and Uncertainty Analysis in Multiple-Zone Water-Injection Wells from Fluid Temperature Profiles.

    Science.gov (United States)

    Reges, José E O; Salazar, A O; Maitelli, Carla W S P; Carvalho, Lucas G; Britto, Ursula J B

    2016-07-13

    This work is a contribution to the development of flow sensors in the oil and gas industry. It presents a methodology to measure the flow rates into multiple-zone water-injection wells from fluid temperature profiles and estimate the measurement uncertainty. First, a method to iteratively calculate the zonal flow rates using the Ramey (exponential) model was described. Next, this model was linearized to perform an uncertainty analysis. Then, a computer program to calculate the injected flow rates from experimental temperature profiles was developed. In the experimental part, a fluid temperature profile from a dual-zone water-injection well located in the Northeast Brazilian region was collected. Thus, calculated and measured flow rates were compared. The results proved that linearization error is negligible for practical purposes and the relative uncertainty increases as the flow rate decreases. The calculated values from both the Ramey and linear models were very close to the measured flow rates, presenting a difference of only 4.58 m³/d and 2.38 m³/d, respectively. Finally, the measurement uncertainties from the Ramey and linear models were equal to 1.22% and 1.40% (for injection zone 1); 10.47% and 9.88% (for injection zone 2). Therefore, the methodology was successfully validated and all objectives of this work were achieved.
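
In Ramey-type (exponential) models the injected fluid temperature approaches the geothermal profile exponentially with depth, with a relaxation distance that grows with the mass flow rate; inverting a fitted relaxation distance therefore yields the zonal flow rate. A heavily simplified sketch (steady-state form; cp, U, and r are illustrative assumptions, not values from the paper):

```python
import math

def relaxation_distance(mass_rate, cp=4186.0, U=20.0, r=0.05):
    """Depth scale A = w*cp / (2*pi*r*U) over which the injected fluid
    temperature approaches the geothermal profile (simplified form;
    cp [J/kg/K], U [W/m^2/K], and well radius r [m] are assumed)."""
    return mass_rate * cp / (2.0 * math.pi * r * U)

def mass_rate_from_distance(A, cp=4186.0, U=20.0, r=0.05):
    """Invert the relation: recover the injection rate from a fitted A."""
    return A * 2.0 * math.pi * r * U / cp

w = mass_rate_from_distance(relaxation_distance(2.0))  # round trip -> 2.0 kg/s
```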

  4. Method for Determining the Coalbed Methane Content with Determination the Uncertainty of Measurements

    Science.gov (United States)

    Szlązak, Nikodem; Korzec, Marek

    2016-06-01

    Methane has a bad influence on safety in underground mines as it is emitted to the air during mining works. Appropriate identification of methane hazard is essential to determining methane hazard prevention methods, ventilation systems and methane drainage systems. Methane hazard is identified while roadways are driven and boreholes are drilled. Coalbed methane content is one of the parameters which is used to assess this threat. This is a requirement according to the Decree of the Minister of Economy dated 28 June 2002 on work safety and hygiene, operation and special firefighting protection in underground mines. For this purpose a new method for determining coalbed methane content in underground coal mines has been developed. This method consists of two stages - collecting samples in a mine and testing the sample in the laboratory. The stage of determining methane content in a coal sample in a laboratory is essential. This article presents the estimation of measurement uncertainty of determining methane content in a coal sample according to this methodology.

  5. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty

    Directory of Open Access Journals (Sweden)

    S. Pande

    2015-04-01

    Full Text Available This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. It is thus argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (the number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.
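
The complexity measure described, differences between simulations under resampled input forcings, can be caricatured with a toy bucket model: a smaller storage capacity lets more of the input variability pass through to the output, so the spread across forcing realizations grows. A sketch of that idea (illustrative, not the authors' algorithm):

```python
import numpy as np

def output_spread_complexity(model, forcing_realizations):
    """Proxy for model complexity: average per-time-step spread of model
    output across resampled input forcings."""
    sims = np.array([model(f) for f in forcing_realizations])
    return float(np.mean(np.std(sims, axis=0)))

def bucket(capacity):
    """Toy one-parameter model: runoff = max(rain - capacity, 0)."""
    return lambda rain: np.maximum(rain - capacity, 0.0)

rng = np.random.default_rng(0)
forcings = [rng.gamma(2.0, 5.0, size=100) for _ in range(50)]
c_small = output_spread_complexity(bucket(5.0), forcings)   # low storage
c_large = output_spread_complexity(bucket(20.0), forcings)  # high storage
# Lower storage capacity -> more input variability reaches the output.
```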

  6. Advances in quality control for dioxins monitoring and evaluation of measurement uncertainty from quality control data.

    Science.gov (United States)

    Eppe, Gauthier; De Pauw, Edwin

    2009-08-01

    This paper describes an application of multivariate and multilevel quality control charts with the aim of improving the internal quality control (IQC) procedures for monitoring dioxins and dioxin-like PCB analysis in food. Dioxin analysts use the toxic equivalent (TEQ) concept to assess the toxicity potential of a mixture of dioxin-like compounds. The TEQ approach requires quantifying 29 dioxin-like compounds individually. Monitoring the congeners separately on univariate QC charts is misleading owing to the increased false alarm rate. We propose to subdivide the TEQ value into 3 sub-groups and to control the 3 variables simultaneously in a T² chart. When a T² value exceeds the upper control limit, it acts as a warning that triggers additional investigations of individual congeners. We discuss the minimum number of runs required to reliably estimate the QC chart parameters, and we suggest using data from multilevel QC charts to properly characterize the standard deviations and correlation coefficients. Moreover, the univariate QC chart can be sensitised to detect systematic errors by using the exponentially weighted moving average (EWMA) technique. The EWMA chart provides additional guidance on setting appropriate criteria to control the method bias and to support trend analysis. Finally, we present an estimate of measurement uncertainty obtained by computing the accuracy profile retrospectively from the QC data generated, and we discuss assessment of compliance with regulatory maximum levels.
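
The T² and EWMA monitoring described above can be sketched with the standard formulas: Hotelling's T² for the three TEQ sub-groups monitored jointly, and an EWMA recursion for drift detection (all numbers below are assumed toy values, not data from the study):

```python
import numpy as np

def hotelling_t2(x, mean, cov):
    """Hotelling T^2 for one multivariate QC observation:
    T^2 = (x - mean)^T S^{-1} (x - mean)."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(d @ np.linalg.solve(cov, d))

def ewma(series, lam=0.2, z0=0.0):
    """EWMA recursion z_t = lam*x_t + (1-lam)*z_{t-1}, sensitive to the
    small systematic drifts a Shewhart chart tends to miss."""
    z, out = z0, []
    for x in series:
        z = lam * x + (1.0 - lam) * z
        out.append(z)
    return out

# Three correlated TEQ sub-group results monitored jointly (toy numbers):
mean = np.array([1.0, 2.0, 3.0])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
t2 = hotelling_t2([1.1, 2.1, 2.9], mean, cov)  # compare against an UCL
```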

  7. Potential for improved radiation thermometry measurement uncertainty through implementing a primary scale in an industrial laboratory

    Science.gov (United States)

    Willmott, Jon R.; Lowe, David; Broughton, Mick; White, Ben S.; Machin, Graham

    2016-09-01

    A primary temperature scale requires realising a unit in terms of its definition. For high temperature radiation thermometry in terms of the International Temperature Scale of 1990 this means extrapolating from the signal measured at the freezing temperature of gold, silver or copper using Planck’s radiation law. The difficulty in doing this means that primary scales above 1000 °C require specialist equipment and careful characterisation in order to achieve the extrapolation with sufficient accuracy. As such, maintenance of the scale at high temperatures is usually only practicable for National Metrology Institutes, and calibration laboratories have to rely on a scale calibrated against transfer standards. At lower temperatures it is practicable for an industrial calibration laboratory to have its own primary temperature scale, which reduces the number of steps between the primary scale and end user. Proposed changes to the SI that will introduce internationally accepted high temperature reference standards might make it practicable to have a primary high temperature scale in a calibration laboratory. In this study such a scale was established by calibrating radiation thermometers directly to high temperature reference standards. The possible reduction in uncertainty to an end user as a result of the reduced calibration chain was evaluated.

  8. Attribution of climate change, vegetation restoration, and engineering measures to the reduction of suspended sediment in the Kejie catchment, southwest China

    Science.gov (United States)

    Ma, X.; Lu, X. X.; van Noordwijk, M.; Li, J. T.; Xu, J. C.

    2014-05-01

    Suspended sediment transport in rivers is controlled by terrain, climate, and human activities. These variables affect hillslope and riverbank erosion at the source, transport velocities and sedimentation opportunities in the river channel, and trapping in reservoirs. The relative importance of those factors varies by context, but the specific attribution to sediment transfer is important for policymaking, and has wide implications on watershed management. In our research, we analyzed data from the Kejie watershed in the upper Salween River (Yunnan Province, China), where a combination of land cover change (reforestation, as well as soil and water conservation measures) and river channel engineering (sand mining and check dam construction) interact with a changing climate. Records (1971-2010) of river flow and suspended sediment loads were combined with five land-use maps from 1974, 1991, 2001, 2006 and 2009. Average annual sediment yield decreased from 13.7 t ha-1 yr-1 to 8.3 t ha-1 yr-1 between the period 1971-1985 and the period 1986-2010. A distributed hydrological model (Soil and Water Assessment Tools, SWAT) was set up to simulate the sediment sourcing and transport process. By recombining land-use and climate data for the two periods in model scenarios, the contribution of these two factors could be assessed with engineering effects derived from residual measured minus modeled transport. Overall, we found that 47.8% of the decrease was due to land-use and land cover change, 19.8% to climate change, resulting in a milder rainfall regime, 26.1% to watershed engineering measures, and the remaining 6.3% was due to the simulation percent bias. Moreover, mean annual suspended sediment yield decreased drastically with the increase of forest cover, making diverse forest cover one of the most effective ecosystems to control erosion. 
For consideration of stakeholders and policymakers, we also discuss at length the modeling uncertainty and implications for future soil

  9. A computer simulation platform for the estimation of measurement uncertainties in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Hiller, Jochen; Reindl, Leonard M

    2012-01-01

    The knowledge of measurement uncertainty is of great importance in conformance testing in production. The tolerance limit for production must be reduced by the amount of measurement uncertainty to ensure that the parts are in fact within the tolerance. Over the last 5 years, industrial X-ray computed tomography (CT) has become an important technology for dimensional quality control. In this paper a computer simulation platform is presented which is able to investigate error sources in dimensional CT measurements. The typical workflow in industrial CT metrology is described, and methods are presented that take into account the main error sources for the measurement. This method has the potential to deal with all kinds of systematic and random errors that influence a dimensional CT measurement. A case study demonstrates the practical application of the VCT simulator using numerically generated CT data and statistical...

  10. The influence of uncertainties of measurements in laboratory performance evaluation using an intercomparison program of radionuclide assays in environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Tauhata, Luiz; Elizabeth Couto Machado Vianna, Maria; Eduardo de Oliveira, Antonio; Cristina de Melo Ferreira, Ana; Julia Camara da Silva Braganca, Maura; Faria Clain, Almir [all at: Instituto de Radioprotecao e Dosimetria, Laboratorio Nacional de Metrologia das Radiacoes Ionizantes, LNMRI/IRD/CNEN-Avenida Salvador Allende s/n, Recreio, CEP:22780-160, Rio de Janeiro, RJ (Brazil)]. E-mail: tauhata@ird.gov.br

    2006-10-15

    To show the influence of measurement uncertainties on the performance evaluation of laboratories, data from 42 comparison runs were evaluated using two statistical criteria: the normalized standard deviation, D, used by the US EPA, which mainly takes accuracy into account, and the normalized deviation, E, which includes the individual laboratory uncertainty and is used for performance evaluation in the key comparisons of the BIPM. The results show that evaluating the data by the two criteria yields significantly different assessments of laboratory performance for each radionuclide assay when a large quantity of data is analysed.
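The two performance scores described above can be sketched as follows. The exact formulas and the |E_n| <= 1 acceptance threshold are assumptions based on standard proficiency-testing practice (ISO 13528-style statistics), not taken from this paper:

```python
import math

def d_score(lab_value, ref_value, ref_value_sd):
    """Normalized deviation D (US EPA style): deviation from the
    reference value relative to its standard deviation (assumed form)."""
    return (lab_value - ref_value) / ref_value_sd

def en_score(lab_value, ref_value, u_lab, u_ref):
    """E_n score (BIPM key-comparison style): deviation normalized by
    the combined expanded uncertainties; |E_n| <= 1 is satisfactory."""
    return (lab_value - ref_value) / math.sqrt(u_lab**2 + u_ref**2)

# Hypothetical example: a lab reports 10.4 Bq/kg (U = 0.5) against a
# reference of 10.0 Bq/kg (U = 0.3); E_n ~ 0.69, i.e. satisfactory.
print(round(en_score(10.4, 10.0, 0.5, 0.3), 2))
print(round(d_score(10.4, 10.0, 0.2), 6))
```

Because E includes each laboratory's own uncertainty while D does not, the two scores can rank the same laboratory differently, which is the effect the paper quantifies.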

  11. Uncertainty Evaluation of the Measurement Results of Measuring Instruments

    Institute of Scientific and Technical Information of China (English)

    孙宇恒

    2013-01-01

    The uncertainty of the measurement result for the indication error of a measuring instrument must be evaluated in combination with the measurement standard apparatus and the method used; the result is generally expressed as a relative expanded uncertainty Urel with coverage factor k = 2. In general, for verification, the measurement uncertainty should be less than 1/3 of the instrument's maximum permissible error; for pattern (type approval) verification, it should be less than 1/5 of the maximum permissible error. Verification of the measurement uncertainty of a measurement standard refers to checking the reasonableness of the stated uncertainty.
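The acceptance rule above (uncertainty below 1/3, or 1/5, of the error limit) can be sketched as a minimal check; the numeric example is purely illustrative:

```python
def uncertainty_acceptable(u_expanded, mpe, type_approval=False):
    """Rule of thumb described above: the expanded uncertainty of the
    result should not exceed 1/3 of the instrument's maximum
    permissible error (MPE), or 1/5 for pattern (type approval)
    verification."""
    limit = mpe / 5 if type_approval else mpe / 3
    return u_expanded <= limit

# A gauge with MPE = 0.03 mm verified with U = 0.008 mm passes the
# 1/3 rule (limit 0.010 mm) but fails the 1/5 rule (limit 0.006 mm).
print(uncertainty_acceptable(0.008, 0.03))        # True
print(uncertainty_acceptable(0.008, 0.03, True))  # False
```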

  12. Measurement uncertainty for the determination of amphetamines in urine by liquid-phase microextraction and gas chromatography-mass spectrometry.

    Science.gov (United States)

    Franco de Oliveira, Sarah Carobini Werner de Souza Eller; Yonamine, Mauricio

    2016-08-01

    A gas chromatography-mass spectrometry method for the determination of amphetamines in urine samples by liquid-phase microextraction was validated, including calculation of the measurement uncertainty. After extraction in the three-phase mode, the acceptor phase was withdrawn from the fiber and the residue was derivatized with trifluoroacetic anhydride. The method proved to be simple and rapid, and it required a very small amount of organic solvent for extraction. The limits of detection were 10 and 20 μg/L for amphetamine and methamphetamine, respectively. The calibration curves were linear over the specified range (20 μg/L to 1400 μg/L; r(2)>0.99). The method proved to be both precise and accurate, and a relative combined uncertainty of 2% was calculated. In order of importance, the factors that contributed most to the method uncertainty were analyte concentration, sample volume, trueness, and method precision.

  13. Measurement of Uncertainty for Aqueous Ethanol Wet-Bath Simulator Solutions Used with Evidential Breath Testing Instruments.

    Science.gov (United States)

    Hwang, Rong-Jen; Beltran, Jada; Rogers, Craig; Barlow, Jeremy; Razatos, Gerasimos

    2016-09-01

    Aqueous ethanol wet-bath simulator solutions are used to perform calibration adjustments, calibration checks, proficiency testing, and inspection of breath alcohol instruments. The Toxicology Bureau of the New Mexico Department of Health has conducted a study to estimate the measurement uncertainty for the preparation and testing of these wet-bath simulator solutions. The measurand is identified as the mass concentration of ethanol (g/100 mL) determined through dual capillary column headspace gas chromatography with flame ionization detection. Three groups of uncertainty sources were used in the estimation of the aqueous ethanol wet-bath simulator solution uncertainty: GC calibration adjustment, GC analytical, and certified reference material. The standard uncertainties for these sources were combined using the method of root-sum-squares to give uc = 0.8598%. The combined standard uncertainty was expanded to U = 1.7% to reflect a confidence level of 95% using a coverage factor of 2. This estimation applies to all aqueous ethanol wet-bath simulator solution concentrations produced by this laboratory.
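The root-sum-squares combination and k = 2 expansion described above can be sketched as follows. The three component values are hypothetical stand-ins, not the laboratory's actual budget entries:

```python
import math

# Hypothetical relative standard uncertainties (%) for the three
# sources named above: GC calibration adjustment, GC analytical
# repeatability, and the certified reference material.
components = {"gc_calibration": 0.5, "gc_analytical": 0.6, "crm": 0.3}

# Combine by root-sum-squares, then expand with coverage factor k = 2
# for approximately 95% confidence, as in the paper.
u_c = math.sqrt(sum(u**2 for u in components.values()))
U = 2 * u_c
print(f"uc = {u_c:.4f}%, U = {U:.1f}%")
```

With the paper's own combined value of uc = 0.8598%, the same k = 2 expansion gives U = 1.7% after rounding.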

  14. Towards a standardized processing of Net Ecosystem Exchange measured with eddy covariance technique: algorithms and uncertainty estimation

    Directory of Open Access Journals (Sweden)

    D. Papale

    2006-01-01

    Full Text Available The eddy covariance technique for measuring CO2, water and energy fluxes between the biosphere and the atmosphere is widely used in various regional networks. Currently more than 250 eddy covariance sites are active around the world, measuring carbon exchange at high temporal resolution across different biomes and climatic conditions. In this paper a new standardized set of corrections is introduced, and the uncertainties associated with these corrections are assessed for eight different forest sites in Europe, with a total of 12 yearly datasets. The uncertainties introduced in the two components GPP (Gross Primary Production) and TER (Terrestrial Ecosystem Respiration) are also discussed and a quantitative analysis presented. Through a factorial analysis we find that, in general, the uncertainties introduced by the different corrections are additive without interactions, and that the heuristic u*-correction introduces the largest uncertainty. The results show that standardized data processing is needed for an effective comparison across biomes and for underpinning inter-annual variability. The methodology presented in this paper has also been integrated into the European database of eddy covariance measurements.

  15. Quantifying Urban Natural Gas Leaks from Street-level Methane Mapping: Measurements and Uncertainty

    Science.gov (United States)

    von Fischer, J. C.; Ham, J. M.; Griebenow, C.; Schumacher, R. S.; Salo, J.

    2013-12-01

    Leaks from the natural gas pipeline system are a significant source of anthropogenic methane in urban settings. Detecting and repairing these leaks will reduce the energy and carbon footprints of our cities. Gas leaks can be detected from spikes in street-level methane concentrations measured by analyzers deployed on vehicles. While a spike in methane concentration indicates a leak, an algorithm (e.g., inverse model) must be used to estimate the size of the leak (i.e., flux) from concentration data and supporting meteorological information. Unfortunately, this drive-by approach to leak quantification is confounded by the complexity of urban roughness, changing weather conditions, and other incidental factors (e.g., traffic, vehicle speed, etc.). Furthermore, the vehicle might only pass through the plume one to three times during routine mapping. The objective of this study was to conduct controlled release experiments to better quantify the relationship between mobile methane concentration measurements and the size and location of the emission source (e.g., pipeline leakage) in an urban environment. A portable system was developed that could release methane at known rates between 10 and 40 LPM while maintaining concentrations below the lower explosive limit. A mapping vehicle was configured with fast response methane analyzers, GPS, and meteorological instruments. Portable air-sampling tripods were fabricated that could be deployed at defined distances downwind from the release point and automatically triggered to collect grab samples. The experimental protocol was as follows: (1) identify an appropriate release point within a city, (2) release methane at a known rate, (3) measure downwind street-level concentrations with the vehicle by making multiple passes through the plume, and (4) collect supporting concentration and meteorological data with the static tripod samplers deployed in the plume. Controlled release studies were performed at multiple locations and

  16. On Uncertainty of Measurement for Electrocardiographs

    Institute of Scientific and Technical Information of China (English)

    刘兰芳; 冯振清; 常子栋

    2015-01-01

    In this paper, the measurement uncertainty of electrocardiographs was evaluated according to the requirements of JJG 543-2008 Electrocardiographs, using the ECG-6511 electrocardiograph as a case study. The paper discusses methods of evaluating measurement uncertainty during electrocardiograph verification, both by direct measurement of waveform length and by comparison measurement in which the waveform length is adjusted to that of a reference waveform. The results indicate that the measurement uncertainty obtained with the routine verification method meets the requirements of the verification regulation, and that the verification results satisfy the traceability requirements for electrocardiographs.

  17. The effect of random and systematic measurement uncertainties on temporal and spatial upscaling of N2O fluxes

    Science.gov (United States)

    Cowan, Nicholas; Levy, Peter; Skiba, Ute

    2016-04-01

    The addition of reactive nitrogen to agricultural soils in the form of artificial fertilisers or animal waste is the largest global source of anthropogenic N2O emissions. Emission factors are commonly used to evaluate N2O emissions released after the application of nitrogen fertilisers on a global scale, based on records of fertiliser use. Currently these emission factors are estimated primarily from experiments in which flux chamber methodology is used to estimate annual cumulative fluxes of N2O after nitrogen fertiliser applications on agricultural soils. The use of the eddy covariance method to measure N2O and estimate emission factors is also becoming more common in the flux community as modern rapid gas analyser instruments advance. The aim of the presentation is to highlight weaknesses and potential systematic biases in current flux measurement methodology. This is important for GHG accounting and for accurate model calibration and verification. The growing interest in top-down / bottom-up comparisons of tall tower and conventional N2O flux measurements is also an area of research in which the uncertainties in flux measurements need to be properly quantified. The large and unpredictable spatial and temporal variability of N2O fluxes from agricultural soils leads to a significant source of uncertainty in emission factor estimates. N2O flux measurements typically show poor relationships with explanatory co-variates. The true uncertainties in flux measurements at the plot scale are often difficult to propagate to the field scale and the annual time scale. This results in very uncertain cumulative flux (emission factor) estimates. Cumulative fluxes estimated using flux chamber and eddy covariance methods can also differ significantly, which complicates the matter further. In this presentation, we examine some effects that spatial and temporal variability of N2O fluxes can have on the estimation of emission factors and describe how

  18. Sensitivity of Displaced-Beam Scintillometer Measurements of Area-Average Heat Fluxes to Uncertainties in Topographic Heights

    CERN Document Server

    Gruber, Matthew; Hartogensis, Oscar

    2014-01-01

    Displaced-beam scintillometer measurements of the turbulence inner-scale length $l_o$ and refractive index structure function $C_n^2$ resolve area-average turbulent fluxes of heat and momentum through the Monin-Obukhov similarity equations. Sensitivity studies have been produced for the use of displaced-beam scintillometers over flat terrain. Many real field sites feature variable topography. We develop here an analysis of the sensitivity of displaced-beam scintillometer derived sensible heat fluxes to uncertainties in spatially distributed topographic measurements. Sensitivity is shown to be concentrated in areas near the center of the beam and where the underlying topography is closest to the beam height. Uncertainty may be decreased by taking precise topographic measurements in these areas.

  19. Analysis of Uncertainties in Protection Heater Delay Time Measurements and Simulations in Nb$_{3}$Sn High-Field Accelerator Magnets

    CERN Document Server

    Salmi, Tiina; Marchevsky, Maxim; Bajas, Hugo; Felice, Helene; Stenvall, Antti

    2015-01-01

    The quench protection of superconducting high-field accelerator magnets is presently based on protection heaters, which are activated upon quench detection to accelerate the quench propagation within the winding. Estimations of the heater delay to initiate a normal zone in the coil are essential for the protection design. During the development of Nb3Sn magnets for the LHC luminosity upgrade, protection heater delays have been measured in several experiments, and a new computational tool CoHDA (Code for Heater Delay Analysis) has been developed for heater design. Several computational quench analyses suggest that the efficiency of the present heater technology is on the borderline of protecting the magnets. Quantifying the inevitable uncertainties related to the measured and simulated delays is therefore of pivotal importance. In this paper, we analyze the uncertainties in the heater delay measurements and simulations using data from five impregnated high-field Nb3Sn magnets with different heater geometries. ...

  20. Sensitivity of large-aperture scintillometer measurements of area-average heat fluxes to uncertainties in topographic heights

    Directory of Open Access Journals (Sweden)

    M. A. Gruber

    2014-01-01

    Full Text Available Scintillometer measurements allow for estimations of the refractive index structure parameter Cn^2 over large areas in the atmospheric surface layer. Turbulent fluxes of heat and momentum are inferred through coupled sets of equations derived from the Monin–Obukhov similarity hypothesis. One-dimensional sensitivity functions have been produced that relate the sensitivity of heat fluxes to uncertainties in single values of beam height over homogeneous and flat terrain. However, real field sites include variable topography and heterogeneous surfaces. We develop here the first analysis of the sensitivity of scintillometer derived sensible heat fluxes to uncertainties in spatially distributed topographic measurements. For large-aperture scintillometers and independent friction velocity u* measurements, sensitivity is shown to be concentrated in areas near the center of the beam path and where the underlying topography is closest to the beam height. Uncertainty may be greatly reduced by focusing precise topographic measurements in these areas. A new two-dimensional variable terrain sensitivity function is developed for quantitative error analysis. This function is compared with the previous one-dimensional sensitivity function for the same measurement strategy over flat and homogeneous terrain. Additionally, a new method of solution to the set of coupled equations is produced that eliminates computational error. The results are produced using a new methodology for error analysis involving distributed parameters that may be applied in other disciplines.

  1. Attribution and evolution of ozone from Asian wild fires using satellite and aircraft measurements during the ARCTAS campaign

    Directory of Open Access Journals (Sweden)

    R. Dupont

    2012-01-01

    Full Text Available We use ozone and carbon monoxide measurements from the Tropospheric Emission Spectrometer (TES), model estimates of ozone, CO, and ozone precursors from the Real-time Air Quality Modeling System (RAQMS), and data from the NASA DC8 aircraft to characterize the source and dynamical evolution of ozone and CO in Asian wildfire plumes during the spring 2008 ARCTAS campaign. On 19 April, the NASA DC8 O3 and aerosol Differential Absorption Lidar (DIAL) observed two biomass burning plumes originating from north-western Asia (Kazakhstan) and south-eastern Asia (Thailand) that advected eastward over the Pacific, reaching North America in 10 to 12 days. Using both TES observations and RAQMS chemical analyses, we track the wildfire plumes from their source to the ARCTAS DC8 platform. In addition to photochemical production from ozone precursors, we find that exchange between the stratosphere and the troposphere is a major factor influencing O3 concentrations in both plumes. For example, the region of the Kazakhstan and Siberian plumes at 55 degrees North is one of significant springtime stratosphere-troposphere exchange, and stratospheric air influences the Thailand plume after it is lofted to high altitudes over the Himalayas. Using comparisons of the model with the aircraft and satellite measurements, we estimate that the Kazakhstan plume increased O3 and CO mixing ratios by approximately 6.4 ppbv and 38 ppbv in the lower troposphere (2 to 6 km), and the Thailand plume increased O3 and CO mixing ratios by approximately 11 ppbv and 71 ppbv in the upper troposphere (8 to 12 km). However, there are significant sources of uncertainty in these estimates that point to the need for future improvements in both model and satellite observations. For example, it is challenging to characterize the fraction of air parcels from the stratosphere versus those from the

  2. Measuring reliability under epistemic uncertainty:Review on non-probabilistic reliability metrics

    Institute of Scientific and Technical Information of China (English)

    Kang Rui; Zhang Qingyuan; Zeng Zhiguo; Enrico Zio; Li Xiaoyang

    2016-01-01

    In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimates of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom; otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.

  3. Model-based Type B uncertainty evaluations of measurement towards more objective evaluation strategies

    NARCIS (Netherlands)

    M. Boumans

    2013-01-01

    This article proposes a more objective Type B evaluation. This can be achieved when Type B uncertainty evaluations are model-based. This implies, however, grey-box modelling and validation instead of white-box modelling and validation which are appropriate for Type A evaluation.

  4. Measures of Model Uncertainty in the Assessment of Primary Stresses in Ship Structures

    DEFF Research Database (Denmark)

    Östergaard, Carsten; Dogliani, Mario; Guedes Soares, Carlos;

    1996-01-01

    The paper considers various models and methods commonly used for linear elastic stress analysis and assesses the uncertainty involved in their application to the analysis of the distribution of primary stresses in the hull of a containership example, through statistical evaluations of the results...

  5. An Evaluation of Test and Physical Uncertainty of Measuring Vibration in Wooden Junctions

    DEFF Research Database (Denmark)

    Dickow, Kristoffer Ahrens; Kirkegaard, Poul Henning; Andersen, Lars Vabbersgaard

    2012-01-01

    In the present paper a study of test and material uncertainty in modal analysis of certain wooden junctions is presented. The main structure considered here is a T-junction made from a particleboard plate connected to a spruce beam of rectangular cross section. The size of the plate is 1.2 m by 0...

  6. The uncertainties calculation of acoustic method for measurement of dissipative properties of heterogeneous non-metallic materials

    Directory of Open Access Journals (Sweden)

    Мaryna O. Golofeyeva

    2015-12-01

    Full Text Available The effective use of heterogeneous non-metallic materials and structures requires the measurement of reliable values of their dissipation characteristics, as well as of the common factors governing their change during loading. Aim: The aim of this study is to prepare the budget for the measurement uncertainty of the dissipative properties of composite materials. Materials and Methods: The method used to study the vibrational energy dissipation characteristics, based on the coupling between the vibration damping decrement and the acoustic velocity in a non-metallic heterogeneous material, is reviewed. The proposed method allows finding the dependence of damping on vibration amplitude and on the frequency of the strain-stress state of the material. Results: The accuracy of the measurement method was investigated for the determination of the vibration damping decrement in synthegran. The international approach for evaluating measurement quality is used, comprising the internationally accepted rules for expressing and summing uncertainties; these rules serve as an internationally acknowledged measure of confidence in measurement results, including testing. The uncertainty budget of the acoustic method for measuring the dissipative properties of materials was compiled. Conclusions: Two groups of error sources in the measurement of the dissipative properties of materials were identified. The first group comprises variation of the calibrated-impact parameters within tolerance limits, displacement of the sensor on repeated placement at the measurement point, variation in the thickness of the contact-agent layer due to uneven pressing of the transducers against the control surface, reading inaccuracy, etc. The second group is linked with errors in measuring the density and Poisson's ratio, the distance between sensors, and the time difference between the signals of the vibroacoustic sensors.

  7. Influence of measurement uncertainties on fractional solubility of iron in mineral aerosols over the oceans

    Science.gov (United States)

    Meskhidze, Nicholas; Johnson, Matthew S.; Hurley, David; Dawson, Kyle

    2016-09-01

    The atmospheric supply of mineral dust iron (Fe) plays a crucial role in the Earth's biogeochemical cycle and is of specific importance as a micronutrient in the marine environment. Observations show several orders of magnitude of variability in the fractional solubility of Fe in mineral dust aerosols, making it hard to assess the role of mineral dust in the global ocean biogeochemical Fe cycle. In this study we compare the operational solubility of mineral dust aerosol Fe associated with the flow-through leaching protocol to the results of the global 3-D chemical transport model GEOS-Chem. According to the protocol, aerosol Fe is defined as soluble by first leaching mineral dust with deionized water through a 0.45 μm pore size membrane, followed by acidification and storage of the leachate over a long period of time prior to analysis. To estimate the uncertainty in soluble Fe results introduced by the flow-through leaching protocol, we prescribe an average 50% (range of 30-70%) fractional solubility to sub-0.45 μm sized mineral dust particles that may inadvertently pass the filter and end up in the acidified (at pH ∼ 1.7) leachate over a period of a couple of months. In the model, the fractional solubility of Fe is either explicitly calculated using complex mineral aerosol Fe dissolution equations, or prescribed to be 1% or 4%, values often used by global ocean biogeochemical Fe cycle models to reproduce the broad characteristics of the presently observed ocean dissolved iron distribution. Calculations show that the fractional solubility of Fe derived through the flow-through leaching is higher compared to the model results. The largest differences (∼40%) are predicted to occur farther away from the dust source regions, over areas where sub-0.45 μm sized mineral dust particles contribute a larger fraction of the total mineral dust mass. This study suggests that different methods used in soluble Fe measurements and inconsistencies in the operational definition of

  8. Trends of solar ultraviolet irradiance at Barrow, Alaska, and the effect of measurement uncertainties on trend detection

    Directory of Open Access Journals (Sweden)

    G. Bernhard

    2011-12-01

    Full Text Available Spectral ultraviolet (UV) irradiance has been observed near Barrow, Alaska (71° N, 157° W) between 1991 and 2011 with an SUV-100 spectroradiometer. The instrument was historically part of the US National Science Foundation's UV Monitoring Network and is now a component of NSF's Arctic Observing Network. From these measurements, trends in monthly average irradiance and their uncertainties were calculated. The analysis focuses on two quantities: the UV Index (which is affected by atmospheric ozone concentrations) and irradiance at 345 nm (which is virtually insensitive to ozone). Uncertainties of trend estimates depend on variations in the data due to (1) natural variability, (2) systematic and random errors of the measurements, and (3) uncertainties caused by gaps in the time series. Using radiative transfer model calculations, systematic errors of the measurements were detected and corrected. Different correction schemes were tested to quantify the sensitivity of the trend estimates to the treatment of systematic errors. Depending on the correction method, estimates of decadal trends changed between 1.5% and 2.9%. Uncertainties in the trend estimates caused by error sources (2) and (3) were set in relation to the overall uncertainty of the trend determinations. Results show that these error sources are only relevant for February, March, and April, when natural variability is low due to high surface albedo. This method of addressing measurement uncertainties in time series analysis is also applicable to other geophysical parameters. Trend estimates varied between −14% and +5% per decade and were significant (95.45% confidence level) only for the month of October. Depending on the correction method, October trends varied between −11.4% and −13.7% for irradiance at 345 nm and between −11.7% and −14.1% for the UV Index. These large trends are consistent with trends in short-wave (0.3–3.0 μm) solar irradiance measured with pyranometers at NOAA

  9. Measurement of air-refractive-index fluctuation from laser frequency shift with uncertainty of order 10^-9

    Science.gov (United States)

    Banh Quoc, Tuan; Ishige, Masashi; Ohkubo, Yuria; Aketagawa, Masato

    2009-12-01

    In previous work (Ishige et al 2009 Meas. Sci. Technol. 20 084019), we presented a method of measuring the relative air-refractive-index fluctuation (Δn_air) from the laser frequency shift with a measurement uncertainty of order 10^-8, using a phase modulation homodyne interferometer (Basile et al 1991 Metrologia 28 455) supported by an ultralow thermal expansion material (ULTEM) and an external cavity laser diode (ECLD). In this paper, an improvement in the uncertainty of the Δn_air measurement is presented. The improved method is based on a Fabry-Perot cavity constructed on the ULTEM, which has a thermal expansion coefficient of 2 × 10^-8 K^-1, and an ECLD. The Pound-Drever-Hall method (Drever et al 1983 Appl. Phys. B 31 97) is used to control the ECLD frequency to track the resonance of the cavity, so that Δn_air can be derived from the ECLD frequency shift. The estimated measurement uncertainty of Δn_air over a short time (~150 s) in the experiment is of order 2.5 × 10^-9 or less.
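Under the assumption that the cavity length is held fixed by the ULTEM spacer, the Fabry-Perot resonance condition implies that a fractional frequency shift maps directly onto the refractive-index fluctuation, Δn ≈ -n·Δf/f. A minimal numerical sketch with illustrative values (not the paper's data):

```python
# For a fixed-length Fabry-Perot cavity the resonance frequency is
# f = m*c / (2*n*L), so with L held constant by the low-expansion
# spacer a fractional frequency shift maps onto a refractive-index
# fluctuation: delta_n = -n * delta_f / f  (n ~ 1 for air).
f_laser = 473.6e12      # Hz, illustrative optical frequency (~633 nm)
delta_f = -1.2e6        # Hz, hypothetical measured frequency shift
n_air = 1.000271        # typical refractive index of air

delta_n = -n_air * delta_f / f_laser
print(f"delta_n = {delta_n:.2e}")
```

With these illustrative numbers the fluctuation comes out in the low 10^-9 range, the same order as the uncertainty quoted above.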

  10. New Uncertainty Measure of Rough Fuzzy Sets and Entropy Weight Method for Fuzzy-Target Decision-Making Tables

    Directory of Open Access Journals (Sweden)

    Huani Qin

    2014-01-01

    Full Text Available In the rough fuzzy set theory, the rough degree is used to characterize the uncertainty of a fuzzy set, and the rough entropy of a knowledge is used to depict the roughness of a rough classification. Both of them are effective, but they are not accurate enough. In this paper, we propose a new rough entropy of a rough fuzzy set combining the rough degree with the rough entropy of a knowledge. Theoretical studies and examples show that the new rough entropy of a rough fuzzy set is suitable. As an application, we introduce it into a fuzzy-target decision-making table and establish a new method for evaluating the entropy weight of attributes.

  11. Size measurement uncertainties of near-monodisperse, near-spherical nanoparticles using transmission electron microscopy and particle-tracking analysis

    Science.gov (United States)

    De Temmerman, Pieter-Jan; Verleysen, Eveline; Lammertyn, Jeroen; Mast, Jan

    2014-10-01

    Particle-tracking analysis (PTA) in combination with systematic imaging, automatic image analysis, and automatic data processing is validated for size measurements. Transmission electron microscopy (TEM) in combination with a systematic selection procedure for unbiased random image collection, semiautomatic image analysis, and data processing is validated for size, shape, and surface topology measurements. PTA is investigated as an alternative to TEM for the determination of particle size in the framework of the EC definition of nanomaterial. The intra-laboratory validation study assessing the precision and accuracy of the TEM and PTA methods consists of series of measurements on three gold reference materials with mean area-equivalent circular diameters of 8.9 nm (RM-8011), 27.6 nm (RM-8012), and 56.0 nm (RM-8013), and two polystyrene materials with modal hydrodynamic diameters of 102 nm (P1) and 202 nm (H1). By achieving a high level of automation, PTA is shown to give precise and unbiased results for the modal hydrodynamic diameter in the size range between 30 and 200 nm, and TEM is shown to give precise and unbiased results for the mean area-equivalent circular diameter in the size range between 8 and 200 nm, for the investigated near-monomodal, near-spherical materials. The expanded uncertainties of PTA are about 9% and are determined mainly by the repeatability uncertainty. This uncertainty is two times higher than the expanded uncertainty of 4% obtained by TEM for analyses of identical materials. For the investigated near-monomodal and near-spherical materials, PTA can be used as an alternative to TEM for measuring particle size, with the exception of the 8.9 nm gold, because this material has a size below the detection limit of PTA.

  12. The Huge Reduction in Adult Male Mortality in Belarus and Russia: Is It Attributable to Anti-Alcohol Measures?

    Directory of Open Access Journals (Sweden)

    Pavel Grigoriev

    Full Text Available Harmful alcohol consumption has long been recognized as being the major determinant of male premature mortality in the European countries of the former USSR. Our focus here is on Belarus and Russia, two Slavic countries which continue to suffer enormously from the burden of the harmful consumption of alcohol. However, after a long period of deterioration, mortality trends in these countries have been improving over the past decade. We aim to investigate to what extent the recent declines in adult mortality in Belarus and Russia are attributable to the anti-alcohol measures introduced in these two countries in the 2000s.We rely on the detailed cause-specific mortality series for the period 1980-2013. Our analysis focuses on the male population, and considers only a limited number of causes of death which we label as being alcohol-related: accidental poisoning by alcohol, liver cirrhosis, ischemic heart diseases, stroke, transportation accidents, and other external causes. For each of these causes we computed age-standardized death rates. The life table decomposition method was used to determine the age groups and the causes of death responsible for changes in life expectancy over time.Our results do not lead us to conclude that the schedule of anti-alcohol measures corresponds to the schedule of mortality changes. The continuous reduction in adult male mortality seen in Belarus and Russia cannot be fully explained by the anti-alcohol policies implemented in these countries, although these policies likely contributed to the large mortality reductions observed in Belarus and Russia in 2005-2006 and in Belarus in 2012. Thus, the effects of these policies appear to have been modest. We argue that the anti-alcohol measures implemented in Belarus and Russia simply coincided with fluctuations in alcohol-related mortality which originated in the past. If these trends had not been underway already, these huge mortality effects would not have occurred.

  13. The Monte Carlo Method in Measurement Uncertainty Evaluation

    Institute of Scientific and Technical Information of China (English)

    陈雅

    2012-01-01

    This paper introduces the Monte Carlo method and the problem of measurement uncertainty. When it is difficult or inconvenient to evaluate measurement uncertainty with the GUM uncertainty framework, which applies the law of propagation of uncertainty, the Monte Carlo method (MCM) is a practical alternative.
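
The GUM-versus-MCM comparison described above can be illustrated with a short numerical sketch. The measurement model Y = V/I and all input values below are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200_000  # number of Monte Carlo trials

# Hypothetical measurement model Y = V / I (a resistance), with both
# inputs characterized by Gaussian distributions, as in GUM Supplement 1.
V = rng.normal(10.0, 0.05, M)   # volts, standard uncertainty 0.05 V
I = rng.normal(2.0, 0.01, M)    # amperes, standard uncertainty 0.01 A
Y = V / I

y_mcm = Y.mean()                          # MCM estimate of the measurand
u_mcm = Y.std(ddof=1)                     # MCM standard uncertainty
lo, hi = np.percentile(Y, [2.5, 97.5])    # 95 % coverage interval

# GUM law of propagation of uncertainty for comparison:
# u(Y)^2 = (dY/dV)^2 u(V)^2 + (dY/dI)^2 u(I)^2
u_gum = np.sqrt((0.05 / 2.0) ** 2 + (10.0 * 0.01 / 2.0 ** 2) ** 2)
```

For this nearly linear model the two approaches agree closely; the MCM becomes the safer choice when the model is strongly non-linear or the inputs are non-Gaussian.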

  14. Uncertainty evaluation of fluid dynamic models and validation by gamma ray transmission measurements of the catalyst flow in a FCC cold pilot unity

    Energy Technology Data Exchange (ETDEWEB)

    Teles, Francisco A.S.; Santos, Ebenezer F.; Dantas, Carlos C., E-mail: francisco.teles@ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Centro de Tecnologia e Geociencias. Departamento de Energia Nuclear; Melo, Silvio B., E-mail: sbm@cin.ufpe.br [Universidade Federal de Pernambuco (CIN/UFPE), Recife, PE (Brazil). Centro de Informatica; Santos, Valdemir A. dos, E-mail: vas@unicap.br [Universidade Catolica de Pernambuco (UNICAP), Recife, PE (Brazil). Dept. de Quimica; Lima, Emerson A.O., E-mail: emathematics@gmail.com [Universidade de Pernambuco (POLI/UPE), Recife, PE (Brazil). Escola Politecnica

    2013-07-01

    In this paper, the fluid dynamics of the Fluid Catalytic Cracking (FCC) process is investigated by means of a Cold Flow Pilot Unit (CFPU) constructed in Plexiglas to visualize operational conditions. Axial and radial catalyst profiles were measured by gamma ray transmission in the riser of the CFPU. Standard uncertainty was evaluated in volumetric solid fraction measurements for several concentrations at a given point of the axial profile. Monitoring of the pressure drop in the riser shows good agreement with the measured standard uncertainty data. A further evaluation of the combined uncertainty was applied to the volumetric solid fraction equation using gamma transmission data. The limit condition of catalyst concentration in the riser was defined, and simulation with random numbers provided by MATLAB software tested the uncertainty evaluation. The Guide to the expression of Uncertainty in Measurement (GUM) is based on the law of propagation of uncertainty and on the characterization of the measured quantities by means of either a Gaussian distribution or a t-distribution, which allows measurement uncertainty to be delimited by means of a confidence interval. A variety of supplements to the GUM are being developed, which will progressively enter into effect. The first of these supplements [3] describes an alternative procedure for the calculation of uncertainties: the Monte Carlo Method (MCM). The MCM is an alternative to the GUM, since it characterizes the measured quantities based on random sampling of their probability distribution functions. This paper also explains the basic implementation of the MCM in MATLAB. (author)

  15. Uncertainties in façade fire tests – measurements and modeling

    Directory of Open Access Journals (Sweden)

    Anderson Johan

    2016-01-01

    In this paper a comparison between test and modelling results is performed for two large-scale façade fire testing methods, namely SP Fire 105 and BS 8414-1. In order to compare tests and modelling, the uncertainties have to be quantified both in the tests and in the modelling. Here we present a methodology based on deterministic sampling to quantify uncertainties in the modelling input. We find, in general, good agreement between the models and the test results. Moreover, temperatures estimated by plate thermometers are indicated to be less sensitive to small variations in model input and are thus suitable for this kind of comparison.

  16. Uncertainty analysis and flow measurements in an experimental mock-up of a molten salt reactor concept

    Energy Technology Data Exchange (ETDEWEB)

    Yamaji, Bogdan; Aszodi, Attila [Budapest University of Technology and Economics (Hungary). Inst. of Nuclear Techniques

    2016-09-15

    In this paper, measurement results from the experimental modelling of a molten salt reactor concept are presented along with a detailed uncertainty analysis of the experimental system. Non-intrusive flow measurements are carried out on the scaled and segmented mock-up of a homogeneous, single-region molten salt fast reactor concept. The uncertainty assessment of the particle image velocimetry (PIV) measurement system applied with the scaled and segmented model is presented in detail. The analysis covers the error sources of the measurement system (laser, recording camera, etc.) and the specific conditions (de-warping of measurement planes) originating in the geometry of the investigated domain. The effect of sample size in the ensemble-averaged PIV measurements is discussed as well. An additional two-loop operation mode is also presented, and the analysis of the measurement results confirms that without enhancement, nominal and other operating conditions will lead to strong, unfavourable separation in the core flow. This implies that internal flow distribution structures will be necessary for the optimisation of the core coolant flow. Preliminary CFD calculations are presented to help the design of a perforated plate located above the inlet region. The purpose of the perforated plate is to reduce recirculation near the cylindrical wall and enhance the uniformity of the core flow distribution.

  17. Aerosol effective density measurement using scanning mobility particle sizer and quartz crystal microbalance with the estimation of involved uncertainty

    Science.gov (United States)

    Sarangi, Bighnaraj; Aggarwal, Shankar G.; Sinha, Deepak; Gupta, Prabhat K.

    2016-03-01

    In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from the solution of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and also applied for ambient measurement in New Delhi. We also discuss uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyser (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other one is sent to the QCM to measure the particle mass concentration simultaneously. Based on particle volume derived from size distribution data of the SMPS and mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of inorganic salt particles (for particle count mean diameter (CMD) over a size range 10-478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm-3, values which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm-3, respectively. Using this technique, the percentage contribution of error in the measurement of effective density is calculated to be in the range of 9-17 %. Among the individual uncertainty components, repeatability of particle mass obtained by the QCM, the QCM crystal frequency, CPC counting efficiency, and the equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. Effective density for ambient particles at the beginning of the winter period in New Delhi was measured to be 1.28 ± 0.12 g cm-3
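
The core calculation in this approach, effective density as the ratio of the QCM mass concentration to the SMPS-derived volume concentration, can be sketched as follows. All numbers are invented for illustration and are not taken from the study:

```python
import numpy as np

# Hypothetical SMPS size distribution: bin midpoint diameters (nm) and
# number concentrations (cm^-3).
d_nm = np.array([50.0, 100.0, 200.0])
n_cm3 = np.array([2.0e4, 1.0e4, 1.0e3])

# Particle volume concentration assuming spheres: sum(N_i * pi/6 * d_i^3),
# in cm^3 of particle per cm^3 of air.
d_cm = d_nm * 1e-7
vol_conc = np.sum(n_cm3 * (np.pi / 6.0) * d_cm ** 3)

mass_conc = 18.0e-12                 # g cm^-3, hypothetical QCM reading
rho_eff = mass_conc / vol_conc       # effective density, g cm^-3

# Combined relative uncertainty from the two dominant inputs
# (relative standard uncertainties assumed for illustration):
u_rel = np.sqrt(0.05 ** 2 + 0.08 ** 2)   # QCM mass, SMPS volume
U_rho = 2.0 * rho_eff * u_rel            # expanded uncertainty, k = 2
```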

  18. On the evaluation of a fuel assembly design by means of uncertainty and sensitivity measures

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, Wadim; Sanchez Espinoza, Victor Hugo [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany). Inst. for Neutron Physics and Reactor Technology

    2012-11-15

    This paper will provide results of an uncertainty and sensitivity study in order to calculate parameters of safety related importance like the fuel centerline temperature, the cladding temperature and the fuel assembly pressure drop of a lead-alloy cooled fast system. Applying best practice guidelines, a list of uncertain parameters has been identified. The considered parameter variations are based on the experience gained during fabrication and operation of former and existing liquid metal cooled fast systems as well as on experimental results and on engineering judgment. (orig.)

  19. Aerosol effective density measurement using scanning mobility particle sizer and quartz crystal microbalance with the estimation of involved uncertainty

    Directory of Open Access Journals (Sweden)

    B. Sarangi

    2015-12-01

    In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from the solution of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and also applied for ambient measurement in New Delhi. We also discuss the uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyzer (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other one is sent to the QCM to measure the particle mass concentration simultaneously. Based on particle volume derived from size distribution data of the SMPS and mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of inorganic salt particles (for particle count mean diameter (CMD) over a size range 10 to 478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm−3, which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm−3, respectively. Among individual uncertainty components, repeatability of particle mass obtained by the QCM, the QCM crystal frequency, CPC counting efficiency, and the equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. Effective density for ambient particles at the beginning of the winter period in New Delhi is measured to be 1.28 ± 0.12 g cm−3. It was found that, in general, the mid-day effective density of ambient aerosols increases with an increase in the CMD of particle size measurement, but particle photochemistry is an

  20. Uncertainties in hot-wire measurements of compressible turbulent flows implied by comparisons with laser-induced fluorescence

    Science.gov (United States)

    Mckenzie, R. L.; Logan, P.

    1986-01-01

    A hot-wire anemometer and a new nonintrusive laser-induced fluorescence (LIF) technique are used to survey a Mach 2 turbulent boundary layer. The hot-wire anemometer's ability to accurately measure mass flux, temperature, and density fluctuations in a compressible flow is examined by comparing its results with those obtained using LIF. Several methods of hot-wire calibration are used, and the uncertainties in their measurements of various fluctuating flow parameters are determined. The results show that although a hot-wire operated at high overheat can measure mass flux fluctuations, temperature and density fluctuations are not determined accurately from such measurements. However, a hot-wire operated at multiple overheats can be used to measure static and total temperature fluctuations. The presence of pressure fluctuations and their correlation with density can prevent the use of hot-wire data to determine density fluctuations.

  1. TSS concentration in sewers estimated from turbidity measurements by means of linear regression accounting for uncertainties in both variables.

    Science.gov (United States)

    Bertrand-Krajewski, J L

    2004-01-01

    In order to replace traditional sampling and analysis techniques, turbidimeters can be used to estimate TSS concentration in sewers, by means of sensor- and site-specific empirical equations established by linear regression of on-site turbidity values T with TSS concentrations C measured in corresponding samples. As the ordinary least-squares method is not able to account for measurement uncertainties in both the T and C variables, an appropriate regression method is used to solve this difficulty and to correctly evaluate the uncertainty in TSS concentrations estimated from measured turbidity. The regression method is described, including detailed calculations of the variances and covariance of the regression parameters. An example of application is given for a calibrated turbidimeter used in a combined sewer system, with data collected during three dry weather days. In order to show how the established regression could be used, an independent 24-hour-long dry weather turbidity data series recorded at a 2 min time interval is used, transformed into estimated TSS concentrations, and compared to TSS concentrations measured in samples. The comparison appears satisfactory and suggests that turbidity measurements could replace traditional samples. Further developments, including wet weather periods and other types of sensors, are suggested.
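
One compact way to fit a straight line with uncertainties in both variables is the iterative York (1966) scheme, sketched below with C playing the role of TSS concentration and T of turbidity. The calibration data and uncertainty values are invented for illustration and are not the paper's regression method in detail:

```python
import numpy as np

def york_fit(T, C, sT, sC, n_iter=50):
    """Straight-line fit C = a + b*T accounting for standard
    uncertainties in both variables (York's iterative weights,
    assuming no T-C error correlation)."""
    wT, wC = 1.0 / sT ** 2, 1.0 / sC ** 2
    b = np.polyfit(T, C, 1)[0]               # OLS slope as starting value
    for _ in range(n_iter):
        W = wT * wC / (wT + b ** 2 * wC)     # combined point weights
        Tbar = np.sum(W * T) / np.sum(W)
        Cbar = np.sum(W * C) / np.sum(W)
        U, V = T - Tbar, C - Cbar
        beta = W * (U / wC + b * V / wT)
        b = np.sum(W * beta * V) / np.sum(W * beta * U)
    a = Cbar - b * Tbar
    return a, b

# Invented calibration data: turbidity (NTU) vs. TSS (mg/L).
T = np.array([10.0, 20.0, 30.0, 40.0])
C = np.array([25.0, 47.0, 71.0, 92.0])
a, b = york_fit(T, C, sT=np.full(4, 0.5), sC=np.full(4, 2.0))
```

The fitted (a, b) can then be applied to continuous turbidity records to estimate TSS, as the abstract describes.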

  2. Evaluation of the Uncertainty of Measurement Results of an Electronic Weighing Scale

    Institute of Scientific and Technical Information of China (English)

    刘海华

    2012-01-01

    This paper mainly introduces the sources of uncertainty in electronic weighing scale testing and, through actual measurements, the methods for calculating each component of standard uncertainty, the combined standard uncertainty, and the expanded uncertainty.

  3. Uncertainty analysis of cigarette circumference gauge measurement results

    Institute of Scientific and Technical Information of China (English)

    韦德芬

    2014-01-01

    In order to accurately assess the quality of cigarette circumference measurements, a large body of experimental data and the major factors influencing uncertainty during cigarette circumference measurement were comprehensively analyzed. The paper describes in detail the modelling of uncertainty, the analysis of the individual uncertainty components, and the evaluation of the combined and expanded uncertainty of cigarette circumference measurement.

  4. Analysis report on the measurement uncertainty of common laboratory glassware

    Institute of Scientific and Technical Information of China (English)

    谭雯

    2014-01-01

    This paper evaluates the measurement uncertainty of the graduated pipettes commonly used in laboratories, analyzes the sources of uncertainty in the measurement process, establishes a mathematical model of the measurement process, quantifies the uncertainty components, and obtains the combined and expanded uncertainties.
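
The combined and expanded uncertainty evaluation these records describe follows the GUM root-sum-of-squares rule. A minimal sketch for a hypothetical 10 mL graduated pipette, with all component values assumed purely for illustration:

```python
import math

# Hypothetical uncertainty budget for a 10 mL graduated pipette.
u_cal = 0.02 / math.sqrt(3)    # mL: calibration tolerance, rectangular distribution
u_rep = 0.008                  # mL: repeatability (Type A, std dev of the mean)
u_temp = 0.005                 # mL: temperature effect, already reduced to 1 sigma

# Combined standard uncertainty (uncorrelated components) and
# expanded uncertainty with coverage factor k = 2 (~95 % coverage).
u_c = math.sqrt(u_cal ** 2 + u_rep ** 2 + u_temp ** 2)
U = 2.0 * u_c
```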

  5. Measuring the knowledge-based economy of China in terms of synergy among technological, organizational, and geographic attributes of firms

    NARCIS (Netherlands)

    Leydesdorff, L.; Zhou, P.

    2014-01-01

    Using the possible synergy among geographic, size, and technological distributions of firms in the Orbis database, we find the greatest reduction of uncertainty at the level of the 31 provinces of China, and an additional 18.0 % at the national level. Some of the coastal provinces stand out as expec

  6. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiongwei [State Key Lab of Power Systems, Department of Thermal Engineering, Tsinghua-BP Clean Energy Center, Tsinghua University, Beijing (China); Wang, Zhe, E-mail: zhewang@tsinghua.edu.cn [State Key Lab of Power Systems, Department of Thermal Engineering, Tsinghua-BP Clean Energy Center, Tsinghua University, Beijing (China); Lui, Siu-Lung; Fu, Yangting; Li, Zheng [State Key Lab of Power Systems, Department of Thermal Engineering, Tsinghua-BP Clean Energy Center, Tsinghua University, Beijing (China); Liu, Jianming [China Guodian Science and Technology Research Institute, Nanjing 100034 (China); Ni, Weidou [State Key Lab of Power Systems, Department of Thermal Engineering, Tsinghua-BP Clean Energy Center, Tsinghua University, Beijing (China)

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS, based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement in both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R²), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively. - Highlights: • Multiple pairs of lines are used to compensate for plasma temperature fluctuations. • Multi-line information is utilized to represent the elemental concentration. • The advantage of the PLS algorithm is exploited by the model. • Both uncertainty reduction and accuracy improvement are achieved.

  7. A Dynamic, Multivariate Sustainability Measure for Robust Analysis of Water Management under Climate and Demand Uncertainty in an Arid Environment

    Directory of Open Access Journals (Sweden)

    Christian Hunter

    2015-10-01

    Considering water resource scarcity and uncertainty in climate and demand futures, decision-makers require techniques for sustainability analysis in resource management. Because of unclear definitions of "sustainability", however, traditional indices for resource evaluation propose options of limited flexibility, adopting static climate and demand scenarios and limiting analysis variables to a particular water-use group and time. This work proposes a robust, multivariate, dynamic sustainability evaluation technique and a corresponding performance indicator called the Measure of Sustainability (MoS) for resource management that is better adapted to withstand future parameter variation. The range of potential future climate and demand scenarios is simulated through a calibrated hydrological model of Copiapó, Chile, a case-study example of an arid watershed under extreme natural and anthropogenic water stresses. Comparing MoS and cost rankings of proposed water management schemes, this paper determines that the traditional evaluation method not only underestimates future water deficits, but also espouses solutions without considering uncertainties in supply and demand. Given the uncertainty of the future and the dependence of resources upon climate and market trajectories, the MoS methodology proposes solutions that, while perhaps not the most optimal, are robust to variations in future parameter values and are thus the best water management options in a stochastic natural world.

  8. New microbiological assay for determination of caspofungin in the presence of its degradation products and its measurement uncertainty.

    Science.gov (United States)

    Ghisleni, Daniela Dal Molim; Okamoto, Rogério Takao; De Oliveira, Amaral Cleide Maria; Lourenço, Felipe Rebello; De Jesus, Andreoli Pinto Terezinha

    2014-01-01

    Caspofungin is an echinocandin antifungal used in the treatment of invasive fungal infections. Several methods have been reported for the quantitative analysis of echinocandins; however, there is no microbiological assay for determination of caspofungin potency in the presence of its degradation products. This study aimed to develop and validate a microbiological method for quantitative analysis of caspofungin in lyophilized powder, to evaluate the stability, and to determine the degradation kinetics of the drug when the finished product is submitted to heat stress. A procedure was established to estimate measurement uncertainty for routine analysis. The validation was performed as recommended in the current official guidelines. The agar diffusion method is based on the inhibitory effect of caspofungin on Candida albicans. Results showed selectivity, linearity, precision, and accuracy of the method. Statistical analysis demonstrated that the method is linear (in the range 2.5 to 16 μg/mL, y = 15.73 + 6.4x, r² = 0.9965), precise (intermediate precision: 2.54%), and accurate (recovery range: 95.01-102.46%). The proposed method allowed evaluation of the thermal stability of the drug at 80 °C for 120 min and determination of first-order degradation kinetics. The variability of inhibition zone sizes was the most important source of uncertainty, at about 87% of the overall uncertainty (103.0 ± 1.7%). These results show that the proposed method is applicable to routine laboratory testing, and is sensitive to thermal degradation of caspofungin.

  9. Biogenic carbon in combustible waste: Waste composition, variability and measurement uncertainty

    DEFF Research Database (Denmark)

    Larsen, Anna Warberg; Fuglsang, Karsten; Pedersen, Niels H.;

    2013-01-01

    described in the literature. This study addressed the variability of biogenic and fossil carbon in combustible waste received at a municipal solid waste incinerator. Two approaches were compared: (1) radiocarbon dating (14C analysis) of carbon dioxide sampled from the flue gas, and (2) mass and energy...... balance calculations using the balance method. The ability of the two approaches to accurately describe short-term day-to-day variations in carbon emissions, and to which extent these short-term variations could be explained by controlled changes in waste input composition, was evaluated. Finally...... method and the balance method represented promising methods able to provide good quality data for the ratio between biogenic and fossil carbon in waste. The relative uncertainty in the individual experiments was 7–10% (95% confidence interval) for the 14C method and slightly lower for the balance method....

  10. Measurements and their uncertainties a practical guide to modern error analysis

    CERN Document Server

    Hughes, Ifan G

    2010-01-01

    This hands-on guide is primarily intended to be used in undergraduate laboratories in the physical sciences and engineering. It assumes no prior knowledge of statistics. It introduces the necessary concepts where needed, with key points illustrated with worked examples and graphic illustrations. In contrast to traditional mathematical treatments it uses a combination of spreadsheet and calculus-based approaches, suitable as a quick and easy on-the-spot reference. The emphasis throughout is on practical strategies to be adopted in the laboratory. Error analysis is introduced at a level accessible to school leavers, and carried through to research level. Error calculation and propagation is presented through a series of rules-of-thumb, look-up tables and approaches amenable to computer analysis. The general approach uses the chi-square statistic extensively. Particular attention is given to hypothesis testing and extraction of parameters and their uncertainties by fitting mathematical models to experimental data....
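
The chi-square-based parameter extraction the book emphasizes can be sketched with the standard analytic formulas for a weighted straight-line fit. The data below are invented for illustration:

```python
import numpy as np

# Weighted straight-line fit minimizing
# chi^2 = sum(((y - a - b*x) / sigma)^2), using the standard
# closed-form normal-equation solution. Data are illustrative.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sig = np.full_like(y, 0.2)     # one standard uncertainty per point

w = 1.0 / sig ** 2
S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
Delta = S * Sxx - Sx ** 2

b = (S * Sxy - Sx * Sy) / Delta      # slope
a = (Sxx * Sy - Sx * Sxy) / Delta    # intercept
u_b = np.sqrt(S / Delta)             # standard uncertainty of slope
u_a = np.sqrt(Sxx / Delta)           # standard uncertainty of intercept
chi2 = np.sum(w * (y - a - b * x) ** 2)
```

Comparing the minimized chi² with the number of degrees of freedom (here 3) is the usual quick check of whether the model and the quoted uncertainties are mutually consistent.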

  11. An Indicator of Research Front Activity: Measuring Intellectual Organization as Uncertainty Reduction in Document Sets

    CERN Document Server

    Lucio-Arias, Diana

    2009-01-01

    When using scientific literature to model scholarly discourse, a research specialty can be operationalized as an evolving set of related documents. Each publication can be expected to contribute to the further development of the specialty at the research front. The specific combinations of title words and cited references in a paper can then be considered as a signature of the knowledge claim in the paper: new words and combinations of words can be expected to represent variation, while each paper is at the same time selectively positioned into the intellectual organization of a field using context-relevant references. Can the mutual information among these three dimensions--title words, cited references, and sequence numbers--be used as an indicator of the extent to which intellectual organization structures the uncertainty prevailing at a research front? The effect of the discovery of nanotubes (1991) on the previously existing field of fullerenes is used as a test case. Thereafter, this method is applied t...

  12. Uncertainty contributions due to different measurement strategies applied to optomechanical hole plate

    DEFF Research Database (Denmark)

    Morace, Renate Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2003-01-01

    The work described in this paper deals with influence parameters in optical measurements, with particular respect to the choice of measurement strategy, which strongly affects the results of measurement. In this investigation, an optomechanical hole plate developed by DTU was measured...

  13. Uncertainty Evaluation of the Measurement Results of a DC Digital Voltmeter

    Institute of Scientific and Technical Information of China (English)

    李宏伟

    2012-01-01

    DC voltage measurement is the basis of DC electrical quantity measurement. This paper describes in detail the evaluation of the uncertainty of the voltage measurement results of a digital multimeter, analyzes the factors affecting measurement uncertainty, calculates each uncertainty component, and gives the combined standard uncertainty. The analysis of measurement uncertainty improves measurement accuracy and working efficiency.

  14. Stability Analysis for Li-Ion Battery Model Parameters and State of Charge Estimation by Measurement Uncertainty Consideration

    Directory of Open Access Journals (Sweden)

    Shifei Yuan

    2015-07-01

    Accurate estimation of model parameters and state of charge (SoC) is crucial for the lithium-ion battery management system (BMS). In this paper, the stability of the model parameters and SoC estimation under measurement uncertainty is evaluated by three different factors: (i) sampling periods of 1/0.5/0.1 s; (ii) current sensor precisions of ±5/±50/±500 mA; and (iii) voltage sensor precisions of ±1/±2.5/±5 mV. Firstly, the numerical model stability analysis and parametric sensitivity analysis for battery model parameters are conducted under a sampling frequency of 1-50 Hz. A perturbation analysis of the effect of current/voltage measurement uncertainty on model parameter variation is performed theoretically. Secondly, the impact of the three different factors on the model parameters and SoC estimation was evaluated with the federal urban driving sequence (FUDS) profile. The bias correction recursive least squares (CRLS) and adaptive extended Kalman filter (AEKF) algorithms were adopted to estimate the model parameters and SoC jointly. Finally, the simulation results were compared and some insightful findings were concluded. For the given battery model and parameter estimation algorithm, the sampling period and the current/voltage sampling accuracy presented a non-negligible effect on the estimation results of the model parameters. This research revealed the influence of measurement uncertainty on model parameter estimation, which will provide guidelines for selecting a reasonable sampling period and current/voltage sensor sampling precisions in engineering applications.
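
A minimal recursive-least-squares (RLS) update of the kind used for online parameter estimation can be sketched as follows. The two-parameter model and all data are hypothetical, not the paper's battery model or its bias-corrected variant:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares update with forgetting factor lam:
    theta is the parameter estimate, P its covariance, phi the
    regressor vector, y the measured scalar output."""
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + K * (y - phi @ theta)    # correct with prediction error
    P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta, P

# Hypothetical demo: identify y = 2.0*phi1 + 0.5*phi2 from
# noise-free data; RLS should converge to [2.0, 0.5].
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = np.eye(2) * 1e6          # large initial covariance: weak prior
for _ in range(100):
    phi = rng.standard_normal(2)
    y = 2.0 * phi[0] + 0.5 * phi[1]
    theta, P = rls_step(theta, P, phi, y)
```

With noisy measurements and a forgetting factor lam < 1, the same loop tracks slowly varying parameters; the sensor precision then directly limits how tightly theta converges, which is the sensitivity the abstract studies.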

  15. Standardisation of a European measurement method for the determination of anions and cations in PM2.5: results of field trial campaign and determination of measurement uncertainty.

    Science.gov (United States)

    Beccaceci, Sonya; Brown, Richard J C; Butterfield, David M; Harris, Peter M; Otjes, René P; van Hoek, Caroline; Makkonen, Ulla; Catrambone, Maria; Patier, Rosalía Fernández; Houtzager, Marc M G; Putaud, Jean-Philippe

    2016-12-08

    European Committee for Standardisation (CEN) Technical Committee 264 'Air Quality' has recently produced a standard method for the measurement of anions and cations in PM2.5 within its Working Group 34, in response to the requirements of European Directive 2008/50/EC. It is expected that this method will be used in future by all Member States making measurements of the ionic content of PM2.5. This paper details the results of a field measurement campaign and the statistical analysis performed to validate this method, assess its uncertainty and define its working range, to provide clarity and confidence in the underpinning science for future users of the method. The statistical analysis showed that, except for the lowest range of concentrations, the expanded combined uncertainty is expected to be below 30% at the 95% confidence interval for all ions except Cl⁻. However, if the analysis is carried out on the lower concentrations found at rural sites, the uncertainty can be in excess of 50% for Cl⁻, Na⁺, K⁺, Mg²⁺ and Ca²⁺. An estimation of the detection limit for all ions was also calculated and found to be 0.03 μg m⁻³ or below.
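
Detection limits of the kind quoted above are commonly estimated from replicate blank (or low-level) measurements. A hedged sketch using the widespread LoD = 3·s_blank convention, with blank values invented rather than taken from the study:

```python
import numpy as np

# Hypothetical replicate blank measurements, in ug m^-3.
blanks = np.array([0.008, 0.011, 0.009, 0.012, 0.010, 0.009])

s_blank = np.std(blanks, ddof=1)   # sample standard deviation of blanks
lod = 3.0 * s_blank                # detection limit, 3-sigma convention
```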

  16. Estimation of the measurement uncertainty in quantitative determination of ketamine and norketamine in urine using a one-point calibration method.

    Science.gov (United States)

    Ma, Yi-Chun; Wang, Che-Wei; Hung, Sih-Hua; Chang, Yan-Zin; Liu, Chia-Reiy; Her, Guor-Rong

    2012-09-01

    An approach was proposed for the estimation of measurement uncertainty for analytical methods based on one-point calibration. The proposed approach is similar to the popular multiple-point calibration approach. However, the standard deviation of calibration was estimated externally. The approach was applied to the estimation of measurement uncertainty for the quantitative determination of ketamine (K) and norketamine (NK) at a 100 ng/mL threshold concentration in urine. In addition to uncertainty due to calibration, sample analysis was the other major source of uncertainty. To include the variation due to matrix effect and temporal effect in sample analysis, different blank urines were spiked with K and NK and analyzed at equal time intervals within and between batches. The expanded uncertainties (k = 2) were estimated to be 10 and 8 ng/mL for K and NK, respectively.

  17. Uncertainty analysis of thermocouple measurements used in normal and abnormal thermal environment experiments at Sandia's Radiant Heat Facility and Lurance Canyon Burn Site.

    Energy Technology Data Exchange (ETDEWEB)

    Nakos, James Thomas

    2004-04-01

    It would not be possible to confidently qualify weapon systems performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities (the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS)) routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of various types (fiberglass-sheathed TC wire and mineral-insulated, metal-sheathed (MIMS) TC assemblies) and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) a Hewlett-Packard (HP) 3852A system, and (2) several National Instruments (NI) systems. The uncertainty analyses were performed on the entire system, from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, and DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV-to-temperature conversion, analog-to-digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high-temperature or high-heat-flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to

  18. Uncertainty analysis of heat flux measurements estimated using a one-dimensional, inverse heat-conduction program.

    Energy Technology Data Exchange (ETDEWEB)

    Nakos, James Thomas; Figueroa, Victor G.; Murphy, Jill E. (Worcester Polytechnic Institute, Worcester, MA)

    2005-02-01

    The measurement of heat flux in hydrocarbon fuel fires (e.g., diesel or JP-8) is difficult due to high temperatures and the sooty environment. Un-cooled commercially available heat flux gages do not survive in long duration fires, and cooled gages often become covered with soot, thus changing the gage calibration. An alternate method that is rugged and relatively inexpensive is based on inverse heat conduction methods. Inverse heat-conduction methods estimate absorbed heat flux at specific material interfaces using temperature/time histories, boundary conditions, material properties, and usually an assumption of one-dimensional (1-D) heat flow. This method is commonly used at Sandia's fire test facilities. In this report, an uncertainty analysis was performed for a specific example to quantify the effect of input parameter variations on the estimated heat flux when using the inverse heat conduction method. The approach used was to compare results from a number of cases using modified inputs to a base case. The response of a 304 stainless-steel cylinder [about 30.5 cm (12-in.) in diameter and 0.32-cm-thick (1/8-in.)] filled with 2.5-cm-thick (1-in.) ceramic fiber insulation was examined. The input parameters of the inverse heat conduction program that were varied were steel-wall thickness, thermal conductivity, and volumetric heat capacity; insulation thickness, thermal conductivity, and volumetric heat capacity; temperature uncertainty; boundary conditions; temperature sampling period; and numerical inputs. One-dimensional heat transfer was assumed in all cases. Results of the analysis show that, at the maximum heat flux, the most important parameters were temperature uncertainty, steel thickness and steel volumetric heat capacity. The use of constant thermal properties rather than temperature-dependent values also made a significant difference in the resultant heat flux; therefore, temperature-dependent values should be used. As an example, several parameters were varied to

  19. SU-D-303-03: Impact of Uncertainty in T1 Measurements On Quantification of Dynamic Contrast Enhanced MRI

    Energy Technology Data Exchange (ETDEWEB)

    Aryal, M; Cao, Y [The University of Michigan, Ann Arbor, MI (United States)

    2015-06-15

    Purpose: Quantification of dynamic contrast enhanced (DCE) MRI requires native longitudinal relaxation time (T1) measurement. This study aimed to assess uncertainty in T1 measurements using two different methods. Methods and Materials: Brain MRI scans were performed on a 3T scanner in 9 patients who had low grade/benign tumors and partial brain radiotherapy without chemotherapy at pre-RT, week-3 during RT (wk-3), end-RT, and 1, 6 and 18 months after RT. T1-weighted images were acquired using gradient echo sequences with 1) 2 different flip angles (5° and 15°), and 2) 5 variable TRs (100–2000 ms). After creating quantitative T1 maps, average T1 was calculated in regions of interest (ROI), which were distant from tumors and received a total of accumulated radiation doses < 5 Gy at wk-3. ROIs included left and right normal putamen and thalamus (gray matter: GM), and frontal and parietal white matter (WM). Since there were no significant T1 changes, nor even a trend of change, from pre-RT to wk-3 in these ROIs, a relative repeatability coefficient (RC) of T1 as a measure of uncertainty was estimated in each ROI using the data pre-RT and at wk-3. The individual T1 changes at later time points were evaluated against the estimated RCs. Results: The 2-flip-angle method produced small RCs in GM (9.7–11.7%) but large RCs in WM (12.2–13.6%) compared to the saturation-recovery (SR) method (11.0–17.7% for GM and 7.5–11.2% for WM). More than 81% of individual T1 changes were within T1 uncertainty ranges defined by RCs. Conclusion: Our study suggests that the impact of T1 uncertainty on physiological parameters derived from DCE MRI is not negligible. A short scan with 2 flip angles is able to achieve repeatability of T1 estimates similar to a long scan with 5 different TRs, and is desirable to integrate into the DCE protocol. This study was supported by the National Institutes of Health (NIH) under grant numbers U01 CA183848 and R01 NS064973.
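
    The abstract names the two-flip-angle and variable-TR methods without giving the fitting equations. For the two-flip-angle case, the standard SPGR linearization (S/sin α = E1·S/tan α + M0(1 − E1), with E1 = exp(−TR/T1)) is presumably what is meant; a sketch with synthetic signals follows, where the angles, TR and T1 are illustrative values, not the study's protocol:

```python
import math

def t1_two_flip_angles(s1, s2, a1, a2, tr):
    """Estimate T1 from SPGR signals at two flip angles (radians) via the
    linearized relation S/sin(a) = E1 * S/tan(a) + M0*(1 - E1)."""
    x1, y1 = s1 / math.tan(a1), s1 / math.sin(a1)
    x2, y2 = s2 / math.tan(a2), s2 / math.sin(a2)
    e1 = (y2 - y1) / (x2 - x1)  # slope of the two-point line equals exp(-TR/T1)
    return -tr / math.log(e1)

def spgr(a, t1=1000.0, tr=15.0, m0=1.0):
    """Synthetic SPGR signal for a round-trip check."""
    e1 = math.exp(-tr / t1)
    return m0 * math.sin(a) * (1 - e1) / (1 - e1 * math.cos(a))

a1, a2 = math.radians(5), math.radians(15)
t1_est = t1_two_flip_angles(spgr(a1), spgr(a2), a1, a2, tr=15.0)
```

Because the synthetic signals satisfy the SPGR equation exactly, the estimate recovers the input T1 of 1000 ms.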

  20. Considering sampling strategy and cross-section complexity for estimating the uncertainty of discharge measurements using the velocity-area method

    Science.gov (United States)

    Despax, Aurélien; Perret, Christian; Garçon, Rémy; Hauet, Alexandre; Belleville, Arnaud; Le Coz, Jérôme; Favre, Anne-Catherine

    2016-02-01

    Streamflow time series provide baseline data for many hydrological investigations. Errors in the data mainly occur through uncertainty in gauging (measurement uncertainty) and uncertainty in the determination of the stage-discharge relationship based on gaugings (rating curve uncertainty). As the velocity-area method is the measurement technique typically used for gaugings, it is fundamental to estimate its level of uncertainty. Different methods are available in the literature (ISO 748, Q+, IVE), all with their own limitations and drawbacks. Among the terms forming the combined relative uncertainty in measured discharge, the uncertainty component relating to the limited number of verticals often includes a large part of the relative uncertainty. It should therefore be estimated carefully. In the ISO 748 standard, the proposed values of this uncertainty component depend only on the number of verticals, without considering their distribution with respect to the depth and velocity cross-sectional profiles. The Q+ method is sensitive to a user-defined parameter, while it is questionable whether the IVE method is applicable to stream-gaugings performed with a limited number of verticals. To address the limitations of existing methods, this paper presents a new methodology, called FLow Analog UnceRtainty Estimation (FLAURE), to estimate the uncertainty component relating to the limited number of verticals. High-resolution reference gaugings (with 31 and more verticals) are used to assess the uncertainty component through a statistical analysis. Instead of subsampling the verticals of these reference stream-gaugings purely randomly, a subsampling method is developed in a way that mimics the behavior of a hydrometric technician. A sampling quality index (SQI) is suggested and appears to be a more explanatory variable than the number of verticals. This index takes into account the spacing between verticals and the variation of unit flow between two verticals. To compute the
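
    As context for the velocity-area method discussed above, here is a minimal mid-section implementation in the spirit of ISO 748, in which each vertical's unit discharge v·d is weighted by half the distance between its neighbouring verticals. The gauging data are invented for illustration:

```python
def midsection_discharge(positions, depths, velocities):
    """Mid-section velocity-area method: each vertical's unit discharge
    (velocity * depth) is weighted by half the distance to its neighbours."""
    q = 0.0
    n = len(positions)
    for i in range(n):
        left = positions[i] if i == 0 else positions[i - 1]
        right = positions[i] if i == n - 1 else positions[i + 1]
        q += velocities[i] * depths[i] * (right - left) / 2.0
    return q

# Hypothetical gauging: 5 verticals across a ~10 m section
pos = [1.0, 3.0, 5.0, 7.0, 9.0]   # distance from bank (m)
dep = [0.5, 1.2, 1.5, 1.1, 0.4]   # depth (m)
vel = [0.3, 0.8, 1.0, 0.7, 0.2]   # mean velocity on vertical (m/s)
Q = midsection_discharge(pos, dep, vel)
```

FLAURE's subsampling experiments amount to applying a computation like this to subsets of the 31+ reference verticals and comparing against the full-resolution discharge.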

  1. Materials accounting in a fast-breeder-reactor fuels-reprocessing facility: optimal allocation of measurement uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Dayem, H.A.; Ostenak, C.A.; Gutmacher, R.G.; Kern, E.A.; Markin, J.T.; Martinez, D.P.; Thomas, C.C. Jr.

    1982-07-01

    This report describes the conceptual design of a materials accounting system for the feed preparation and chemical separations processes of a fast breeder reactor spent-fuel reprocessing facility. For the proposed accounting system, optimization techniques are used to calculate instrument measurement uncertainties that meet four different accounting performance goals while minimizing the total development cost of instrument systems. We identify instruments that require development to meet performance goals and measurement uncertainty components that dominate the materials balance variance. Materials accounting in the feed preparation process is complicated by large in-process inventories and spent-fuel assembly inputs that are difficult to measure. To meet abrupt and protracted loss-detection goals of 8 kg and 40 kg of plutonium, respectively, materials accounting in the chemical separations process requires: process tank volume and concentration measurements having a precision less than or equal to 1%; accountability and plutonium sample tank volume measurements having a precision less than or equal to 0.3%, a short-term correlated error less than or equal to 0.04%, and a long-term correlated error less than or equal to 0.04%; and accountability and plutonium sample tank concentration measurements having a precision less than or equal to 0.4%, a short-term correlated error less than or equal to 0.1%, and a long-term correlated error less than or equal to 0.05%. The effects of process design on materials accounting are identified. Major areas of concern include the voloxidizer, the continuous dissolver, and the accountability tank.
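
    The abstract decomposes tank-measurement uncertainty into precision, short-term and long-term correlated components. A common simplification (assumed here, not taken from the report) is that random errors add in quadrature across batches while correlated errors add linearly before squaring, so the correlated terms come to dominate the materials balance variance as batch count grows:

```python
def balance_std(n_batches, amount_per_batch, precision, short_corr, long_corr):
    """Standard deviation of a materials balance from relative error components
    (fractions): random precision adds in quadrature across batches; each
    correlated (systematic) component adds linearly before squaring."""
    random_var = n_batches * (amount_per_batch * precision) ** 2
    corr_var = (n_batches * amount_per_batch * short_corr) ** 2 \
             + (n_batches * amount_per_batch * long_corr) ** 2
    return (random_var + corr_var) ** 0.5

# Hypothetical campaign: 100 batches of 1 kg Pu each, with the precision
# and correlated-error magnitudes quoted in the abstract (0.3%, 0.04%, 0.04%)
sigma = balance_std(100, 1.0, 0.003, 0.0004, 0.0004)
```
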

  2. Lessons learnt on biases and uncertainties in personal exposure measurement surveys of radiofrequency electromagnetic fields with exposimeters.

    Science.gov (United States)

    Bolte, John F B

    2016-09-01

    Personal exposure measurements of radio frequency electromagnetic fields are important for epidemiological studies and developing prediction models. Minimizing biases and uncertainties and handling spatial and temporal variability are important aspects of these measurements. This paper reviews the lessons learnt from testing the different types of exposimeters and from personal exposure measurement surveys performed between 2005 and 2015. Applying them will improve the comparability and ranking of exposure levels for different microenvironments, activities or (groups of) people, such that epidemiological studies are better capable of finding potential weak correlations with health effects. Over 20 papers have been published on how to prevent biases and minimize uncertainties due to: mechanical errors; design of hardware and software filters; anisotropy; and influence of the body. A number of biases can be corrected for by determining multiplicative correction factors. In addition a good protocol on how to wear the exposimeter, a sufficiently small sampling interval and sufficiently long measurement duration will minimize biases. Corrections to biases are possible for: non-detects through detection limit, erroneous manufacturer calibration and temporal drift. Corrections not deemed necessary, because no significant biases have been observed, are: linearity in response and resolution. Corrections difficult to perform after measurements are for: modulation/duty cycle sensitivity; out of band response aka cross talk; temperature and humidity sensitivity. Corrections not possible to perform after measurements are for: multiple signals detection in one band; flatness of response within a frequency band; anisotropy to waves of different elevation angle. An analysis of 20 microenvironmental surveys showed that early studies using exposimeters with logarithmic detectors, overestimated exposure to signals with bursts, such as in uplink signals from mobile phones and Wi

  3. FIRM: Sampling-based feedback motion-planning under motion uncertainty and imperfect measurements

    KAUST Repository

    Agha-mohammadi, A.-a.

    2013-11-15

    In this paper we present feedback-based information roadmap (FIRM), a multi-query approach for planning under uncertainty which is a belief-space variant of probabilistic roadmap methods. The crucial feature of FIRM is that the costs associated with the edges are independent of each other, and in this sense it is the first method that generates a graph in belief space that preserves the optimal substructure property. From a practical point of view, FIRM is a robust and reliable planning framework. It is robust since the solution is a feedback and there is no need for expensive replanning. It is reliable because accurate collision probabilities can be computed along the edges. In addition, FIRM is a scalable framework, where the complexity of planning with FIRM is a constant multiplier of the complexity of planning with PRM. In this paper, FIRM is introduced as an abstract framework. As a concrete instantiation of FIRM, we adopt stationary linear quadratic Gaussian (SLQG) controllers as belief stabilizers and introduce the so-called SLQG-FIRM. In SLQG-FIRM we focus on kinematic systems and then extend to dynamical systems by sampling in the equilibrium space. We investigate the performance of SLQG-FIRM in different scenarios. © The Author(s) 2013.

  4. Standardization of the Definitions of Vertical Resolution and Uncertainty in the NDACC-archived Ozone and Temperature Lidar Measurements

    Science.gov (United States)

    Leblanc, T.; Godin-Beekmann, S.; Payen, G.; Gabarrot, Franck; van Gijsel, Anne; Bandoro, J.; Sica, R.; Trickl, T.

    2012-01-01

    The international Network for the Detection of Atmospheric Composition Change (NDACC) is a global network of high-quality remote-sensing research stations for observing and understanding the physical and chemical state of the Earth's atmosphere. As part of NDACC, over 20 ground-based lidar instruments are dedicated to the long-term monitoring of atmospheric composition and to the validation of space-borne measurements of the atmosphere from environmental satellites such as Aura and ENVISAT. One caveat of large networks such as NDACC is the difficulty of archiving measurement and analysis information consistently from one research group (or instrument) to another [1][2][3]. Yet the need for consistent definitions has strengthened as datasets of various origins (e.g., satellite and ground-based) are increasingly intercompared, used for validation, and ingested together into global assimilation systems. In the framework of the 2010 Call for Proposals by the International Space Science Institute (ISSI) in Bern, Switzerland, a team of lidar experts was created to address existing issues in three critical aspects of the NDACC lidar ozone and temperature data retrievals: signal filtering and the vertical filtering of the retrieved profiles, the quantification and propagation of the uncertainties, and the consistent definition and reporting of filtering and uncertainties in the NDACC-archived products. Additional experts from the satellite and global data standards communities complement the team to help address issues specific to the latter aspect.

  5. Measurement uncertainty in pulmonary vascular input impedance and characteristic impedance estimated from pulsed-wave Doppler ultrasound and pressure: clinical studies on 57 pediatric patients.

    Science.gov (United States)

    Tian, Lian; Hunter, Kendall S; Kirby, K Scott; Ivy, D Dunbar; Shandas, Robin

    2010-06-01

    Pulmonary vascular input impedance better characterizes right ventricular (RV) afterload and disease outcomes in pulmonary hypertension compared to the standard clinical diagnostic, pulmonary vascular resistance (PVR). Early efforts to measure impedance were not routine, involving open-chest measurement. Recently, the use of pulsed-wave (PW) Doppler-measured velocity to non-invasively estimate instantaneous flow has made impedance measurement more practical. One critical concern remains with clinical use: the measurement uncertainty, especially since previous studies only incorporated random error. This study utilized data from a large pediatric patient population to comprehensively examine the systematic and random error contributions to the total impedance uncertainty and determined the least error prone methodology to compute impedance from among four different methods. We found that the systematic error contributes greatly to the total uncertainty and that one of the four methods had significantly smaller propagated uncertainty; however, even when this best method is used, the uncertainty can be large for input impedance at high harmonics and for the characteristic impedance modulus. Finally, we found that uncertainty in impedance between normotensive and hypertensive patient groups displays no significant difference. It is concluded that clinical impedance measurement would be most improved by advancements in instrumentation, and the best computation method is proposed for future clinical use of the input impedance.
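
    The abstract does not spell out the computation, but vascular input impedance is conventionally the ratio of simultaneous pressure and flow harmonics; a sketch using a direct DFT follows. The waveforms below are synthetic single-harmonic signals, not clinical data:

```python
import cmath
import math

def input_impedance(pressure, flow, n_harmonics=5):
    """Impedance modulus at the first harmonics: ratio of the DFT
    coefficients of simultaneous pressure and flow waveforms."""
    n = len(pressure)
    z = []
    for k in range(n_harmonics + 1):
        pk = sum(pressure[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        qk = sum(flow[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        z.append(abs(pk / qk))
    return z

# Synthetic waveforms: mean pressure 10, pulsatile amplitude 2;
# mean flow 5, pulsatile amplitude 1 -> Z = 2 at harmonics 0 and 1
n = 64
p = [10 + 2 * math.cos(2 * math.pi * j / n) for j in range(n)]
q = [5 + 1 * math.cos(2 * math.pi * j / n) for j in range(n)]
z = input_impedance(p, q, n_harmonics=1)
```

In practice, noise makes the ratio unreliable at high harmonics where both spectra are small, which is consistent with the large uncertainties the study reports there.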

  6. The Uncertainty of Mass Discharge Measurements Using Pumping Methods Under Simplified Conditions

    Science.gov (United States)

    Mass discharge measurements at contaminated sites have been used to assist with site management decisions, and can be divided into two broad categories: point-scale measurement techniques and pumping methods. Pumping methods can be sub-divided based on the pumping procedures use...

  7. Experimental assessment of optical uncertainty components in the measurement of an optomechanical hole plate

    DEFF Research Database (Denmark)

    Morace, Renata Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2004-01-01

    An investigation of the effect of imaging parameters, such as the direction of illumination and the magnification of the objective, on optical coordinate measurements is described in this paper. An optomechanical hole plate with 5x5 holes was measured using various configurations of illumination...

  8. Measuring the Higgs boson mass using event-by-event uncertainties

    NARCIS (Netherlands)

    Castelli, A.

    2015-01-01

    The thesis presents a measurement of the properties of the Higgs particle, performed by using the data collected by the ATLAS experiment in 2011 and 2012. The measurement is performed by using a three-dimensional model based on analytic functions to describe the signal produced by the Higgs boson de

  9. Review of research on measurement uncertainty and its applications

    Institute of Scientific and Technical Information of China (English)

    张青松

    2013-01-01

    Measurement uncertainty evaluation is an extension of error-evaluation theory; it holds an important place in the field of data processing, plays an important role in measurement, and has become a new research topic. The evaluation of measurement uncertainty is a key indicator of the quality of measurement results. As a means of gauging measurement capability, measurement uncertainty has been adopted by countries around the world and by many international organizations. This paper discusses the concept of measurement uncertainty, the distinction between uncertainty and error, the classification of uncertainty evaluation methods, and progress in the application of measurement uncertainty.

  10. Errors and uncertainties in the measurement of ultrasonic wave attenuation and phase velocity.

    Science.gov (United States)

    Kalashnikov, Alexander N; Challis, Richard E

    2005-10-01

    This paper presents an analysis of the error generation mechanisms that affect the accuracy of measurements of ultrasonic wave attenuation coefficient and phase velocity as functions of frequency. In the first stage of the analysis we show that electronic system noise, expressed in the frequency domain, maps into errors in the attenuation and the phase velocity spectra in a highly nonlinear way; the condition for minimum error is when the total measured attenuation is around 1 Neper. The maximum measurable total attenuation has a practical limit of around 6 Nepers and the minimum measurable value is around 0.1 Neper. In the second part of the paper we consider electronic noise as the primary source of measurement error; errors in attenuation result from additive noise whereas errors in phase velocity result from both additive noise and system timing jitter. Quantization noise can be neglected if the amplitude of the additive noise is comparable with the quantization step, and coherent averaging is employed. Experimental results are presented which confirm the relationship between electronic noise and measurement errors. The analytical technique is applicable to the design of ultrasonic spectrometers, formal assessment of the accuracy of ultrasonic measurements, and the optimization of signal processing procedures to achieve a specified accuracy.
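
    As a sketch of the quantity under discussion: in through-transmission ultrasonic spectrometry the attenuation coefficient at each frequency is commonly obtained from the ratio of the reference and sample amplitude spectra over a known path length. The numbers below are illustrative, chosen to sit at the ~1 Np total attenuation the paper identifies as the minimum-error operating point:

```python
import math

def attenuation_np(ref_amp, sample_amp, thickness_m):
    """Attenuation coefficient (Np/m) from through-transmission amplitude
    spectra of a reference path and a sample path of known thickness."""
    return math.log(ref_amp / sample_amp) / thickness_m

# Illustrative: a 1 cm sample attenuating the signal by exactly 1 Np
alpha = attenuation_np(ref_amp=1.0, sample_amp=math.exp(-1.0), thickness_m=0.01)
```

Additive electronic noise perturbs both amplitudes; because the logarithm compresses large ratios and amplifies small ones, the noise-to-error mapping is nonlinear, which is the mechanism behind the ~0.1 to ~6 Np measurable range quoted above.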

  11. Dynamic Length Metrology (DLM) for measurements with sub-micrometre uncertainty in a production environment

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Hansen, Hans Nørgaard; Hattel, Jesper Henri;

    2016-01-01

    Conventional length metrology for traceable accurate measurements requires costly temperature controlled facilities, long waiting time for part acclimatisation, and separate part material characterisation. This work describes a method called Dynamic Length Metrology (DLM) developed to achieve sub...

  12. Uncertainty of Deardorff’s soil moisture model based on continuous TDR measurements for sandy loam soil

    Directory of Open Access Journals (Sweden)

    Brandyk Andrzej

    2016-03-01

    Knowledge of soil moisture is indispensable for a range of hydrological models, since it exerts a considerable influence on runoff conditions. Proper tools are nowadays applied to gain insight into soil moisture status, especially of the uppermost soil layers, which are prone to weather changes and land-use practices. Establishing relationships between meteorological conditions and topsoil moisture calls for a simple model characterized by low computational effort, a simple structure and a small number of identified and calibrated parameters. We demonstrated that an existing model for shallow soils, considering mass exchange between two layers (upper and lower) as well as with the atmosphere and subsoil, worked well for a sandy loam with a deep groundwater table in the Warsaw conurbation. GLUE (Generalized Likelihood Uncertainty Estimation) linked with GSA (Global Sensitivity Analysis) provided the final determination of parameter values and model confidence ranges. Including the uncertainty in the model structure shifted the median soil moisture solution of the GLUE away from the deterministically optimal one. From the point of view of practical model application, the main shortcoming was the underestimated water exchange rates between the lower soil layer (from 0.1 to 0.2 m below ground level) and the subsoil. Overall model quality was found to be satisfactory and promising for use in establishing measures to regain retention under urbanized conditions.

  13. Optical depth measurements by shadow-band radiometers and their uncertainties.

    Science.gov (United States)

    Alexandrov, Mikhail D; Kiedron, Peter; Michalsky, Joseph J; Hodges, Gary; Flynn, Connor J; Lacis, Andrew A

    2007-11-20

    Shadow-band radiometers in general, and especially the Multi-Filter Rotating Shadow-band Radiometer (MFRSR), are widely used for atmospheric optical depth measurements. The major programs running MFRSR networks in the United States include the Department of Energy Atmospheric Radiation Measurement (ARM) Program, U.S. Department of Agriculture UV-B Monitoring and Research Program, National Oceanic and Atmospheric Administration Surface Radiation (SURFRAD) Network, and NASA Solar Irradiance Research Network (SIRN). We discuss a number of technical issues specific to shadow-band radiometers and their impact on the optical depth measurements. These problems include instrument tilt and misalignment, as well as some data processing artifacts. Techniques for data evaluation and automatic detection of some of these problems are described.
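
    A shadow-band radiometer's optical depth retrieval ultimately rests on Beer-Lambert attenuation of the direct solar beam. A minimal sketch with a plane-parallel airmass (a deliberate simplification of the airmass models actually used in MFRSR processing; the irradiance values are illustrative):

```python
import math

def optical_depth(i_measured, i_top, sza_deg):
    """Total optical depth from Beer-Lambert attenuation of direct solar
    irradiance: I = I0 * exp(-tau * m), with plane-parallel airmass
    m = 1 / cos(solar zenith angle)."""
    m = 1.0 / math.cos(math.radians(sza_deg))
    return math.log(i_top / i_measured) / m

# Illustrative: 30% attenuation of the direct beam at 60 degrees zenith angle
tau = optical_depth(i_measured=0.7, i_top=1.0, sza_deg=60.0)
```

Tilt and misalignment of the instrument perturb the effective zenith angle, and hence the airmass m, which is one way the hardware problems discussed above map into optical depth errors.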

  14. TRACEABILITY OF PRECISION MEASUREMENTS ON COORDINATE MEASURING MACHINES – UNCERTAINTY ASSESSMENT BY USING CALIBRATED WORKPIECES ON CMMs

    DEFF Research Database (Denmark)

    Tosello, Guido; De Chiffre, Leonardo

    This document is used in connection with an exercise of 30 minutes' duration, part of the course VISION ONLINE – One week course on Precision & Nanometrology. The exercise concerns the establishment of traceability of precision measurements on coordinate measuring machines. This document contains...

  15. Evaluation of measurement uncertainty in EMC conducted emission testing

    Institute of Scientific and Technical Information of China (English)

    王化吉

    2012-01-01

    Based on the laboratory test environment and test instrumentation, this paper evaluates the measurement uncertainty of the relevant test items of GJB152A-97 in accordance with JJF 1059-1999, 'Evaluation and Expression of Uncertainty in Measurement', and CNAS-GL01:2006, 'Guidance on Evaluating the Uncertainty in Electromagnetic Interference Measurement', obtaining a reasonable and accurate measurement uncertainty. By analysing the various factors that influence the uncertainty, several suggestions are made, providing laboratories with ideas for reasonably minimizing measurement uncertainty during testing.
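
    A typical conducted-emission uncertainty budget combines independent contributions (expressed in dB) by root-sum-of-squares and expands with coverage factor k = 2, as in the guidance documents cited above. A sketch; the component names and values are illustrative, not from this paper:

```python
def emc_expanded_uncertainty(components_db, k=2):
    """Combine independent standard-uncertainty contributions (in dB) by
    root-sum-of-squares, then expand with coverage factor k (~95% level)."""
    u_c = sum(u ** 2 for u in components_db) ** 0.5
    return k * u_c

# Hypothetical budget (dB): receiver accuracy, LISN/cable attenuation,
# mismatch, and measurement repeatability
U = emc_expanded_uncertainty([1.0, 0.6, 0.8, 0.5])
```
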

  16. Design of a machine for the universal non-contact measurement of large free-form optics with 30 nm uncertainty

    NARCIS (Netherlands)

    Henselmans, R.; Rosielle, P.C.J.N.; Steinbuch, M.; Saunders, I.; Bergmans, R.

    2005-01-01

    A new universal non-contact measurement machine design for measuring free-form optics with 30 nm expanded uncertainty is presented. In the cylindrical machine concept, an optical probe with 5 mm range is positioned over the surface by a motion system. Due to a 2nd order error effect when measuring s

  17. Evaluating the Uncertainties in the Electron Temperature and Radial Speed Measurements Using White Light Corona Eclipse Observations

    Science.gov (United States)

    Reginald, Nelson L.; Davilla, Joseph M.; St. Cyr, O. C.; Rastaetter, Lutz

    2014-01-01

    We examine the uncertainties in two plasma parameters from their true values in a simulated asymmetric corona. We use the Corona Heliosphere (CORHEL) and Magnetohydrodynamics Around the Sphere (MAS) models in the Community Coordinated Modeling Center (CCMC) to investigate the differences between an assumed symmetric corona and a more realistic, asymmetric one. We were able to predict the electron temperatures and electron bulk flow speeds to within ±0.5 MK and ±100 km s⁻¹, respectively, over coronal heights up to 5.0 R from Sun center. We believe that this technique could be incorporated in next-generation white-light coronagraphs to determine these electron plasma parameters in the low solar corona. We have conducted experiments in the past during total solar eclipses to measure the thermal electron temperature and the electron bulk flow speed in the radial direction in the low solar corona. These measurements were made at different altitudes and latitudes in the low solar corona by measuring the shape of the K-coronal spectra between 350 nm and 450 nm and two brightness ratios through filters centered at 385.0 nm/410.0 nm and 398.7 nm/423.3 nm with a bandwidth of approximately 4 nm. Based on the symmetric coronal models used for these measurements, the two measured plasma parameters were expected to represent the values at the points where the lines of sight intersected the plane of the solar limb.

  18. Evaluation on Uncertainty of Measurement Result of Pressure Transmitter Field Verification

    Institute of Scientific and Technical Information of China (English)

    CHEN Ping

    2015-01-01

    Calibration data for pressure instruments, pressure transmitters and pressure measurement control systems, obtained with a field pressure calibrator at the site of production and work, can represent the actual conditions of production and work, and reduce or avoid the error arising from the difference of

  19. Minimizing measurement uncertainties of coniferous needle-leaf optical properties, part I: methodological review

    NARCIS (Netherlands)

    Yanez Rausell, L.; Schaepman, M.E.; Clevers, J.G.P.W.; Malenovsky, Z.

    2014-01-01

    Optical properties (OPs) of non-flat narrow plant leaves, i.e., coniferous needles, are extensively used by the remote sensing community, in particular for calibration and validation of radiative transfer models at leaf and canopy level. Optical measurements of such small living elements are, howeve

  20. Uncertainty analysis of the absolute measurement of isotopic abundances and relative atomic mass

    Institute of Scientific and Technical Information of China (English)

    周涛; 王同兴

    2005-01-01

    The sources of uncertainty in relative atomic mass include measurement errors and the isotopic fractionation of terrestrial samples. The measurement errors comprise the measurements of the atomic masses and of the isotopic abundances; the latter includes the uncertainty of the correction factor K and of the isotopic ratios of natural samples. By differentiating with respect to the seven factors to obtain their propagation factors, the uncertainty of the correction factor K can be calculated. The same differential calculation yields the uncertainty of the relative atomic mass.
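
    The propagation described above can be sketched for the simplest case: treating abundances and atomic masses as uncorrelated inputs to A = Σ f_i M_i. This is a simplification (abundances actually sum to one, which introduces correlation the paper's full treatment would account for), and the two-isotope numbers are invented:

```python
def atomic_mass_with_uncertainty(masses, abundances, u_masses, u_abundances):
    """Relative atomic mass A = sum(f_i * M_i) and its standard uncertainty
    by first-order propagation, treating all inputs as uncorrelated
    (simplification: abundances are in reality constrained to sum to 1)."""
    a = sum(f * m for f, m in zip(abundances, masses))
    var = sum((m * uf) ** 2 for m, uf in zip(masses, u_abundances)) \
        + sum((f * um) ** 2 for f, um in zip(abundances, u_masses))
    return a, var ** 0.5

# Two-isotope toy example (invented values)
a, u = atomic_mass_with_uncertainty(
    masses=[10.0, 11.0], abundances=[0.2, 0.8],
    u_masses=[1e-4, 1e-4], u_abundances=[0.001, 0.001])
```
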

  1. Uncertainty measures of rough sets based on conditional probability

    Institute of Scientific and Technical Information of China (English)

    黄国顺; 曾凡智; 文翰

    2015-01-01

    Through a semantic analysis of rough-set uncertainty measures, an improved axiomatic definition of the uncertainty measure for rough sets is proposed. First, based on an analysis of the mathematical characteristics of this axiomatic definition, two new uncertainty measures based on conditional probability are proposed. It is then proved that both are uncertainty measures under the axiomatic definition, and the corresponding knowledge-uncertainty formulas are derived; one of them turns out to be the existing conditional information entropy, while the other is complementary to the certainty measure. An example comparing the various uncertainty measures illustrates that the proposed formulas are consistent with the semantics of rough-set uncertainty.
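
    A toy version of the conditional information entropy mentioned above (one of the two measures discussed; the four-object partitions are an invented example, not from the paper):

```python
import math

def conditional_entropy(c_partition, d_partition, universe_size):
    """Conditional information entropy H(D|C) over partitions given as
    lists of sets of object ids: H(D|C) = -sum_x p(x) sum_y p(y|x) log2 p(y|x).
    A standard rough-set uncertainty measure."""
    h = 0.0
    for x in c_partition:
        px = len(x) / universe_size
        for y in d_partition:
            inter = len(x & y)
            if inter:
                p_cond = inter / len(x)
                h -= px * p_cond * math.log2(p_cond)
    return h

# Toy decision table with 4 objects: one condition class is pure with
# respect to the decision, the other is evenly split
C = [{1, 2}, {3, 4}]   # condition classes
D = [{1, 2, 3}, {4}]   # decision classes
h = conditional_entropy(C, D, universe_size=4)
```

Here the split class {3, 4} contributes 0.5 bit of uncertainty while the pure class {1, 2} contributes none, matching the intuition that uncertainty vanishes when every condition class is decision-consistent.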

  2. An indirect accuracy calibration and uncertainty evaluation method for large scale inner dimensional measurement system

    Science.gov (United States)

    Liu, Bai-Ling; Qu, Xing-Hua

    2013-10-01

    In view of the low accuracy, limited range, and low degree of automation of existing large-scale diameter inspection instruments, a precise measuring system (a robot) based on a laser displacement sensor was designed for large-scale inner diameters. Since the traditional measuring tool for calibrating the robot is expensive and hard to manufacture, an indirect calibration method is proposed. In this study, the system's eccentricity error is calibrated with a laboratory ring gauge. An experiment that changes the installation order of the locating rods, thereby introducing the rods' eccentricity error, is designed to test whether the spindle eccentricity error remains unchanged. The result shows that the variation of the spindle's eccentricity after changing the rods is within 0.02 mm. Because the spindle is an unchanged part of the robot, the calibration of the Φ584-series robot by the ring gauge can be extended to other robot series by combining it with the length of the extended arm.

  3. An outdoor radon survey and minimizing the uncertainties in low level measurements using CR-39 detectors.

    Science.gov (United States)

    Gunning, G A; Pollard, D; Finch, E C

    2014-06-01

    Long-term outdoor radon measurements were recorded in Ireland using CR-39 track-etch detectors. A measurement protocol was designed for this study, optimized for the relatively low radon concentrations expected outdoors. This protocol included pre-etching the detectors before exposure so that radon tracks could be more easily distinguished from background. The average outdoor radon concentration for the Republic of Ireland was found to be 5.6 ± 0.7 Bq m⁻³. A statistically significant difference between inland and coastal radon concentrations was evident, but no difference between mean radon concentrations on the east coast and those on the west coast was observed.

  4. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    Science.gov (United States)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01
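    The Allan deviation named in this record's title characterizes instrument stability as a function of averaging time: for each averaging factor m, adjacent bin averages are differenced, and half the mean squared difference is the Allan variance. A minimal sketch of the non-overlapping estimator (the synthetic white-noise data is illustrative, not from the record):

    ```python
    import math
    import random

    def allan_deviation(samples, m):
        """Non-overlapping Allan deviation at averaging factor m
        (m consecutive raw samples per bin)."""
        # Average the raw samples in non-overlapping bins of length m.
        nbins = len(samples) // m
        means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(nbins)]
        # Allan variance: half the mean squared difference of adjacent bin averages.
        diffs = [(means[i + 1] - means[i]) ** 2 for i in range(nbins - 1)]
        return math.sqrt(sum(diffs) / (2 * (nbins - 1)))

    # For white (uncorrelated) noise the Allan deviation falls as 1/sqrt(m),
    # which is how such plots are used to pick an optimal averaging time.
    random.seed(0)
    data = [random.gauss(0, 1) for _ in range(10000)]
    for m in (1, 10, 100):
        print(m, allan_deviation(data, m))
    ```

    On a real continuous-measurement instrument, the curve eventually flattens or rises as drift dominates; the minimum of the curve indicates the averaging time beyond which further averaging stops reducing uncertainty.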