Greenhouse Gas Source Attribution: Measurements, Modeling, and Uncertainty Quantification
Energy Technology Data Exchange (ETDEWEB)
Liu, Zhen; Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; van Bloemen Waanders, Bart Gustaaf; LaFranchi, Brian W.; Ivey, Mark D.; Schrader, Paul E.; Michelsen, Hope A.; Bambha, Ray P. (all Sandia National Lab. (SNL-CA), Livermore, CA, United States)
2014-09-01
In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2, along with air pollutants traditionally studied using CMAQ, at relatively high spatial and temporal resolution, with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches to atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion: an Eulerian chemical transport model, CMAQ, and a Lagrangian particle dispersion model, FLEXPART-WRF. These two models share the same WRF-assimilated meteorology fields, making it
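The surrogate-accelerated Bayesian inversion described above can be sketched in a few lines. Everything here is invented for illustration: the quadratic polynomial standing in for an expensive transport model, its coefficients, the noise level, and the single source-strength parameter. The pattern shown is only the generic one of evaluating a cheap surrogate inside a Metropolis sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate: a polynomial standing in for an expensive
# transport model that maps a source strength q to a concentration.
def surrogate(q):
    return 2.0 + 1.5 * q + 0.3 * q**2

q_true = 1.0
sigma = 0.1
obs = surrogate(q_true) + rng.normal(0.0, sigma, size=20)  # synthetic data

def log_post(q):
    # Gaussian likelihood with a flat prior on q
    return -0.5 * np.sum((obs - surrogate(q)) ** 2) / sigma**2

# Random-walk Metropolis sampling of the posterior over q
q, lp = 0.0, log_post(0.0)
samples = []
for _ in range(5000):
    q_prop = q + rng.normal(0.0, 0.05)
    lp_prop = log_post(q_prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        q, lp = q_prop, lp_prop
    samples.append(q)

posterior = np.array(samples[1000:])  # discard burn-in
print(posterior.mean(), posterior.std())
```

Because each posterior evaluation costs only a polynomial evaluation rather than a transport-model run, the chain can afford the thousands of samples Bayesian inference needs.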
Bartley, David; Lidén, Göran
2008-08-01
The reporting of measurement uncertainty has recently undergone a major harmonization, whereby characteristics of a measurement method obtained during its establishment and application are combined component-wise. For example, the sometimes-pesky systematic error is included. A bias component of uncertainty can often be established simply as the uncertainty in the bias. However, beyond arriving at a value for uncertainty, meaning can sometimes be given to this uncertainty in terms of prediction confidence in uncertainty-based intervals covering what is to be measured. To this end, a link between the concepts of accuracy and uncertainty is established through a simple yet accurate approximation to a random variable known as the non-central Student's t-distribution. 'Without a measureless and perpetual uncertainty, the drama of human life would be destroyed.' Winston Churchill.
Koch, Michael
Measurement uncertainty is one of the key issues in quality assurance. It has become increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded or not. Estimation of measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and are described briefly in this chapter. In addition, the different ways of taking uncertainty into account in compliance assessment are explained.
Uranium Measurements and Attributes
International Nuclear Information System (INIS)
It may be necessary to find the means to determine unclassified attributes of uranium in nuclear weapons or their components for future transparency initiatives. We briefly describe the desired characteristics of attribute measurement systems for transparency. The determination of uranium attributes, in particular by passive gamma-ray detection, is a formidable challenge.
The attribute measurement technique
International Nuclear Information System (INIS)
Any verification measurement performed on potentially classified nuclear material must satisfy two seemingly contradictory constraints. First and foremost, no classified information can be released. At the same time, the monitoring party must have confidence in the veracity of the measurement. An information barrier (IB) is included in the measurement system to protect the potentially classified information while allowing sufficient information transfer for the monitoring party to gain confidence that the material being measured is consistent with the host's declarations concerning that material. The attribute measurement technique incorporates an IB and addresses both concerns by measuring several attributes of the nuclear material and displaying unclassified results through green (indicating that the material does possess the specified attribute) and red (indicating that the material does not possess the specified attribute) lights. The attribute measurement technique has been implemented in the AVNG, an attribute measuring system described in other presentations at this conference. In this presentation, we will discuss four techniques used in the AVNG: (1) the IB, (2) the attribute measurement technique, (3) the use of open and secure modes to increase confidence in the displayed results, and (4) the joint design as a method for addressing both host and monitor needs.
Measurement Uncertainty and Probability
Willink, Robin
2013-02-01
Part I. Principles: 1. Introduction; 2. Foundational ideas in measurement; 3. Components of error or uncertainty; 4. Foundational ideas in probability and statistics; 5. The randomization of systematic errors; 6. Beyond the standard confidence interval; Part II. Evaluation of Uncertainty: 7. Final preparation; 8. Evaluation using the linear approximation; 9. Evaluation without the linear approximations; 10. Uncertainty information fit for purpose; Part III. Related Topics: 11. Measurement of vectors and functions; 12. Why take part in a measurement comparison?; 13. Other philosophies; 14. An assessment of objective Bayesian methods; 15. A guide to the expression of uncertainty in measurement; 16. Measurement near a limit - an insoluble problem?; References; Index.
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
Measurement uncertainty relations
Energy Technology Data Exchange (ETDEWEB)
Busch, Paul, E-mail: paul.busch@york.ac.uk [Department of Mathematics, University of York, York (United Kingdom); Lahti, Pekka, E-mail: pekka.lahti@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Werner, Reinhard F., E-mail: reinhard.werner@itp.uni-hannover.de [Institut für Theoretische Physik, Leibniz Universität, Hannover (Germany)
2014-04-15
Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.
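As a hedged sketch of the two kinds of relation being compared (notation assumed for illustration, not taken from the paper): the familiar preparation relation bounds the spreads of position and momentum in a single state, while a measurement uncertainty relation of the kind proved here bounds the errors of an approximate joint measurement, with an optimal constant of order unity.

```latex
% Preparation uncertainty: standard deviations of Q and P in one state
\sigma(Q)\,\sigma(P) \;\ge\; \frac{\hbar}{2}

% Measurement uncertainty (schematic form): rms errors of an
% approximate joint measurement of Q and P; c_0 denotes an optimal
% constant of order unity
\Delta(Q)\,\Delta(P) \;\ge\; c_0\,\hbar
```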
Attempting Measurement of Psychological Attributes
Salzberger, Thomas
2013-01-01
Measures of psychological attributes abound in the social sciences as much as measures of physical properties do in the physical sciences. However, there are crucial differences in the scientific underpinning of measurement between the two domains. While measurement in the physical sciences is supported by empirical evidence that demonstrates the quantitative nature of the property assessed, measurement in the social sciences is, in large part, made possible only by a vague, discretionary definition of measurement.
The Uncertainty of Measurement Results
International Nuclear Information System (INIS)
Factors affecting the uncertainty of measurement are explained, basic statistical formulae given, and the theoretical concept explained in the context of pesticide formulation analysis. Practical guidance is provided on how to determine individual uncertainty components within an analytical procedure. An extended and comprehensive table containing the relevant mathematical/statistical expressions elucidates the relevant underlying principles. Appendix I provides a practical elaborated example on measurement uncertainty estimation, above all utilizing experimental repeatability and reproducibility laboratory data. (author)
Entropic uncertainty and measurement reversibility
Berta, Mario; Wehner, Stephanie; Wilde, Mark M.
2016-07-01
The entropic uncertainty relation with quantum side information (EUR-QSI) from (Berta et al 2010 Nat. Phys. 6 659) is a unifying principle relating two distinctive features of quantum mechanics: quantum uncertainty due to measurement incompatibility, and entanglement. In these relations, quantum uncertainty takes the form of preparation uncertainty where one of two incompatible measurements is applied. In particular, the ‘uncertainty witness’ lower bound in the EUR-QSI is not a function of a post-measurement state. An insightful proof of the EUR-QSI from (Coles et al 2012 Phys. Rev. Lett. 108 210405) makes use of a fundamental mathematical consequence of the postulates of quantum mechanics known as the non-increase of quantum relative entropy under quantum channels. Here, we exploit this perspective to establish a tightening of the EUR-QSI which adds a new state-dependent term in the lower bound, related to how well one can reverse the action of a quantum measurement. As such, this new term is a direct function of the post-measurement state and can be thought of as quantifying how much disturbance a given measurement causes. Our result thus quantitatively unifies this feature of quantum mechanics with the others mentioned above. We have experimentally tested our theoretical predictions on the IBM quantum experience and find reasonable agreement between our predictions and experimental outcomes.
Incentive salience attribution under reward uncertainty: A Pavlovian model.
Anselme, Patrick
2015-02-01
There is a vast literature on the behavioural effects of partial reinforcement in Pavlovian conditioning. Compared with animals receiving continuous reinforcement, partially rewarded animals typically show (a) a slower development of the conditioned response (CR) early in training and (b) a higher asymptotic level of the CR later in training. This phenomenon is known as the partial reinforcement acquisition effect (PRAE). Learning models of Pavlovian conditioning fail to account for it. In accordance with the incentive salience hypothesis, it is here argued that incentive motivation (or 'wanting') plays a more direct role in controlling behaviour than does learning, and reward uncertainty is shown to have an excitatory effect on incentive motivation. The psychological origin of that effect is discussed and a computational model integrating this new interpretation is developed. Many features of CRs under partial reinforcement emerge from this model.
Uncertainty Quantification for Safeguards Measurements
International Nuclear Information System (INIS)
Part of the scientific method requires all calculated and measured results to be accompanied by a description that meets user needs and provides an adequate statement of the confidence one can have in the results. The scientific art of generating quantitative uncertainty statements is closely related to the mathematical disciplines of applied statistics, sensitivity analysis, optimization, and inversion, but in the field of non-destructive assay it also often draws heavily on expert judgment based on experience. We call this process uncertainty quantification (UQ). Philosophical approaches to UQ, along with the formal tools available for UQ, have advanced considerably over recent years, and these advances, we feel, may be useful to include in the analysis of data gathered from safeguards instruments. This paper sets out what we hope to achieve during a three-year US DOE NNSA research project recently launched to address the potential of advanced UQ to improve safeguards conclusions. By way of illustration we discuss measurement of uranium enrichment by the enrichment meter principle (also known as the infinite-thickness technique), which relies on gamma counts near the 186 keV peak directly from 235U. This method has strong foundations in fundamental physics, and so we have a basis for the choice of response model, although in some implementations peak-area extraction may result in a bias when applied over a wide dynamic range. It also allows us to describe a common but usually neglected aspect of applying a calibration curve, namely the error structure in the predictors. We illustrate this using a combination of measured data and simulation. (author)
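The "error structure in the predictors" point can be illustrated with a small simulation. All numbers below (calibration slope, count-rate noise, the size of the error in the predictor values) are invented; the sketch only shows the generic errors-in-variables effect that ordinary least squares fitted against error-bearing predictors attenuates the calibration slope.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative linear calibration between a predictor (e.g. declared
# enrichment) and an instrument response (e.g. net peak count rate).
slope_true, intercept_true = 100.0, 5.0
x_true = rng.uniform(0.5, 95.0, size=200)
y = intercept_true + slope_true * x_true + rng.normal(0.0, 20.0, 200)

# Case 1: predictors known exactly -> OLS recovers the slope
A = np.vstack([x_true, np.ones_like(x_true)]).T
slope_exact, _ = np.linalg.lstsq(A, y, rcond=None)[0]

# Case 2: predictors carry their own error -> OLS slope is attenuated
# (classical errors-in-variables / regression-dilution effect)
x_noisy = x_true + rng.normal(0.0, 10.0, 200)
A = np.vstack([x_noisy, np.ones_like(x_noisy)]).T
slope_noisy, _ = np.linalg.lstsq(A, y, rcond=None)[0]

print(slope_exact, slope_noisy)  # the second estimate is biased low
```

Ignoring this error structure when building a calibration curve therefore biases subsequent predictions, which is exactly the kind of neglected effect a UQ analysis is meant to expose.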
Evaluation of measurement uncertainty of glucose in clinical chemistry.
Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y
2007-04-01
The International Vocabulary of Basic and General Terms in Metrology (VIM) defines uncertainty of measurement as a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to every reported parameter, a measurement uncertainty value should be given by all accredited institutions; this value indicates the reliability of the measurement. The GUM contains directions for evaluating uncertainty. Eurachem/CITAC Guide CG4 was also published, by the Eurachem/CITAC Working Group, in 2000. Both offer a mathematical model by which uncertainty can be calculated. There are two types of uncertainty evaluation in measurement: type A, the evaluation of uncertainty through statistical analysis, and type B, the evaluation of uncertainty through other means, for example a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) a rectangular distribution, for limits given without a level of confidence (u(x) = a/√3); (2) a triangular distribution, for values likely to lie near the same point (u(x) = a/√6); (3) a normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variation CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) a confidence interval.
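The two divisor rules quoted above can be written out directly. A minimal sketch, with an invented certificate value of ±3 as the example input:

```python
import math

# Standard uncertainty of a quantity known only to lie within +/- a of
# a stated value, under the distribution assumptions named in the
# Eurachem/CITAC guide.
def u_rectangular(a):
    # limits quoted without a confidence level
    return a / math.sqrt(3)

def u_triangular(a):
    # values more likely to lie near the centre of the interval
    return a / math.sqrt(6)

# Example: a certificate states a value of 100 +/- 3, no confidence level
print(u_rectangular(3.0))  # ~1.73
print(u_triangular(3.0))   # ~1.22
```

The triangular divisor is larger, so assuming a triangular rather than rectangular distribution for the same ±a limits yields a smaller standard uncertainty.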
Uncertainty of temperature measurement with thermal cameras
Chrzanowski, Krzysztof; Matyszkiel, Robert; Fischer, Joachim; Barela, Jaroslaw
2001-06-01
All main international metrological organizations propose a parameter called uncertainty as a measure of the accuracy of measurements. A mathematical model that enables the calculation of the uncertainty of temperature measurement with thermal cameras is presented. The standard uncertainty or the expanded uncertainty of temperature measurement of the tested object can be calculated when the bounds within which the real object effective emissivity ε_r, the real effective background temperature T_ba(r), and the real effective atmospheric transmittance τ_a(r) are located can be estimated, and when the intrinsic uncertainty of the thermal camera and the relative spectral sensitivity of the thermal camera are known.
Measuring, Estimating, and Deciding under Uncertainty.
Michel, Rolf
2016-03-01
The problem of uncertainty as a general consequence of incomplete information, and the approach taken to quantify uncertainty in metrology, are addressed. This paper then discusses some of the controversial aspects of the statistical foundation of the concepts of uncertainty in measurement. The basics of the ISO Guide to the Expression of Uncertainty in Measurement, as well as of characteristic limits according to ISO 11929, are described, and the need for a revision of the latter standard is explained. PMID:26688360
Errors and Uncertainty in Physics Measurement.
Blasiak, Wladyslaw
1983-01-01
Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…
Ricciuto, D. M.
2015-12-01
Although much progress has been made in the past decade in constraining the net North American terrestrial carbon flux, considerable uncertainty remains in the sink magnitude and trend. Terrestrial carbon cycle models are increasing in spatial resolution, complexity, and predictive skill, allowing for increased process-level understanding and attribution of net carbon fluxes to specific causes. Here we examine the various sources of uncertainty, including driver uncertainty, model parameter uncertainty, and structural uncertainty; the contribution of each type of uncertainty to the net sink; and the attribution of this sink to anthropogenic causes: increasing CO2 concentrations, nitrogen deposition, land-use change, and changing climate. To examine driver and parameter uncertainty, model simulations are performed using the Community Land Model version 4.5 (CLM4.5) with literature-based parameter ranges and three different reanalysis meteorological forcing datasets. We also examine structural uncertainty through analysis of the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP). Identifying major sources of uncertainty can help to guide future observations, experiments, and model development activities.
Gomez, Jose Alfonso; Owens, Phillip N.; Koiter, Alex J.; Lobb, David
2016-04-01
One of the major sources of uncertainty in attributing sediment sources in fingerprinting studies is the uncertainty in determining the concentrations of the elements used in the mixing model due to the variability of the concentrations of these elements in the source materials (e.g., Kraushaar et al., 2015). The uncertainty in determining the "true" concentration of a given element in each one of the source areas depends on several factors, among them the spatial variability of that element, the sampling procedure and sampling density. Researchers have limited control over these factors, and usually sampling density tends to be sparse, limited by time and the resources available. Monte Carlo analysis has been used regularly in fingerprinting studies to explore the probable solutions within the measured variability of the elements in the source areas, providing an appraisal of the probability of the different solutions (e.g., Collins et al., 2012). This problem can be considered analogous to the propagation of uncertainty in hydrologic models due to uncertainty in the determination of the values of the model parameters, and there are many examples of Monte Carlo analysis of this uncertainty (e.g., Freeze, 1980; Gómez et al., 2001). Some of these model analyses rely on the simulation of "virtual" situations that were calibrated from parameter values found in the literature, with the purpose of providing insight about the response of the model to different configurations of input parameters. This approach - evaluating the answer for a "virtual" problem whose solution could be known in advance - might be useful in evaluating the propagation of uncertainty in mixing models in sediment fingerprinting studies. In this communication, we present the preliminary results of an on-going study evaluating the effect of variability of element concentrations in source materials, sampling density, and the number of elements included in the mixing models. For this study a virtual
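A "virtual" version of that experiment can be sketched in a few lines. The tracer concentrations, their spreads, and the true mixture proportion below are all invented; the pattern is the standard fingerprinting one of resampling source concentrations from their measured variability and re-solving the mixing model each time, so that the spread of solutions reflects the source variability.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-source, one-tracer "virtual" experiment: the mixture is built
# from a known blend, so the true answer is known in advance.
mu_a, sd_a = 10.0, 1.5   # tracer concentration in source A (mean, spread)
mu_b, sd_b = 30.0, 3.0   # tracer concentration in source B
p_true = 0.7             # true proportion of source A in the sediment
c_mix = p_true * mu_a + (1 - p_true) * mu_b

# Monte Carlo: draw source concentrations from their variability and
# solve the mixing equation  c_mix = p*c_a + (1-p)*c_b  for p each time.
p_est = []
for _ in range(10000):
    c_a = rng.normal(mu_a, sd_a)
    c_b = rng.normal(mu_b, sd_b)
    if abs(c_b - c_a) > 1e-9:
        p = (c_b - c_mix) / (c_b - c_a)
        p_est.append(min(max(p, 0.0), 1.0))  # keep proportions physical

p_est = np.array(p_est)
print(p_est.mean(), p_est.std())
```

The standard deviation of the recovered proportions gives a direct appraisal of how source variability propagates into the apportionment, which is exactly the uncertainty being evaluated in the study.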
RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY
Energy Technology Data Exchange (ETDEWEB)
Salaymeh, S.; Ashley, W.; Jeffcoat, R.
2010-06-17
It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations, because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous-improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives, and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice, and furthermore motivate developers to revisit the treatment of measurement uncertainty.
Measurement Theory, Nomological Machine And Measurement Uncertainties (In Classical Physics
Directory of Open Access Journals (Sweden)
Ave Mets
2012-12-01
Measurement is said to be the basis of the exact sciences, as the process of assigning numbers to matter (things or their attributes), thus making it possible to apply the mathematically formulated laws of nature to the empirical world. Mathematics and empiria are best accorded to each other in laboratory experiments, which function as what Nancy Cartwright calls a nomological machine: an arrangement generating (mathematical) regularities. On the basis of accounts of measurement errors and uncertainties, I will argue for two claims: (1) both fundamental laws of physics, corresponding to an ideal nomological machine, and phenomenological laws, corresponding to a material nomological machine, lie, being highly idealised relative to the empirical reality; and laboratory measurement data do not describe properties inherent to the world independently of human understanding of it. (2) Therefore the naive, representational view of measurement and experimentation should be replaced with a more pragmatic or practice-based view.
Measuring the uncertainty of tapping torque
DEFF Research Database (Denmark)
Belluco, Walter; De Chiffre, Leonardo
An uncertainty budget is carried out for torque measurements performed at the Institut for Procesteknik for the evaluation of cutting fluids. Thirty test blanks were machined with one tool and one fluid, torque diagrams were recorded, and the repeatability of single torque measurements was estimated and utilized as an influence parameter in the evaluation of the uncertainty budget.
Unsharpness of generalized measurement and its effects in entropic uncertainty relations
Baek, Kyunghyun; Son, Wonmin
2016-01-01
Under the scenario of generalized measurements, it can be asked how much of quantum uncertainty can be attributed to the measuring device, independent of the uncertainty in the measured system. On the course to answering this question, we suggest a new class of entropic uncertainty relations that differentiates quantum uncertainty from device imperfection due to the unsharpness of measurement. In order to quantify the unsharpness, we suggest and analyze a quantity that characterizes the uncertainty...
Uncertainty of measurement: an immunology laboratory perspective.
Beck, Sarah C; Lock, Robert J
2015-01-01
'Measurement uncertainty of measured quantity values' (ISO 15189) requires that the laboratory shall determine the measurement uncertainty for procedures used to report measured quantity values on patients' samples. Where we have numeric data, measurement uncertainty can be expressed as the standard deviation or as the coefficient of variation. However, in immunology many of the assays are reported either as semi-quantitative (i.e. an antibody titre) or qualitative (positive or negative) results. In the latter context, measuring uncertainty is considerably more difficult. There are, however, strategies that can allow us to minimise uncertainty. A number of parameters can contribute to making measurements uncertain. These include bias, precision, standard uncertainty (expressed as standard deviation or coefficient of variation), sensitivity, specificity, repeatability, reproducibility, and verification. Closely linked to these are traceability and standardisation. In this article we explore the challenges presented to immunology with regard to measurement uncertainty. Many of these challenges apply equally to other disciplines working with qualitative or semi-quantitative data.
Uncertainty estimation of ultrasonic thickness measurement
International Nuclear Information System (INIS)
The most important factor to take into consideration when selecting an ultrasonic thickness measurement technique is its reliability. Only when the uncertainty of a measurement result is known can it be judged whether the result is adequate for the intended purpose. The objective of this study is to model the ultrasonic thickness measurement function, to identify the most significant input uncertainty components, and to estimate the uncertainty of the ultrasonic thickness measurement results. We assumed that five error sources contribute significantly to the final error: calibration velocity, transit time, zero offset, measurement repeatability, and resolution. By applying the propagation-of-uncertainty law to the model function, a combined uncertainty of the ultrasonic thickness measurement was obtained. In this study the modeling function of ultrasonic thickness measurement was derived. By using this model, the estimation of the uncertainty of the final output result was found to be reliable. It was also found that the largest contributing input uncertainty components are calibration velocity, transit-time linearity, and zero offset. (author)
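A minimal sketch of that combination step, assuming the usual pulse-echo model d = v·t/2 and treating the components as independent so they combine in quadrature via the propagation-of-uncertainty law. All numeric values are invented for illustration.

```python
import math

# Thickness from pulse-echo: d = v * t / 2
v, u_v = 5900.0, 30.0        # calibration velocity (m/s) and its std. unc.
t, u_t = 6.78e-6, 0.02e-6    # transit time (s) and its std. unc.
u_zero = 0.01e-3             # zero-offset contribution, already in metres
u_rep = 0.02e-3              # repeatability/resolution contribution (m)

d = v * t / 2.0

# Sensitivity coefficients are the partial derivatives of d = v*t/2;
# independent components combine in quadrature.
u_from_v = (t / 2.0) * u_v
u_from_t = (v / 2.0) * u_t
u_d = math.sqrt(u_from_v**2 + u_from_t**2 + u_zero**2 + u_rep**2)

print(d * 1e3, u_d * 1e3)  # thickness and standard uncertainty in mm
```

Comparing the individual terms under the square root shows directly which component dominates the budget, which is how the "most contributing" components are identified.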
Uncertainty of dose measurement in radiation processing
DEFF Research Database (Denmark)
Miller, A.
1996-01-01
The major standard organizations of the world have addressed the issue of reporting uncertainties in measurement reports and certificates. There is, however, still some ambiguity in the minds of many people who try to implement the recommendations in real life. This paper is a contribution to the running debate and presents the author's view, which is based upon experience in radiation processing dosimetry. The origin of all uncertainty components must be identified and can be classified according to Type A and Type B, but it is equally important to separate the uncertainty components into those that contribute to the observable uncertainty of repeated measurements and those that do not. Examples of the use of these principles are presented in the paper.
An approach to multi-attribute utility analysis under parametric uncertainty
International Nuclear Information System (INIS)
The techniques of cost-benefit analysis and multi-attribute analysis provide a useful basis for informing decisions in situations where a number of potentially conflicting opinions or interests need to be considered, and where there are a number of possible decisions that could be adopted. When the input data to such decision-making processes are uniquely specified, cost-benefit analysis and multi-attribute utility analysis provide unambiguous guidance on the preferred decision option. However, when the data are not uniquely specified, application and interpretation of these techniques is more complex. Herein, an approach to multi-attribute utility analysis (and hence, as a special case, cost-benefit analysis) when input data are subject to parametric uncertainty is presented. The approach is based on the use of a Monte Carlo technique, and has recently been applied to options for the remediation of former uranium mining liabilities in a number of Central and Eastern European States
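A minimal sketch of the Monte Carlo treatment described above: attribute scores are sampled from assumed parametric ranges, a weighted additive utility is computed for each option on each draw, and the output is the probability that each option ranks first. Option names, attribute ranges and weights below are hypothetical, not taken from the remediation study.

```python
import random

def mc_utility_ranking(options, weights, n=20000, seed=1):
    """options maps an option name to per-attribute (low, high) uniform ranges."""
    rng = random.Random(seed)
    wins = {name: 0 for name in options}
    for _ in range(n):
        # Sample uncertain attribute scores and form additive utilities
        utilities = {
            name: sum(w * rng.uniform(lo, hi)
                      for w, (lo, hi) in zip(weights, ranges))
            for name, ranges in options.items()
        }
        wins[max(utilities, key=utilities.get)] += 1
    return {name: count / n for name, count in wins.items()}

# Two hypothetical remediation options scored on (cost, dose-reduction) utility
options = {
    "cap_in_place": [(0.6, 0.9), (0.4, 0.7)],
    "full_removal": [(0.2, 0.5), (0.7, 1.0)],
}
prefs = mc_utility_ranking(options, weights=[0.5, 0.5])
```

Rather than a single preferred option, the output is a preference probability for each option, which is how parametric uncertainty propagates into the decision guidance.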
Conclusions on measurement uncertainty in microbiology.
Forster, Lynne I
2009-01-01
Since its first issue in 1999, testing laboratories wishing to comply with all the requirements of ISO/IEC 17025 have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate, with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were ≥20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were <20, uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.
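The uncertainty estimates quoted above (0.12 to 0.14) are consistent with expanded uncertainties of log10-transformed counts, the usual convention in microbiology. A sketch of that replicate-based estimate, with hypothetical colony counts:

```python
import math
import statistics

# Hypothetical replicate colony counts for one water sample
counts = [52, 48, 61, 55, 50]

# Microbiological counts are conventionally analysed on the log10 scale
logs = [math.log10(c) for c in counts]

s = statistics.stdev(logs)  # standard uncertainty of a single log10 result
U = 2 * s                   # expanded uncertainty, k = 2 (~95 % confidence)
```

For counts below the method counting range, this replicate approach breaks down and the Poisson confidence limits mentioned in the abstract take over.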
Teaching Measurement and Uncertainty the GUM Way
Buffler, Andy; Allie, Saalih; Lubben, Fred
2008-01-01
This paper describes a course aimed at developing understanding of measurement and uncertainty in the introductory physics laboratory. The course materials, in the form of a student workbook, are based on the probabilistic framework for measurement as recommended by the International Organization for Standardization in their publication "Guide to…
Uncertainty Measures of Regional Flood Frequency Estimators
DEFF Research Database (Denmark)
Rosbjerg, Dan; Madsen, Henrik
1995-01-01
Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...
Using MINITAB software for teaching measurement uncertainty
International Nuclear Information System (INIS)
The concept of measurement uncertainty should be understood not only as doubt about the validity of a measurement result, but also as the quantification of that doubt. In this sense the measurement uncertainty is the parameter, associated with the result, that characterizes the dispersion of the values that could reasonably be attributed to the measurand (or, more properly, to its representation through a model). This parameter may be, for example, a multiple of the standard deviation, but especially, and more importantly, the half-width of an interval with a predetermined level of confidence. In these terms, in this paper I attempt, with the help of MINITAB software, to analyze this parameter: with simple and quick operations to evaluate the mean, the standard deviation and the confidence interval, and through the use of several plotted graphs
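The same analysis is easy to reproduce outside MINITAB; a sketch in Python with invented repeat readings (nominally a 10 mm gauge block):

```python
import math
import statistics

readings = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02]  # mm

n = len(readings)
mean = statistics.mean(readings)
s = statistics.stdev(readings)  # sample standard deviation (n - 1 divisor)
u = s / math.sqrt(n)            # standard uncertainty of the mean
t95 = 2.365                     # Student t factor, 95 %, n - 1 = 7 d.o.f.
half_width = t95 * u            # half-width of the confidence interval
```

The result would then be reported as mean ± half_width mm at a 95 % level of confidence, which is exactly the "half-width of an interval" parameter discussed above.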
Uncertainties in the attribution of greenhouse gas warming and implications for climate prediction
Jones, Gareth S.; Stott, Peter A.; Mitchell, John F. B.
2016-06-01
Using optimal detection techniques with climate model simulations, most of the observed increase of near-surface temperatures over the second half of the twentieth century is attributed to anthropogenic influences. However, the partitioning of the anthropogenic influence to individual factors, such as greenhouse gases and aerosols, is much less robust. Differences in how forcing factors are applied, in their radiative influence and in models' climate sensitivities, substantially influence the response patterns. We find that standard optimal detection methodologies cannot fully reconcile this response diversity. By selecting a set of experiments to enable the diagnosing of greenhouse gases and the combined influence of other anthropogenic and natural factors, we find robust detections of well-mixed greenhouse gases across a large ensemble of models. Of the observed warming over the twentieth century of 0.65 K/century we find, using a multimodel mean not incorporating pattern uncertainty, a well-mixed greenhouse gas warming of 0.87 to 1.22 K/century. This is partially offset by cooling from other anthropogenic and natural influences of -0.54 to -0.22 K/century. Although better constrained than recent studies, the attributable trends across climate models are still wide, with implications for observationally constrained estimates of transient climate response. Some of the uncertainties could be reduced in future by having more model data to better quantify the simulated estimates of the signals and natural variability, by designing model experiments more effectively, and by better quantifying the climate model radiative influences. Most importantly, how model pattern uncertainties are incorporated into the optimal detection methodology should be improved.
Quantifying uncertainty in nuclear analytical measurements
International Nuclear Information System (INIS)
The lack of international consensus on the expression of uncertainty in measurements was recognised by the late 1970s and led, after the issuance of a series of rather generic recommendations, to a general guide, known as GUM, the Guide to the Expression of Uncertainty in Measurement. This publication, issued in 1993, was based on co-operation over several years by the Bureau International des Poids et Mesures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the Organisation internationale de metrologie legale. The purpose was to promote full information on how uncertainty statements are arrived at and to provide a basis for harmonized reporting and the international comparison of measurement results. The need to provide more specific guidance to different measurement disciplines was soon recognized, and the field of analytical chemistry was addressed by EURACHEM in 1995 in the first edition of a guidance report on Quantifying Uncertainty in Analytical Measurements, produced by a group of experts from the field. That publication translated the general concepts of the GUM into specific applications for analytical laboratories and illustrated the principles with a series of selected examples as a didactic tool. Based on feedback from actual practice, the EURACHEM publication was extensively reviewed in 1997-1999 under the auspices of the Co-operation on International Traceability in Analytical Chemistry (CITAC), and a second edition was published in 2000. Still, except for a single example on the measurement of radioactivity in GUM, the field of nuclear and radiochemical measurements was not covered. The explicit requirement of ISO standard 17025:1999, General Requirements for the Competence of Testing and Calibration Laboratories
Methods for Attribute Measurement and Alternatives to Multiplicity Counting
International Nuclear Information System (INIS)
The Attribute Measurement System with Information Barrier (AMS/IB) specification is being developed in support of the Defense Threat Reduction Agency's (DTRA's) Cooperative Threat Reduction (CTR) program for the Mayak Fissile Material Storage Facility. This document discusses the technologies available for attribute measurement, and advantages and disadvantages of alternatives
Inconclusive quantum measurements and decisions under uncertainty
Yukalov, V I
2016-01-01
We give a mathematical definition for the notion of inconclusive quantum measurements. In physics, such measurements occur at intermediate stages of a complex measurement procedure, with the final measurement result being operationally testable. Since the mathematical structure of Quantum Decision Theory has been developed in analogy with the theory of quantum measurements, the inconclusive quantum measurements correspond, in Quantum Decision Theory, to intermediate stages of decision making in the process of taking decisions under uncertainty. The general form of the quantum probability for a composite event is the sum of a utility factor, describing a rational evaluation of the considered prospect, and of an attraction factor, characterizing irrational, subconscious attitudes of the decision maker. Despite the involved irrationality, the probability of prospects can be evaluated. This is equivalent to the possibility of calculating quantum probabilities without specifying hidden variables. We formulate a ge...
Measuring the uncertainties of discharge measurements: interlaboratory experiments in hydrometry
Le Coz, Jérôme; Blanquart, Bertrand; Pobanz, Karine; Dramais, Guillaume; Pierrefeu, Gilles; Hauet, Alexandre; Despax, Aurélien
2015-04-01
Quantifying the uncertainty of streamflow data is key for hydrological sciences. The conventional uncertainty analysis based on error propagation techniques is restricted by the absence of traceable discharge standards and by the weight of difficult-to-predict errors related to the operator, procedure and measurement environment. Field interlaboratory experiments have recently emerged as an efficient, standardized method to 'measure' the uncertainties of a given streamgauging technique in given measurement conditions. Both uncertainty approaches are compatible and should be developed jointly in the field of hydrometry. In recent years, several interlaboratory experiments have been reported by different hydrological services. They involved different streamgauging techniques, including acoustic profilers (ADCP), current-meters and handheld radars (SVR). Uncertainty analysis was not always their primary goal: most often, testing the proficiency and homogeneity of instruments, makes and models, procedures and operators was the original motivation. When interlaboratory experiments are processed for uncertainty analysis, once outliers have been discarded, all participants are assumed to be equally skilled and to apply the same streamgauging technique in equivalent conditions. A universal requirement is that all participants simultaneously measure the same discharge, which shall be kept constant to within negligible variations. To the best of our knowledge, we were the first to apply the interlaboratory method for computing the uncertainties of streamgauging techniques according to the authoritative international documents (ISO standards). Several specific issues arise due to the measurement conditions in outdoor canals and rivers. The main limitation is that the best available river discharge references are usually too uncertain to quantify the bias of the streamgauging technique, i.e. the systematic errors that are common to all participants in the experiment. A reference or a
Uncertainties in the attribution of greenhouse gas warming and implications for climate prediction
Jones, Gareth S; Mitchell, John F B
2016-01-01
Using optimal detection techniques with climate model simulations, most of the observed increase of near surface temperatures over the second half of the twentieth century is attributed to anthropogenic influences. However, the partitioning of the anthropogenic influence to individual factors, such as greenhouse gases and aerosols, is much less robust. Differences in how forcing factors are applied, in their radiative influence and in models' climate sensitivities, substantially influence the response patterns. We find standard optimal detection methodologies cannot fully reconcile this response diversity. By selecting a set of experiments to enable the diagnosing of greenhouse gases and the combined influence of other anthropogenic and natural factors, we find robust detections of well mixed greenhouse gases across a large ensemble of models. Of the observed warming over the 20th century of 0.65K/century we find, using a multi model mean not incorporating pattern uncertainty, a well mixed greenhouse gas warm...
Improving Attribute-Importance Measurement : a Reference-Point Approach
Ittersum, van K.; Pennings, J.M.E.; Wansink, B.; Trijp, van J.C.M.
2004-01-01
Despite the importance of identifying the hierarchy of product attributes that drive judgment and choice, the many available methods remain limited regarding their convergent validity and test-retest reliability. To increase the validity and reliability of attribute-importance measurement, we focus
The Validity of Attribute-Importance Measurement: A Review
Ittersum, van K.; Pennings, J.M.E.; Wansink, B.; Trijp, van J.C.M.
2007-01-01
A critical review of the literature demonstrates a lack of validity among the ten most common methods for measuring the importance of attributes in behavioral sciences. The authors argue that one of the key determinants of this lack of validity is the multi-dimensionality of attribute importance. Bu
Wiegmann, Daniel D; Weinersmith, Kelly L; Seubert, Steven M
2010-04-01
The behavior of females in search of a mate determines the likelihood that high quality males are encountered and adaptive search strategies rely on the effective use of available information on the quality of prospective mates. The sequential search strategy was formulated, like most models of search behavior, on the assumption that females obtain perfect information on the quality of encountered males. In this paper, we modify the strategy to allow for uncertainty of male quality and we determine how the magnitude of this uncertainty and the ability of females to inspect multiple male attributes to reduce uncertainty influence mate choice decisions. In general, searchers are sensitive to search costs and higher costs lower acceptance criteria under all versions of the model. The choosiness of searchers increases with the variability of the quality of prospective mates under conditions of the original model, but under conditions of uncertainty the choosiness of searchers may increase or decrease with the variability of inspected male attributes. The behavioral response depends on the functional relationship between observed male attributes and the fitness return to searchers and on costs associated with the search process. Higher uncertainty often induces searchers to pay more for information and under conditions of uncertainty the fitness return to searchers is never higher than under conditions of the original model. Further studies of the performance of alternative search strategies under conditions of uncertainty may consequently be necessary to identify search strategies likely to be used under natural conditions.
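A toy optimal-stopping sketch of the threshold logic described above, under assumptions not taken from the paper (male quality uniform on [0, 1], risk-neutral searcher, constant cost c per inspection): the reservation quality R solves E[(Q - R)+] = c, so higher search costs lower the acceptance criterion.

```python
import math

def reservation_threshold(c):
    """Reservation quality for Q ~ Uniform(0, 1) and per-inspection cost c.

    The optimal-stopping condition E[(Q - R)+] = c gives (1 - R)^2 / 2 = c,
    hence R = 1 - sqrt(2c); clipped at 0 when search is prohibitively costly.
    """
    return max(0.0, 1.0 - math.sqrt(2.0 * c))

# Higher search cost -> lower acceptance criterion
low_cost, high_cost = reservation_threshold(0.02), reservation_threshold(0.08)
```

This reproduces only the cost sensitivity common to sequential search models; the paper's contribution, uncertainty about quality and inspection of multiple attributes, would replace the observed Q with a noisy signal.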
ATTRIBUTES AND THRESHOLDS IN MEASUREMENTS FOR TRANSPARENCY INITIATIVES
Energy Technology Data Exchange (ETDEWEB)
M. W. JOHNSON
2000-09-01
The collection of programs broadly termed Transparency Initiatives frequently involves physics measurements that are applied to items with sensitive or classified properties. The inability or reluctance to perform quantitative measurements, in the safeguards tradition, on such items, and then to expose the results to international examination, has impelled development of an attributes approach to measurements, following the philosophy "if it looks like a duck, walks like a duck and quacks like a duck, call it a duck." This approach avoids certain of the classification issues that would otherwise be associated with such measurements. Use of the attributes approach, however, continues to pose problems of interpretation, in light of the need to establish numerical thresholds whereby data obtained from the measurements can be evaluated to determine whether the attribute is present. In this paper we examine the foundations of the attributes approach and the steps used to determine appropriate attributes and thresholds, using examples from contemporary threat-reduction initiatives where possible. Implications for the detector technologies used in the measurements will be discussed, as will the characteristics of so-called information barriers intended to prevent inadvertent release of sensitive information during attribute measurements.
Uncertainties in pipeline water percentage measurement
Energy Technology Data Exchange (ETDEWEB)
Scott, Bentley N.
2005-07-01
Measurement of the quantity, density, average temperature and water percentage in petroleum pipelines has been an issue of prime importance. The methods of measurement have been investigated and have seen continued improvement over the years. Questions are being asked as to the reliability of the measurement of water in the oil through sampling systems originally designed and tested for a narrow range of densities. Today most facilities' sampling systems handle vastly increased ranges of density and types of crude oils. Issues of pipeline integrity, product loss and production balances are placing further demands on accurate measurement. Water percentage is one area that has not received the attention necessary to understand the many factors involved in making a reliable measurement. A previous paper [1] discussed the uncertainty of the measurement from a statistical perspective. This paper will outline many of the sources of error in the manual and automatic methods in use today. A routine that uses the data collected by the analyzers in the on-line system to validate the measurements will be described. (author) (tk)
Inconclusive quantum measurements and decisions under uncertainty
Directory of Open Access Journals (Sweden)
Vyacheslav I. Yukalov
2016-04-01
Full Text Available We give a mathematical definition for the notion of inconclusive quantum measurements. In physics, such measurements occur at intermediate stages of a complex measurement procedure, with the final measurement result being operationally testable. Since the mathematical structure of Quantum Decision Theory has been developed in analogy with the theory of quantum measurements, the inconclusive quantum measurements correspond, in Quantum Decision Theory, to intermediate stages of decision making in the process of taking decisions under uncertainty. The general form of the quantum probability for a composite event is the sum of a utility factor, describing a rational evaluation of the considered prospect, and of an attraction factor, characterizing irrational, subconscious attitudes of the decision maker. Despite the involved irrationality, the probability of prospects can be evaluated. This is equivalent to the possibility of calculating quantum probabilities without specifying hidden variables. We formulate a general way of evaluation, based on the use of non-informative priors. As an example, we suggest the explanation of the decoy effect. Our quantitative predictions are in very good agreement with experimental data.
Inconclusive quantum measurements and decisions under uncertainty
Yukalov, Vyacheslav; Sornette, Didier
2016-04-01
We give a mathematical definition for the notion of inconclusive quantum measurements. In physics, such measurements occur at intermediate stages of a complex measurement procedure, with the final measurement result being operationally testable. Since the mathematical structure of Quantum Decision Theory has been developed in analogy with the theory of quantum measurements, the inconclusive quantum measurements correspond, in Quantum Decision Theory, to intermediate stages of decision making in the process of taking decisions under uncertainty. The general form of the quantum probability for a composite event is the sum of a utility factor, describing a rational evaluation of the considered prospect, and of an attraction factor, characterizing irrational, subconscious attitudes of the decision maker. Despite the involved irrationality, the probability of prospects can be evaluated. This is equivalent to the possibility of calculating quantum probabilities without specifying hidden variables. We formulate a general way of evaluation, based on the use of non-informative priors. As an example, we suggest the explanation of the decoy effect. Our quantitative predictions are in very good agreement with experimental data.
MEASUREMENT UNCERTAINTY ANALYSIS OF DIFFERENT CNC MACHINE TOOLS MEASUREMENT SYSTEMS
Directory of Open Access Journals (Sweden)
Leszek Semotiuk
2013-09-01
Full Text Available In this paper the results of measurement uncertainty tests conducted with a Heidenhain TS 649 probe on CNC machine tools are presented, together with identification and analysis of random and systematic measurement errors. Analyses were performed on the basis of measurements taken on two different CNC machine tools with Heidenhain control systems. The evaluated errors are discussed and compensation procedures are proposed. The obtained results are presented in tables and figures.
Measurement uncertainties in science and technology
Grabe, Michael
2014-01-01
This book recasts the classical Gaussian error calculus from scratch, addressing both random and unknown systematic errors. The idea of the book is to create a formalism fit to localize the true values of the physical quantities considered – true with respect to the set of predefined physical units. Remarkably enough, the prevailing forms of error calculus do not feature this property, which however proves, in every respect, to be physically indispensable. The amended formalism, termed Generalized Gaussian Error Calculus by the author, treats unknown systematic errors as biases and brings random errors to bear via enhanced confidence intervals as laid down by Student. The significantly extended second edition thoroughly restructures and systematizes the text as a whole and illustrates the formalism with numerous numerical examples. They demonstrate the basic principles of how to understand uncertainties to localize the true values of measured quantities - a perspective decisive in vi...
Measurement Uncertainty for Finite Quantum Observables
Directory of Open Access Journals (Sweden)
René Schwonnek
2016-06-01
Full Text Available Measurement uncertainty relations are lower bounds on the errors of any approximate joint measurement of two or more quantum observables. The aim of this paper is to provide methods to compute optimal bounds of this type. The basic method is semidefinite programming, which we apply to arbitrary finite collections of projective observables on a finite dimensional Hilbert space. The quantification of errors is based on an arbitrary cost function, which assigns a penalty to getting result x rather than y, for any pair (x, y). This induces a notion of optimal transport cost for a pair of probability distributions, and we include an Appendix with a short summary of optimal transport theory as needed in our context. There are then different ways to form an overall figure of merit from the comparison of distributions. We consider three, which are related to different physical testing scenarios. The most thorough test compares the transport distances between the marginals of a joint measurement and the reference observables for every input state. Less demanding is a test just on the states for which a “true value” is known in the sense that the reference observable yields a definite outcome. Finally, we can measure a deviation as a single expectation value by comparing the two observables on the two parts of a maximally-entangled state. All three error quantities have the property that they vanish if and only if the tested observable is equal to the reference. The theory is illustrated with some characteristic examples.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
Review of Prior U.S. Attribute Measurement Systems
Energy Technology Data Exchange (ETDEWEB)
White, G K
2012-07-06
Attribute Measurement Systems have been developed and demonstrated several times in the United States over the last decade or so: under the Trilateral Initiative (1996-2002), the FMTTD (Fissile Material Transparency Technology Demonstration, 2000), and NG-AMS (Next Generation Attribute Measurement System, 2005-2008). Each Attribute Measurement System has contributed to the growing body of knowledge regarding the use of such systems in warhead dismantlement and other arms control scenarios. The Trilateral Initiative, besides developing prototype hardware and software, introduced the topic to the international community; the 'trilateral' parties were the United States, the Russian Federation, and the International Atomic Energy Agency (IAEA). With the participation of a Russian delegation, the FMTTD demonstrated that measurements behind an information barrier are feasible while meeting host-party security requirements. The NG-AMS system explored the consequences of maximizing the use of Commercial Off-The-Shelf (COTS) equipment, which made construction easier but authentication harder. The 3rd Generation Attribute Measurement System (3G-AMS) will extend the scope of previous systems by including additional attributes and more rigor in authentication.
Attribute measurement systems prototypes and equipment in the United States
International Nuclear Information System (INIS)
Since the fall of 1997, the United States has been developing prototypical attribute verification technology for potential use by the International Atomic Energy Agency (IAEA) under the Trilateral Initiative. The first attribute measurement equipment demonstration took place in December 1997 at the Lawrence Livermore National Laboratory. This demonstration led to a series of joint Russian Federation/US/IAEA technical discussions that focused on attribute measurement technology that could be applied to plutonium-bearing items having classified characteristics. A first prototype attribute verification system with an information barrier was demonstrated at a Trilateral Technical Workshop in June 1999 at Los Alamos. This prototype nourished further fruitful discussions between the three parties, which in turn led to the documents discussed in a previous paper. Prototype development has continued in the US, under other initiatives, using an integrated approach that includes the Trilateral Initiative. Specifically for the Trilateral Initiative, US development has turned to some peripheral equipment that would support verifications by the IAEA. This equipment includes an authentication tool for measurement systems with information barriers and in situ probes that would facilitate inspections by reducing the need to move material out of storage locations for reverification. In this paper, we will first summarize the development of attribute verification measurement system technology in the US and then report on the status of the development of other equipment to support the Trilateral Initiative.
Attribute measure recognition approach and its applications to emitter recognition
Institute of Scientific and Technical Information of China (English)
GUAN Xin; HE You; YI Xiao
2005-01-01
This paper studies the emitter recognition problem and puts forward a new recognition method based on attribute measure. The steps of the method are presented, and the approach to determining the weight coefficients is discussed. Moreover, considering the temporal redundancy of emitter information detected by a multi-sensor system, the new recognition method is generalized to multi-sensor systems, and a method based on the combination of attribute measure and D-S evidence theory is proposed. The implementation of D-S reasoning is always restricted by the basic probability assignment function; constructing the basic probability assignment function from attribute measures is presented for the multi-sensor recognition system. Examples of recognizing the emitter purpose and system are selected to demonstrate the proposed method. Experimental results show that the performance of the new method is accurate and effective.
Automating Measurement for Software Process Models using Attribute Grammar Rules
Directory of Open Access Journals (Sweden)
Abdul Azim Abd. Ghani
2007-08-01
Full Text Available The modelling concept is well accepted in the software engineering discipline. Some software models are built to control the development stages, to measure program quality, or to serve as a medium that gives better understanding of the actual software systems. Software process modelling has now reached a level that allows software designs to be transformed into programming languages, such as architecture description languages and the unified modelling language. This paper describes the adaptation of the attribute grammar approach to measuring software process models. A tool, called the Software Process Measurement Application, was developed to enable measurement according to specified attribute grammar rules. A context-free grammar to read the process model is derived from the IDEF3 standard, and rules were attached to enable calculation of the measurement metrics. The metric values collected were used to aid in determining the decomposition and structuring of processes for the proposed software systems.
Quantum measurement and uncertainty relations in photon polarization
Edamatsu, Keiichi
2016-07-01
Recent theoretical and experimental studies have given rise to new aspects of quantum measurements and error-disturbance uncertainty relations. After a brief review of these issues, we present an experimental test of the error-disturbance uncertainty relations in photon polarization measurements. Using a generalized, strength-variable measurement of a single-photon polarization state, we experimentally evaluate the error and disturbance in the measurement process and demonstrate the validity of recently proposed uncertainty relations.
Using a Meniscus to Teach Uncertainty in Measurement
Backman, Philip
2008-01-01
I have found that students easily understand that a measurement cannot be exact, but they often seem to lack an understanding of why it is important to know "something" about the magnitude of the uncertainty. This tends to promote an attitude that almost any uncertainty value will do. Such indifference may exist because once an uncertainty is…
Adaptive framework for uncertainty analysis in electromagnetic field measurements.
Prieto, Javier; Alonso, Alonso A; de la Rosa, Ramón; Carrera, Albano
2015-04-01
Misinterpretation of uncertainty in the measurement of electromagnetic field (EMF) strength may lead to an underestimation of exposure risk or an overestimation of required measurements. The Guide to the Expression of Uncertainty in Measurement (GUM) has been internationally adopted as a de facto standard for uncertainty assessment. However, analyses under this approach commonly assume unrealistic static models or neglect relevant prior information, resulting in non-robust uncertainties. This study proposes a principled and systematic framework for uncertainty analysis that fuses information from current measurements and prior knowledge. The framework dynamically adapts to data by exploiting a likelihood function based on kernel mixtures and incorporates flexible choices of prior information by applying importance sampling. The validity of the proposed techniques is assessed with measurements performed with a broadband radiation meter and an isotropic field probe. The developed framework significantly outperforms the GUM approach, achieving a 28% reduction in measurement uncertainty.
Hwang, Rong-Jen; Rogers, Craig; Beltran, Jada; Razatos, Gerasimos; Avery, Jason
2016-06-01
Reporting a measurement uncertainty helps to determine the limitations of the method of analysis and aids in laboratory accreditation. This laboratory conducted a study to estimate a reasonable uncertainty for the mass concentration of vaporous ethanol, in g/210 L, measured by the Intoxilyzer(®) 8000 breath analyzer. The uncertainty sources used were: gas chromatograph (GC) calibration adjustment, GC analytical, certified reference material, Intoxilyzer(®) 8000 calibration adjustment and Intoxilyzer(®) 8000 analytical. Standard uncertainties attributed to these sources were calculated and separated into proportional and constant standard uncertainties. The combined proportional and constant standard uncertainties were further combined into an expanded uncertainty, expressed both as a percentage and as a unit. To prevent any underreporting of the expanded uncertainty, 0.10 g/210 L was chosen as the defining point for expressing it. For the Intoxilyzer(®) 8000, for all vaporous ethanol results at or above 0.10 g/210 L the expanded uncertainty will be reported as ±3.6% at a confidence level of 95% (k = 2); for vaporous ethanol results below 0.10 g/210 L, the expanded uncertainty will be reported as ±0.0036 g/210 L at a confidence level of 95% (k = 2).
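Using only the figures quoted in the abstract, the two-branch reporting rule (proportional at or above the 0.10 g/210 L defining point, constant below it) can be written directly; the function name and defaults are illustrative.

```python
def expanded_uncertainty(result, k2_percent=3.6, defining_point=0.10):
    """Expanded uncertainty (k = 2, ~95%) for a vaporous ethanol result
    in g/210 L: proportional at or above the defining point, constant
    (the defining point's absolute value) below it."""
    if result >= defining_point:
        return result * k2_percent / 100.0
    return defining_point * k2_percent / 100.0   # = 0.0036 g/210 L

high = expanded_uncertainty(0.16)   # proportional branch
low = expanded_uncertainty(0.05)    # constant branch
```

The constant branch guarantees that results below the defining point are never reported with an unrealistically small absolute uncertainty.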
Measurement uncertainty of lactase-containing tablets analyzed with FTIR.
Paakkunainen, Maaret; Kohonen, Jarno; Reinikainen, Satu-Pia
2014-01-01
Uncertainty is one of the most critical aspects in the determination of measurement reliability. In order to ensure accurate measurements, results need to be traceable and uncertainty measurable. In this study, the homogeneity of FTIR samples is determined with a combination of a variographic and a multivariate approach. An approach for the estimation of uncertainty within an individual sample, as well as within repeated samples, is introduced. FTIR samples containing two commercial pharmaceutical lactase products (LactaNON and Lactrase) are used as an example of the procedure. The results showed that the approach is suitable for the purpose. The sample pellets were quite homogeneous, since the total uncertainty of each pellet varied between 1.5% and 2.5%. The heterogeneity within a tablet strip was found to be dominant, as 15-20 tablets have to be analyzed in order to achieve an acceptable uncertainty level. The uncertainty estimates are computed directly from the FTIR spectra without any concentration information on the analyte.
Uncertainty budget for optical coordinate measurements of circle diameter
DEFF Research Database (Denmark)
Morace, Renate Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo
2004-01-01
An uncertainty analysis for circle diameter measurements using a coordinate measuring machine (CMM) equipped with an optical probe is presented in this paper. A mathematical model for data evaluation and uncertainty assessment was formulated in accordance with the Guide to the Expression of Uncertainty in Measurement (GUM). Various input quantities, such as CCD camera resolution, influence of the illuminating system, and CMM errors, were considered in the model function and experimentally investigated.
The uncertainties of magnetic properties measurements of electrical sheet steel
Ahlers, H
2000-01-01
In this work, uncertainties in measurements of the magnetic properties of Epstein and single-sheet samples have been determined according to the Guide to the Expression of Uncertainty in Measurement [International Organization for Standardization (1993)]. They were calculated for results at predicted parameter values, taking into account the non-linear dependences. The measurement results and the uncertainties are calculated simultaneously by a computer program.
Evaluation of an attributive measurement system in the automotive industry
Simion, C.
2016-08-01
Measurement System Analysis (MSA) is a critical component of any quality improvement process. MSA is defined as an experimental and mathematical method of determining how much of the variation within the measurement process contributes to overall process variability, and it falls into two categories: attribute and variable. The most problematic measurement system issues come from measuring attribute data, which are usually the result of human judgment (visual inspection). Because attributive measurement systems are often used in manufacturing processes, their assessment is important to obtain confidence in the inspection process, to see where the problems are in order to eliminate them, and to guide process improvement. It was the aim of this paper to address such an issue, presenting a case study made in a local company from the Sibiu region supplying products for the automotive industry, specifically the bag (a technical textile component, i.e. the fabric) for the airbag module. Because defects are inherent in every manufacturing process, and in the field of airbag systems a minor defect can influence performance and lives depend on this safety feature, stringent visual inspection of defects in the bag material is required. The purpose of this attribute MSA was: to determine whether all inspectors use the same criteria to separate "pass" from "fail" product (i.e. the fabric); to assess company inspection standards against the customer's requirements; to determine how consistent inspectors are with themselves; to identify how inspectors conform to a "known master", including how often operators ship defective product and how often operators dispose of acceptable product; and to discover areas where training is required, procedures must be developed, and standards are not available. The results were analyzed using MINITAB software with its Attribute Agreement Analysis module. The conclusion was that the inspection process must
Uncertainty Measures in Ordered Information System Based on Approximation Operators
Directory of Open Access Journals (Sweden)
Bingjiao Fan
2014-01-01
Full Text Available This paper focuses on constructing uncertainty measures by the pure rough set approach in ordered information systems. Four types of definitions of lower and upper approximations and the corresponding uncertainty measurement concepts, including accuracy, roughness, approximation quality, approximation accuracy, dependency degree, and importance degree, are investigated. Theoretical analysis indicates that all four types can be used to evaluate the uncertainty in ordered information systems; in particular, we find that the essence of the first type and the third type is the same. To interpret and help understand the approach, experiments on real-life data sets have been conducted to test the four types of uncertainty measures. The results obtained show that these uncertainty measures can indeed measure the uncertainty in ordered information systems.
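A minimal sketch of two of these measures, accuracy and roughness, computed from lower and upper approximations. For simplicity this uses the classical (unordered) rough-set granulation by a partition; the dominance-based approximations used in ordered information systems would replace the equivalence classes below with dominance classes.

```python
def approximations(classes, target):
    """Lower and upper approximations of `target` under the partition
    `classes` (a list of equivalence classes, given as sets)."""
    lower = {x for c in classes if c <= target for x in c}
    upper = {x for c in classes if c & target for x in c}
    return lower, upper

def accuracy(lower, upper):
    """Accuracy measure |lower| / |upper|; roughness = 1 - accuracy."""
    return len(lower) / len(upper)

classes = [{1, 2}, {3, 4}, {5, 6}]   # granulation of the universe {1,...,6}
X = {1, 2, 3}                        # target concept
lo, up = approximations(classes, X)
```

Here the class {3, 4} straddles the boundary of X, so the lower approximation is {1, 2}, the upper is {1, 2, 3, 4}, and the accuracy is 0.5 (roughness 0.5).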
International Target Values for Measurement Uncertainties in Nuclear Material Accountancy
Institute of Scientific and Technical Information of China (English)
LIU; Hong-bin; GAO; Qiang
2012-01-01
The IAEA published a revised version of the International Target Values (ITVs) for Measurement Uncertainties in Safeguarding Nuclear Materials in 2010. The report proposes international target values for the measurement uncertainties of the routine measurement methods used in nuclear material accountancy.
Uncertainty Quantification for Quantitative Imaging Holdup Measurements
Energy Technology Data Exchange (ETDEWEB)
Bevill, Aaron M [ORNL; Bledsoe, Keith C [ORNL
2016-01-01
In nuclear fuel cycle safeguards, special nuclear material "held up" in pipes, ducts, and glove boxes causes significant uncertainty in material-unaccounted-for estimates. Quantitative imaging is a proposed non-destructive assay technique with potential to estimate the holdup mass more accurately and reliably than current techniques. However, uncertainty analysis for quantitative imaging remains a significant challenge. In this work we demonstrate an analysis approach for data acquired with a fast-neutron coded aperture imager. The work includes a calibrated forward model of the imager. Cross-validation indicates that the forward model predicts the imager data typically within 23%; further improvements are forthcoming. A new algorithm based on the chi-squared goodness-of-fit metric then uses the forward model to calculate a holdup confidence interval. The new algorithm removes geometry approximations that previous methods require, making it a more reliable uncertainty estimator.
Plutonium Attribute Estimation From Passive NMIS Measurements at VNIIEF
International Nuclear Information System (INIS)
Currently, the most relevant application of NMIS for plutonium attribute estimation stems from measurements performed jointly by Oak Ridge National Laboratory (ORNL) and Russian Federal Nuclear Center, All-Russia Scientific Research Institute of Experimental Physics (RFNC-VNIIEF) personnel at RFNC-VNIIEF facilities in Sarov, Russia in June and July 2000. During these measurements at VNIIEF, NMIS was applied in its passive mode to eight unclassified plutonium spherical shells. The shells' properties spanned the following ranges: Composition: δ-phase plutonium metal, constant; Relative 240Pu content (f240Pu): f240Pu = 1.77% (g 240Pu/g Pu), constant; Inner radius (r1): 10.0 mm ≤ r1 ≤ 53.5 mm, mean r1 = 33.5 mm; Outer radius (r2): 31.5 mm ≤ r2 ≤ 60.0 mm, mean r2 = 46.6 mm; Radial thickness (Δr): 6.4 mm ≤ Δr ≤ 30.2 mm, mean Δr = 13.1 mm; and Plutonium mass (mPu): 1829 g ≤ mPu ≤ 4468 g, mean mPu = 3265 g. The features of these measurements were analyzed to extract the attributes of each plutonium shell. Given that the samples measured were of constant composition, geometry, and relative 240Pu content, each shell is completely described by any two of the following four properties: Inner radius r1; Outer radius r2; Mass m, one of 239Pu mass m239Pu, 240Pu mass m240Pu, or total Pu mass mPu; and Radial thickness Δr. Of these, generally only mass is acknowledged as an attribute of interest; the second property (whichever is chosen) can be considered to be a parameter of the attribute-estimation procedure, much as multiplication is a parameter necessary to accurately estimate fissile mass via most neutron measurements
Use of Commercially Available Software in an Attribute Measurement System.
Energy Technology Data Exchange (ETDEWEB)
MacArthur, D. W. (Duncan W.); Bracken, D. S. (David S.); Carrillo, L. A. (Louis A.); Elmont, T. H. (Timothy H.); Frame, K. C. (Katherine C.); Hirsch, K. L. (Karen L.)
2005-01-01
A major issue in international safeguards of nuclear materials is the ability to verify that processes and materials in nuclear facilities are consistent with declaration without revealing sensitive information. An attribute measurement system (AMS) is a non-destructive assay (NDA) system that utilizes an information barrier to protect potentially sensitive information about the measurement item. A key component is the software utilized for operator interface, data collection, analysis, and attribute determination, as well as the operating system under which they are implemented. Historically, custom software has been used almost exclusively in transparency applications, and it is unavoidable that some amount of custom software is needed. The focus of this paper is to explore the extent to which commercially available software may be used and the relative merits.
Measurement uncertainty of liquid chromatographic analyses visualized by Ishikawa diagrams.
Meyer, Veronika R
2003-09-01
Ishikawa, or cause-and-effect, diagrams help to visualize the parameters that influence a chromatographic analysis. They therefore facilitate setting up the uncertainty budget of the analysis, which can then be expressed in mathematical form. If the uncertainty is calculated as the Gaussian sum of all uncertainty parameters, it is necessary to quantitate them all, a task that is usually not practical. The other possible approach is to use the intermediate precision as the basis for the uncertainty calculation. In this case, it is at least necessary to consider the uncertainty of the purity of the reference material in addition to the precision data. The Ishikawa diagram is then very simple, and so is the uncertainty calculation. This simplicity, however, comes at the cost of losing information about the parameters that influence the measurement uncertainty.
Image Reinforcement or Impairment: The Effects of Co-Branding on Attribute Uncertainty
Tansev Geylani; J. Jeffrey Inman; Frenkel Ter Hofstede
2008-01-01
Co-branding is often used by companies to reinforce the image of their brands. In this paper, we investigate the conditions under which a brand's image is reinforced or impaired as a result of co-branding, and the characteristics of a good partner for a firm considering co-branding for image reinforcement. We address these issues by conceptualizing attribute beliefs as two-dimensional constructs: The first dimension reflects the expected value of the attribute, while the second dimension refl...
Dimensional measurements with submicrometer uncertainty in production environment
DEFF Research Database (Denmark)
De Chiffre, L.; Gudnason, M. M.; Madruga, D.
2015-01-01
The work concerns a laboratory investigation of a method to achieve dimensional measurements with submicrometer uncertainty under conditions that are typical of a production environment. The method involves the concurrent determination of dimensions and material properties from measurements carried...... gauge blocks along with their uncertainties were estimated directly from the measurements. The length of the two workpieces at the reference temperature of 20 °C was extrapolated from the measurements and compared to certificate values. The investigations have documented that the developed approach...... and laboratory equipment allow traceable length measurements with expanded uncertainties (k=2) below 1 μm....
UNCERTAINTY AND ITS IMPACT ON THE QUALITY OF MEASUREMENT
Directory of Open Access Journals (Sweden)
Adel Elahdi M. Yahya
2012-01-01
Full Text Available Current practice requires that laboratory measurement and calibration be approved by national or international accreditation bodies and comply with the requirements of ISO 17025 for the accreditation of competent laboratories. Those requirements include stating, in the measurement certificate, the uncertainty limits of the testing or measurement process, which assures the customer of the quality and efficiency of the measurement. In this study we clarify what uncertainty in measurement is, the standard types of uncertainty, and how to calculate an uncertainty budget, and we show worked examples of the calculation for some length measurements in the laboratory. Analyzing the results obtained during measurement with a CMM, we found that the type B (non-statistical) uncertainty in measuring a piece of one metre length was ±1.9257 µm. When using the configuration measuring device, the expanded combined standard uncertainty was ±2.030 µm when measuring a screw value of 1.2707 mm. We concluded that uncertainty has a greater impact on results measured to a high degree of fineness and less impact on a piece with low fineness, that careful calibration of measuring instruments and equipment against measurement standards is of the utmost importance, and that laboratories must calculate the uncertainty budget as part of measurement evaluation to provide high-quality measurement results.
Strain gauge measurement uncertainties on hydraulic turbine runner blade
International Nuclear Information System (INIS)
Strains experimentally measured with strain gauges can differ from those evaluated using the Finite Element (FE) method. This difference is due mainly to the assumptions and uncertainties inherent to each method. To circumvent this difficulty, we developed a numerical method based on Monte Carlo simulations to evaluate measurement uncertainties produced by the behaviour of a unidirectional welded gauge, its position uncertainty and its integration effect. This numerical method uses the displacement fields of the studied part evaluated by an FE analysis. The paper presents a study case using in situ data measured on a hydraulic turbine runner. The FE analysis of the turbine runner blade was computed, and our numerical method used to evaluate uncertainties on strains measured at five locations with welded strain gauges. Then, measured strains and their uncertainty ranges are compared to the estimated strains. The uncertainty ranges obtained extended from 74 με to 165 με. Furthermore, the biases observed between the median of the uncertainty ranges and the FE strains varied from −36 to 36 με. Note that strain gauge measurement uncertainties depend mainly on displacement fields and gauge geometry.
Extended component importance measures considering aleatory and epistemic uncertainties
Sallak, Mohamed; Schon, Walter; Aguirre, Felipe
2013-01-01
International audience In this paper, extended component importance measures (Birnbaum importance, RAW, RRW and Criticality importance) considering aleatory and epistemic uncertainties are introduced. The D-S theory, which is considered to be a less restrictive extension of probability theory, is proposed as a framework for taking into account both aleatory and epistemic uncertainties. The epistemic uncertainty defined in this paper is the total lack of knowledge of the component state. The...
Estimating the measurement uncertainty in forensic blood alcohol analysis.
Gullberg, Rod G
2012-04-01
For many reasons, forensic toxicologists are being asked to determine and report their measurement uncertainty in blood alcohol analysis. While understood conceptually, the elements and computations involved in determining measurement uncertainty are generally foreign to most forensic toxicologists. Several established and well-documented methods are available to determine and report the uncertainty in blood alcohol measurement. A straightforward bottom-up approach is presented that includes: (1) specifying the measurand, (2) identifying the major components of uncertainty, (3) quantifying the components, (4) statistically combining the components and (5) reporting the results. A hypothetical example is presented that employs reasonable estimates for forensic blood alcohol analysis assuming headspace gas chromatography. These computations are easily employed in spreadsheet programs as well. Determining and reporting measurement uncertainty is an important element in establishing fitness-for-purpose. Indeed, the demand for such computations and information from the forensic toxicologist will continue to increase.
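Steps (3)-(5) of the bottom-up approach reduce to combining component uncertainties in quadrature and applying a coverage factor; this is also the kind of computation easily done in a spreadsheet. The component names and values below are invented for illustration and are not the paper's worked example.

```python
import math

# (1) Specify the measurand: blood alcohol concentration, g/100 mL.
result = 0.142
# (2)-(3) Identify and quantify major components as relative standard
#         uncertainties (all values invented for illustration).
components = {
    "calibrator": 0.008,
    "method_precision": 0.012,
    "sampling_dilution": 0.006,
    "traceability": 0.005,
}
# (4) Combine independent components in quadrature (root sum of squares).
u_rel = math.sqrt(sum(u * u for u in components.values()))
# (5) Report the expanded uncertainty with coverage factor k = 2 (about 95%).
U = 2 * u_rel * result
report = f"{result:.3f} +/- {U:.3f} g/100 mL (k=2)"
```

The quadrature combination assumes the components are independent; correlated components would require covariance terms.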
OPEN PUBLIC SPACE ATTRIBUTES AND CATEGORIES – COMPLEXITY AND MEASURABILITY
Directory of Open Access Journals (Sweden)
Ljiljana Čavić
2014-12-01
Full Text Available Within the field of architectural and urban research, this work addresses the complexity of contemporary public space, both in a conceptual and concrete sense. It aims at systematizing spatial attributes and their categories and discussing spatial complexity and measurability, all this in order to reach a more comprehensive understanding, description and analysis of public space. Our aim is to improve everyday usage of open public space and we acknowledged users as its crucial factor. There are numerous investigations on the complex urban and architectural reality of public space that recognise importance of users. However, we did not find any that would holistically account for what users find essential in public space. Based on the incompleteness of existing approaches on open public space and the importance of users for their success, this paper proposes a user-orientated approach. Through an initial survey directed to users, we collected the most important aspects of public spaces in the way that contemporary humans see them. The gathered data is analysed and coded into spatial attributes from which their role in the complexity of open public space and measurability are discussed. The work results in an inventory of attributes that users find salient in public spaces. It does not discuss their qualitative values or contribution in generating spatial realities. It aims to define them clearly so that any further logical argumentation on open space concerning users may be solidly constructed. Finally, through categorisation of attributes it proposes the disciplinary levels necessary for the analysis of complex urban-architectural reality
Assessment of dose measurement uncertainty using RisøScan
DEFF Research Database (Denmark)
Helt-Hansen, J.; Miller, A.
2006-01-01
The dose measurement uncertainty of the dosimeter system RisoScan, office scanner and Riso B3 dosimeters has been assessed by comparison with spectrophotometer measurements of the same dosimeters. The reproducibility and the combined uncertainty were found to be approximately 2% and 4%, respectively, at one standard deviation. The subroutine in RisoScan for electron energy measurement is shown to give results that are equivalent to measurements with a scanning spectrophotometer. (c) 2006 Elsevier Ltd. All rights reserved.
Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint
Energy Technology Data Exchange (ETDEWEB)
Habte, A.; Sengupta, M.; Reda, I.; Andreas, A.; Konings, J.
2014-11-01
Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.
Point-In-Time Measurement Uncertainty Recapture for RCS Flow
Energy Technology Data Exchange (ETDEWEB)
Jung, Byung Ryul; Jang, Ho Cheol; Yune, Seok Jeong; Kim, Eun Kee [Korea Power Engineering and Construction Company, Inc., Daejeon (Korea, Republic of)
2014-10-15
In nuclear power plants, RCS flow measurement uncertainty plays an important role in establishing flow acceptance criteria. The narrow band of acceptance criteria based on the design limiting uncertainty of the measured RCS flow may lead to a point-in-time violation of the acceptance criteria when the measured flow is too close to the upper limit of the allowable RCS flow operating band. The measured RCS flow may also approach the lower limit of the acceptance criteria as the operating cycle proceeds. Several measurement uncertainty recapture methods for RCS flow can be attempted in a point-in-time situation in which the acceptance criteria are not met, and a combination of these methods can be utilized to establish a design limiting measurement uncertainty. Possible and practical methods to recapture the RCS flow measurement uncertainty are proposed for use in such situations; they can also serve as a design basis methodology to establish the design limiting uncertainty. It is worth noting that the hot and cold leg temperatures have additional redundancy, such as the wide range instrument channels, so the measured operating conditions for RCS flow have potential for further recapture. The more of these recapture methods are applied, the more the uncertainty recapture can be improved.
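One generic recapture mechanism consistent with the channel redundancy noted above is inverse-variance combination of redundant readings (e.g. narrow-range and wide-range temperature instruments). This is a standard statistical sketch, not the plant-specific method of the paper; the readings and uncertainties below are invented.

```python
import math

def combine_redundant(x1, u1, x2, u2):
    """Inverse-variance weighted estimate from two independent channels
    measuring the same quantity; the combined uncertainty is always
    smaller than that of the better single channel."""
    w1, w2 = 1.0 / u1 ** 2, 1.0 / u2 ** 2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    u = math.sqrt(1.0 / (w1 + w2))
    return x, u

# Narrow-range vs. wide-range hot-leg temperature readings (values invented).
x, u = combine_redundant(325.1, 0.5, 324.8, 1.2)
```

Even a much less accurate second channel tightens the combined uncertainty below the 0.5 of the better single channel, which is the sense in which redundancy "recaptures" measurement uncertainty.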
Attributes measurements by calorimetry in 15 to 30 minutes
International Nuclear Information System (INIS)
An analysis of the early portion of the power-history data collected with both of the IAEA's air-cooled bulk calorimeters has demonstrated that such calorimeters can measure the power from preheated containers of plutonium oxide with an accuracy of 2-5% in 15 to 30 minutes. Material accountancy at plutonium facilities has a need for such a capability for measurement of Pu scrap. Also, the IAEA could use just two calorimeters and a gamma-ray assay system for reliable variables and attributes measurements of plutonium mass during a two-day physical-inventory verification (PIV) at a mixed-oxide (MOX) fuel-fabrication facility. The assay results would be free of the concerns about sample moisture, impurities, and geometry that previously have limited the accuracy of assays based on neutron measurements
Energy Technology Data Exchange (ETDEWEB)
Li, Shuai; Xiong, Lihua; Li, Hongyi; Leung, Lai-Yung R.; Demissie, Yonas
2016-01-08
Hydrological simulations to delineate the impacts of climate variability and human activities are subject to uncertainties in both the parameters and the structure of the hydrological models. To analyze the impact of these uncertainties on model performance and to yield more reliable simulation results, a global calibration and multimodel combination method was proposed that integrates the Shuffled Complex Evolution Metropolis (SCEM) algorithm with Bayesian Model Averaging (BMA) of four monthly water balance models. The method was applied to the Weihe River Basin (WRB), the largest tributary of the Yellow River, to determine the contribution of climate variability and human activities to runoff changes. The change point, which was used to determine the baseline period (1956-1990) and the human-impacted period (1991-2009), was derived using both the cumulative curve and Pettitt's test. Results show that the combination method provides more skillful deterministic predictions than the best calibrated individual model, resulting in the smallest uncertainty interval of runoff changes attributed to climate variability and human activities. This combination methodology provides a practical and flexible tool for attributing runoff changes to climate variability and human activities with hydrological models.
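The BMA step can be sketched for a single time step: the predictive mean is the weight-averaged member prediction, and the predictive variance splits into between-model spread plus weighted within-model variance. The weights (which in the paper's setup would come from the SCEM-sampled posterior), predictions, and spreads below are invented.

```python
def bma_mean_var(preds, sigmas, weights):
    """BMA predictive mean and variance at one time step.
    Total variance = between-model spread + weighted within-model variance."""
    mean = sum(w * p for w, p in zip(weights, preds))
    between = sum(w * (p - mean) ** 2 for w, p in zip(weights, preds))
    within = sum(w * s ** 2 for w, s in zip(weights, sigmas))
    return mean, between + within

# Monthly runoff (mm) predicted by four water balance models (values invented).
preds = [42.0, 38.5, 45.2, 40.1]
sigmas = [3.0, 2.5, 4.0, 2.8]
weights = [0.35, 0.25, 0.15, 0.25]
mean, var = bma_mean_var(preds, sigmas, weights)
```

The between-model term is what shrinks when the member models agree, which is why a well-weighted ensemble can yield a narrower uncertainty interval than any single calibrated model.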
Designing a 3rd generation, authenticatable attribute measurement system
International Nuclear Information System (INIS)
Attribute measurement systems (AMS) are designed to measure potentially sensitive items containing Special Nuclear Materials to determine if the items possess attributes which fall within an agreed-upon range. Such systems could be used in a treaty to inspect and verify the identity of items in storage without revealing any sensitive information associated with the item. An AMS needs to satisfy two constraints: the host party needs to be sure that none of their sensitive information is released, while the inspecting party wants to have confidence that the limited amount of information they see accurately reflects the properties of the item being measured. The former involves 'certifying' the system and the latter 'authenticating' it. Previous work into designing and building AMS systems have focused more on the questions of certifiability than on the questions of authentication - although a few approaches have been investigated. The next step is to build a 3rd generation AMS which (1) makes the appropriate measurements, (2) can be certified, and (3) can be authenticated (the three generations). This paper will discuss the ideas, options, and process of producing a design for a 3rd generation AMS.
Instrumental measurement of beer taste attributes using an electronic tongue
Energy Technology Data Exchange (ETDEWEB)
Rudnitskaya, Alisa, E-mail: alisa.rudnitskaya@gmail.com [Chemistry Department, University of Aveiro, Aveiro (Portugal); Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation); Polshin, Evgeny [Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation); BIOSYST/MeBioS, Catholic University of Leuven, W. De Croylaan 42, B-3001 Leuven (Belgium); Kirsanov, Dmitry [Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation); Lammertyn, Jeroen; Nicolai, Bart [BIOSYST/MeBioS, Catholic University of Leuven, W. De Croylaan 42, B-3001 Leuven (Belgium); Saison, Daan; Delvaux, Freddy R.; Delvaux, Filip [Centre for Malting and Brewing Sciences, Katholieke Universiteit Leuven, Heverelee (Belgium); Legin, Andrey [Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation)
2009-07-30
The present study deals with the evaluation of the electronic tongue multisensor system as an analytical tool for the rapid assessment of the taste and flavour of beer. Fifty samples of Belgian and Dutch beers of different types (lager beers, ales, wheat beers, etc.), which were characterized with respect to their sensory properties, were measured using the electronic tongue (ET) based on potentiometric chemical sensors developed in the Laboratory of Chemical Sensors of St. Petersburg University. The analysis of the sensory data and the calculation of the compromise average scores were made using STATIS. The beer samples were discriminated using both sensory panel and ET data based on PCA, and the two data sets were compared using Canonical Correlation Analysis. The ET data were related to the sensory beer attributes using Partial Least Squares regression for each attribute separately. Validation was done on a test set comprising one-third of all samples. The ET was capable of predicting with good precision 20 sensory attributes of beer, including bitter, sweet, sour, fruity, caramel, artificial, burnt, intensity and body.
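The attribute-by-attribute PLS step can be sketched with a minimal NIPALS PLS1 in plain NumPy. A real analysis would use a validated chemometrics package, proper preprocessing, and the held-out test set the authors describe; the sensor matrix and "bitter" scores below are synthetic.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1 for centered X (n x m) and y (n,).
    Returns regression coefficients b such that X @ b approximates y."""
    Xr, yr = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr                 # weight: covariance direction with y
        w /= np.linalg.norm(w)
        t = Xr @ w                    # scores
        tt = t @ t
        p = Xr.T @ t / tt             # X loadings
        qk = (yr @ t) / tt            # y loading
        Xr = Xr - np.outer(t, p)      # deflate X and y
        yr = yr - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                      # 30 beers x 5 sensor channels
y = X @ np.array([1.0, -0.5, 0.0, 2.0, 0.3])      # synthetic "bitter" scores
X = X - X.mean(axis=0)
y = y - y.mean()
b = pls1_fit(X, y, n_comp=5)
```

With all five components on noiseless data, PLS coincides with ordinary least squares, so the fitted values reproduce y; in practice the number of components is chosen by cross-validation.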
Measuring the Gas Constant "R": Propagation of Uncertainty and Statistics
Olsen, Robert J.; Sattar, Simeen
2013-01-01
Determining the gas constant "R" by measuring the properties of hydrogen gas collected in a gas buret is well suited for comparing two approaches to uncertainty analysis using a single data set. The brevity of the experiment permits multiple determinations, allowing for statistical evaluation of the standard uncertainty u[subscript…
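The propagation-of-uncertainty approach for R = PV/(nT) can be illustrated with a standard GUM-style quadrature combination of relative uncertainties. The numerical values below are hypothetical buret readings for illustration, not the authors' data:

```python
import math

def gas_constant_uncertainty(P, uP, V, uV, n, un, T, uT):
    """Propagate standard uncertainties through R = PV/(nT).

    For a pure product/quotient, relative standard uncertainties
    add in quadrature:
    (uR/R)^2 = (uP/P)^2 + (uV/V)^2 + (un/n)^2 + (uT/T)^2.
    """
    R = P * V / (n * T)
    rel = math.sqrt((uP / P) ** 2 + (uV / V) ** 2 + (un / n) ** 2 + (uT / T) ** 2)
    return R, R * rel

# Hypothetical data: P in Pa, V in m^3, n in mol, T in K.
R, uR = gas_constant_uncertainty(101325, 200, 2.45e-4, 5e-7, 0.0100, 5e-5, 295.0, 0.5)
print(round(R, 3), round(uR, 3))
```

Repeating the experiment then allows the second approach in the abstract: comparing this propagated uncertainty with the standard deviation of the repeated determinations.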
Uncertainty and sensitivity analysis and its applications in OCD measurements
Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio
2009-03-01
This article describes an Uncertainty and Sensitivity Analysis package, a mathematical tool that can be an effective shortcut for optimizing OCD models. By including real system noises in the model, an accurate method for predicting measurement uncertainties is demonstrated. Assessing, at an early stage, the uncertainties, sensitivities and correlations of the parameters to be measured guides the user in optimizing the OCD measurement strategy. Real examples are discussed, revealing common pitfalls such as hidden correlations, and simulation results are compared with real measurements. Special emphasis is given to two cases: 1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD), and 2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis results, the right data set and measurement mode (NI-OCD, SE-OCD or NI+SE OCD) can easily be selected to achieve the best OCD model performance.
Estimation of measurement uncertainty arising from manual sampling of fuels.
Theodorou, Dimitrios; Liapis, Nikolaos; Zannikos, Fanourios
2013-02-15
Sampling is an important part of any measurement process and is therefore recognized as an important contributor to the measurement uncertainty. A reliable estimation of the uncertainty arising from sampling of fuels leads to better control of the risks associated with decisions concerning whether product specifications are met or not. The present work describes and compares the results of three empirical statistical methodologies (classical ANOVA, robust ANOVA and range statistics) using data from a balanced experimental design, which includes duplicate samples analyzed in duplicate from 104 sampling targets (petroleum retail stations). These methodologies are used for the estimation of the uncertainty arising from the manual sampling of fuel (automotive diesel) and the subsequent sulfur mass content determination. The results of the three methodologies differ statistically, with the expanded uncertainty of sampling being in the range of 0.34-0.40 mg kg(-1), while the relative expanded uncertainty lies in the range of 4.8-5.1%, depending on the methodology used. The estimate of robust ANOVA (sampling expanded uncertainty of 0.34 mg kg(-1), or 4.8% in relative terms) is considered more reliable because of the presence of outliers within the 104 datasets used for the calculations. Robust ANOVA, in contrast to classical ANOVA and range statistics, accommodates outlying values, lessening their effects on the produced estimates. The results of this work also show that, in the case of manual sampling of fuels, the main contributor to the whole measurement uncertainty is the analytical measurement uncertainty, with the sampling uncertainty accounting for only 29% of the total measurement uncertainty.
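The range-statistics variant of the duplicate design can be sketched as follows. The target count matches the balanced design described above, but the synthetic data, true variance components and d2-based estimators are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic balanced design: 104 targets, duplicate samples, duplicate analyses.
targets, s_samp_true, s_meas_true = 104, 0.20, 0.35
mu = rng.normal(10.0, 1.0, targets)                        # true level per target
samples = mu[:, None] + rng.normal(0, s_samp_true, (targets, 2))
analyses = samples[:, :, None] + rng.normal(0, s_meas_true, (targets, 2, 2))

# Range statistics (d2 = 1.128 for pairs): analysis ranges estimate s_meas;
# ranges of duplicate-sample means estimate sqrt(s_samp^2 + s_meas^2 / 2).
d2 = 1.128
s_meas = np.abs(analyses[:, :, 0] - analyses[:, :, 1]).mean() / d2
means = analyses.mean(axis=2)
s_mean = np.abs(means[:, 0] - means[:, 1]).mean() / d2
s_samp = np.sqrt(max(0.0, s_mean ** 2 - s_meas ** 2 / 2))
print(round(s_meas, 3), round(s_samp, 3))  # expanded sampling uncertainty = 2 * s_samp
```

Classical and robust ANOVA partition the same nested design into the same two variance components but handle outliers differently, which is why the three methods give different estimates on real data.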
International Nuclear Information System (INIS)
The largest source of uncertainty in the calculation of reactor power can be attributed to the limited accuracy and potential for undetected degradation of the conventional flow nozzles and venturis used for feedwater flow measurement. Ultrasonic flow measurement (UFM) installations have been carried out with regulatory approval in PWRs and BWRs in the USA, regulatory approval is being progressed for trial installations on commercial nuclear units in Japan, and installations are being considered for PHWRs in Canada. Installations use permanently mounted chordal measurement transducer arrays in laboratory-calibrated pipe spools to achieve a measurement accuracy of ±0.28%. In addition to high accuracy, the measurement systems have evolved to be highly reliable, with redundancy and self-checking features built in to eliminate failures and the potential for drift and inadvertent overpower conditions. Outputs can be used for thermal power measurement and for feedwater flow process control. The measurement frequency can be set to be compatible with existing systems for thermal power measurement and process control. Contributors to thermal power measurement uncertainty are examined, and the range of potential measurement uncertainty recapture (MUR) is identified. Using industry-accepted practices to carry out MUR calculations, the available thermal power uprate can be predicted. Based on the combined uncertainty of all of the process parameters used in on-line thermal power calculations and the uncertainty assumed in the original licensing basis, available thermal power uprates vary between 1.5 and 2.5% of full power (FP). As the year-to-year power demand in Canada increases, nuclear energy continues to play an essential role in providing secure, stable and affordable electricity. Nuclear energy remains cost-competitive compared to other energy resources while eliminating greenhouse gas emissions. In the last decade, great progress has been achieved in developing new technologies applicable to NPPs, especially
Directory of Open Access Journals (Sweden)
Danuta Owczarek
2015-08-01
The paper presents a method for estimating the uncertainty of optical coordinate measurement based on information about the geometry and size of the measured object as well as about the measurement system, i.e. the maximum permissible error (MPE) of the machine, the sensor selected, the required measurement accuracy, the number of operators, the measurement strategy and the external conditions, all contained in the developed uncertainty database. The uncertainty is estimated using the uncertainties of measurements of basic geometric elements, determined by methods available in the Laboratory of Coordinate Metrology at Cracow University of Technology (LCM CUT) (multi-position, comparative, and a method developed in the LCM CUT dedicated to non-contact measurements), which are then used to determine the uncertainty of a given measured object. The research presented in this paper is aimed at developing a complete database containing all information needed to estimate the measurement uncertainty of various objects, even those of very complex geometry, based on previously performed measurements.
Measurement uncertainty of isotopologue fractions in fluxomics determined via mass spectrometry.
Guerrasio, R; Haberhauer-Troyer, C; Steiger, M; Sauer, M; Mattanovich, D; Koellensperger, G; Hann, S
2013-06-01
Metabolic flux analysis implies mass isotopomer distribution analysis and the determination of mass isotopologue fractions (IFs) of proteinogenic amino acids of cell cultures. In this work, for the first time, this type of analysis is comprehensively investigated in terms of measurement uncertainty by calculating and comparing budgets for different mass spectrometric techniques. The calculations addressed amino acids of Pichia pastoris grown on 10% uniformly (13)C labeled glucose. Typically, such experiments reveal an enrichment of (13)C by at least one order of magnitude in all proteinogenic amino acids. Liquid chromatography-time-of-flight mass spectrometry (LC-TOFMS), liquid chromatography-tandem mass spectrometry (LC-MS/MS) and gas chromatography-mass spectrometry (GC-MS) analyses were performed. The samples were diluted to fit the linear dynamic range of the mass spectrometers used (10 μM amino acid concentration). The total combined uncertainties of IFs, as well as the major uncertainty contributions affecting the IFs, were determined for phenylalanine, which was selected as an exemplary model compound. A bottom-up uncertainty propagation was performed according to the Quantifying Uncertainty in Analytical Measurement guide and using the Monte Carlo method, considering all factors leading to an IF, i.e., the process of measurement and the addition of (13)C-glucose. Excellent relative expanded uncertainties (k = 1) of 0.32, 0.75, and 0.96% were obtained for an IF value of 0.7 by LC-MS/MS, GC-MS, and LC-TOFMS, respectively. The major source of uncertainty, with a relative contribution of 20-80% of the total uncertainty, was attributed to the signal intensity (absolute counts) uncertainty calculated according to Poisson counting statistics, regardless of which of the mass spectrometry platforms was used. Uncertainty due to measurement repeatability was of importance in LC-MS/MS, showing a relative contribution of up to 47% of the total uncertainty, whereas for GC-MS and LC
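The Poisson counting contribution to an isotopologue fraction can be propagated with a Monte Carlo sketch like the one below. The mean ion counts and the three-isotopologue simplification are hypothetical, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mean ion counts for the M+0 .. M+2 isotopologues of a fragment.
mean_counts = np.array([700000.0, 250000.0, 50000.0])

# Monte Carlo propagation of Poisson counting noise (GUM Supplement 1 style):
# draw counts for each isotopologue and form the fraction in every trial.
draws = rng.poisson(mean_counts, size=(100000, 3))
ifs = draws[:, 0] / draws.sum(axis=1)  # isotopologue fraction of M+0

print(ifs.mean(), ifs.std())  # mean IF and its counting-statistics uncertainty
```

With counts this large, the relative standard uncertainty of the fraction lands well below one percent, consistent in order of magnitude with the sub-percent expanded uncertainties quoted above.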
An entropic uncertainty principle for positive operator valued measures
Rumin, Michel
2011-01-01
Extending a recent result by Frank and Lieb, we show an entropic uncertainty principle for mixed states in a Hilbert space relatively to pairs of positive operator valued measures that are independent in some sense.
Directory of Open Access Journals (Sweden)
Miroslav Badida
2008-06-01
Identification of the uncertainties of noise measurements accompanying declared measured values is unconditionally necessary and is required by legislation. The uncertainty of a measurement expresses all errors that accrue during the measuring. By stating uncertainties, the measurer documents that the objective value lies, with a certain probability, within an interval bounded by the measurement uncertainty. The paper deals with the methodology of uncertainty calculation for noise measurements in living and working environments, e.g. in the metal processing and building materials industries.
Triangular and Trapezoidal Fuzzy State Estimation with Uncertainty on Measurements
Directory of Open Access Journals (Sweden)
Mohammad Sadeghi Sarcheshmah
2012-01-01
In this paper, a new method for uncertainty analysis in fuzzy state estimation is proposed. The uncertainty is expressed in the measurements. Uncertainties in measurements are modelled with different fuzzy membership functions (triangular and trapezoidal). To find the fuzzy distribution of any state variable, the problem is formulated as a constrained linear programming (LP) optimization. The viability of the proposed method is verified by comparing its results with those obtained from the weighted least squares (WLS) and fuzzy state estimation (FSE) methods on the 6-bus system and on the IEEE 14- and 30-bus systems.
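Alpha-cuts of triangular membership functions are the usual computational device behind such fuzzy estimators: each cut turns a fuzzy measurement into an interval, and interval results are assembled level by level. The sketch below shows the idea on a toy addition of two fuzzy measurements; the per-unit values are hypothetical, and the constrained-LP state estimator itself is not reproduced:

```python
def alpha_cut(tri, alpha):
    """Alpha-cut interval of a triangular fuzzy number (low, mode, high)."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def add_intervals(x, y):
    """Interval arithmetic for the sum of two alpha-cut intervals."""
    return (x[0] + y[0], x[1] + y[1])

p1 = (0.95, 1.00, 1.05)   # hypothetical per-unit measurement, fuzzy triangular
p2 = (0.40, 0.50, 0.55)

for alpha in (0.0, 0.5, 1.0):
    lo, hi = add_intervals(alpha_cut(p1, alpha), alpha_cut(p2, alpha))
    print(alpha, round(lo, 3), round(hi, 3))
```

At alpha = 1 the cut collapses to the modal (crisp) value; at alpha = 0 it spans the full support, which is the fuzzy analogue of the widest uncertainty band on the state variable.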
On the Uncertainty Principle for Continuous Quantum Measurement
Miao, Haixing
2016-01-01
We revisit the Heisenberg uncertainty principle for continuous quantum measurement with the detector describable by linear response. When the detector is at the quantum limit with minimum uncertainty, the fluctuation and response of a single-input single-output detector are shown to be related via two equalities. We illustrate the result by applying it to an optomechanical device--a typical continuous measurement setup.
Attributes and templates from active measurements with 252Cf
International Nuclear Information System (INIS)
Active neutron interrogation is useful for the detection of shielded HEU and could also be used for Pu. In an active technique, fissile material is stimulated by an external neutron source to produce fission with the emanation of neutrons and gamma rays. The time distribution of particles leaving the fissile material is measured with respect to the source emission in a variety of ways. A variety of accelerator and radioactive sources can be used. Active interrogation of nuclear weapons/components can be used in two ways: template matching or attribute estimation. Template matching compares radiation signatures with known reference signatures and for treaty applications has the problem of authentication of the reference signatures along with storage and retrieval of templates. Attribute estimation determines, for example, the fissile mass from various features of the radiation signatures and does not require storage of radiation signatures but does require calibration, which can be repeated as necessary. A nuclear materials identification system (NMIS) has been in use at the Oak Ridge Y-12 Plant for verification of weapons components being received and in storage by template matching and has been used with calibrations for attribute (fissile mass) estimation for HEU metal. NMIS employs a 252Cf source of low intensity (6 n/sec) such that the dose at 1 m is approximately twice that on a commercial airline at altitude. The use of such a source presents no significant safety concerns either for personnel or nuclear explosive safety, and has been approved for use at the Pantex Plant on fully assembled weapons systems
Uncertainty Estimation Improves Energy Measurement and Verification Procedures
Energy Technology Data Exchange (ETDEWEB)
Walter, Travis; Price, Phillip N.; Sohn, Michael D.
2014-05-14
Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement, so intelligent investment strategies require the ability to quantify energy savings by comparing the actual energy used with how much energy would have been used in the absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties in deciding how much data are needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use from short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates, which can provide actionable decision-making information for investing in energy conservation measures.
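A minimal version of cross-validated baseline uncertainty estimation might look like the sketch below. The temperature-driven linear model and the synthetic meter data are illustrative assumptions, not the paper's buildings or its exact regression:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic short-interval meter data: energy use driven by outdoor temperature.
temp = rng.uniform(5, 35, 960)
energy = 50 + 2.0 * temp + rng.normal(0, 5, temp.size)

# k-fold cross-validation: out-of-sample residuals estimate the uncertainty
# of baseline predictions, rather than the optimistic in-sample fit error.
k = 5
idx = rng.permutation(temp.size)
folds = np.array_split(idx, k)
residuals = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    A = np.vstack([np.ones(train.size), temp[train]]).T
    coef, *_ = np.linalg.lstsq(A, energy[train], rcond=None)
    pred = coef[0] + coef[1] * temp[test]
    residuals.append(energy[test] - pred)
residuals = np.concatenate(residuals)
print(residuals.std())  # cross-validated baseline prediction uncertainty
```

The spread of the held-out residuals directly answers the M&V question above: savings smaller than this uncertainty band cannot be confirmed from the data alone.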
Uncertainty relation for simultaneous measurements in a thermal environment
Energy Technology Data Exchange (ETDEWEB)
Heese, Raoul; Freyberger, Matthias [Institut fuer Quantenphysik, Universitaet Ulm, D-89069 Ulm (Germany)
2014-07-01
Uncertainty relations for simultaneous measurements of conjugate observables date back to the theory of Arthurs and Kelly, who considered a model of two pointer systems, which are coupled to a quantum system to be measured and act as the measurement apparatus. We extend this classic model by including a thermal environment in which the pointers behave as coupled particles under Brownian motion. In this sense the pointers behave like classical measurement devices. This novel approach leads us to a new kind of uncertainty relation for so-called open pointer-based simultaneous measurements of conjugate observables.
A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report
Energy Technology Data Exchange (ETDEWEB)
Campos, E [Argonne National Laboratory; Sisterson, DL [Argonne National Laboratory
2015-10-01
The Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily-accessible, well-articulated estimate of ARM measurement uncertainty is needed.
ANALYSIS OF UNCERTAINTY MEASUREMENT IN ATOMIC ABSORPTION SPECTROPHOTOMETER
Directory of Open Access Journals (Sweden)
NEHA S.MAHAJAN
2012-05-01
A spectrophotometer is a photometer that can measure intensity as a function of the light-source wavelength. The important features of spectrophotometers are the spectral bandwidth and the linear range of absorbance or reflectance measurement. Atomic absorption spectroscopy (AAS) is a very common technique for determining the chemical composition of elements in metals and their alloys. It is very reliable and simple to use. The quality (accuracy) of a result depends on the uncertainty of the measured test value: if the uncertainty of measurement is large, the final result may be in doubt. The final result of an atomic absorption spectrophotometer is affected by a number of parameters, which must be taken into account when calculating the final result. This paper deals with the methodology of evaluating the uncertainty of measurement of chemical composition using AAS. The study is useful for assessing the quality of the measurement equipment and of the testing process.
Chapter 12: Uncertainty in measured water quality data
Water quality assessment, management, and regulation continue to rely on measured water quality data, in spite of advanced modeling capabilities. However, very little information is available on one very important component of the measured data - the inherent measurement uncertainty. Although all ...
Teaching Scientific Measurement and Uncertainty in Elementary School
Munier, Valérie; Merle, Hélène; Brehelin, Danie
2013-01-01
The concept of measurement is fundamental in science. In order to be meaningful, the value of a measurement must be given with a certain level of uncertainty. In this paper we try to identify and develop the reasoning of young French pupils about measurement variability. In France, official instructions for elementary school thus argue for having…
Measurement uncertainty in pharmaceutical analysis and its application
Institute of Scientific and Technical Information of China (English)
Marcus Augusto Lyrio Traple; Alessandro Morais Saviano; Fabiane Lacerda Francisco; Felipe Rebello Lourençon
2014-01-01
The measurement uncertainty provides complete information about an analytical result. This is very important because several decisions of compliance or non-compliance are based on analytical results in pharmaceutical industries. The aim of this work was to evaluate and discuss the estimation of uncertainty in pharmaceutical analysis. The uncertainty is a useful tool in the assessment of compliance or non-compliance of in-process and final pharmaceutical products as well as in the assessment of pharmaceutical equivalence and stability study of drug products.
Measurement Uncertainty Investigation in the Multi-probe OTA Setups
DEFF Research Database (Denmark)
Fan, Wei; Szini, Istvan Janos; Foegelle, M. D.;
2014-01-01
…metrics in real world scenarios, the multi-probe based method has attracted huge interest from both industry and academia. This contribution attempts to identify some of the measurement uncertainties of practical multi-probe setups and to provide guidance on establishing the multi-probe anechoic chamber setup. The results of uncertainty measurements carried out in three practical multi-probe setups are presented. Some sources of measurement error, e.g. cable effect and cable termination, are identified based on the measurement results.
Measurement uncertainties physical parameters and calibration of instruments
Gupta, S V
2012-01-01
This book fulfills the global need to evaluate measurement results along with the associated uncertainty. In the book, together with the details of uncertainty calculations for many physical parameters, probability distributions and their properties are discussed. Definitions of various terms are given and will help practicing metrologists to grasp the subject. The book helps to establish international standards for the evaluation of the quality of raw data obtained from various laboratories and for interpreting the results of various national metrology institutes in international inter-comparisons. For the routine calibration of instruments, a new idea for the use of pooled variance is introduced. The uncertainty calculations are explained for (i) independent linear inputs, (ii) non-linear inputs and (iii) correlated inputs. The merits and limitations of the Guide to the Expression of Uncertainty in Measurement (GUM) are discussed. Monte Carlo methods for the derivation of the output distribution from the...
Evaluation of measuring results, statement of uncertainty in dosimeter calibrations
International Nuclear Information System (INIS)
The method described starts from the requirement that the quantitative statement of a measuring result in dosimetry should contain at least three figures: 1) the measured value, or the best estimate of the quantity to be measured; 2) the uncertainty of this value, given by a figure which indicates a certain range around the measured value and which is strongly linked with 3) a figure for the confidence level of this range, i.e. the probability that the (unknown) correct value is embraced by the given uncertainty range. How the figures 2) and 3) can be obtained and how they should be quoted in calibration certificates is the subject of these lectures. In addition, the means by which the method may be extended to determining the uncertainty of a measurement performed under conditions which deviate from the calibration conditions is briefly described. (orig.)
[Estimation of uncertainty of measurement in clinical biochemistry].
Enea, Maria; Hristodorescu, Cristina; Schiriac, Corina; Morariu, Dana; Mutiu, Tr; Dumitriu, Irina; Gurzu, B
2009-01-01
The uncertainty of measurement (UM), or measurement uncertainty, is known as the parameter associated with the result of a measurement. Repeated measurements usually reveal slightly different results for the same analyte, sometimes a little higher, sometimes a little lower, because the result of a measurement depends not only on the analyte itself but also on a number of error factors that can cast doubt on the estimate. The uncertainty of measurement is the quantitative, mathematical expression of this doubt. UM is a range of measured values which is likely to enclose the true value of the measurand. Calculation of UM for all types of laboratories is regulated by the ISO Guide to the Expression of Uncertainty in Measurement (abbreviated GUM) and by SR ENV 13005:2003 (both recognized by European Accreditation). Even though the GUM rules for UM estimation are very strict, reporting the result together with its UM increases the confidence of customers (patients or physicians). In this study the authors present the possibilities of assessing UM in laboratories in our country by using the data obtained during method validation and internal and external quality control.
Immersive Data Comprehension: Visualizing Uncertainty in Measurable Models
Directory of Open Access Journals (Sweden)
Pere eBrunet
2015-09-01
Recent advances in 3D scanning technologies have opened new possibilities in a broad range of applications including cultural heritage, medicine, civil engineering and urban planning. Virtual Reality systems can provide new tools to professionals that want to understand acquired 3D models. In this paper, we review the concept of data comprehension with an emphasis on visualization and inspection tools on immersive setups. We claim that in most application fields, data comprehension requires model measurements which in turn should be based on the explicit visualization of uncertainty. As 3D digital representations are not faithful, information on their fidelity at local level should be included in the model itself as uncertainty bounds. We propose the concept of Measurable 3D Models as digital models that explicitly encode local uncertainty bounds related to their quality. We claim that professionals and experts can strongly benefit from immersive interaction through new specific, fidelity-aware measurement tools which can facilitate 3D data comprehension. Since noise and processing errors are ubiquitous in acquired datasets, we discuss the estimation, representation and visualization of data uncertainty. We show that, based on typical user requirements in Cultural Heritage and other domains, application-oriented measuring tools in 3D models must consider uncertainty and local error bounds. We also discuss the requirements of immersive interaction tools for the comprehension of huge 3D and nD datasets acquired from real objects.
Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.
Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller
2015-01-01
An approach is proposed for the estimate of the uncertainty in measurement that considers the individual sources related to the different steps of the method under evaluation, as well as the uncertainties estimated from the validation data, for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS). The considered method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance. The referred method has been accredited under the ISO/IEC 17025 norm since 2010. The approach of the present work for estimating the uncertainty in measurement was based on six sources of uncertainty for mercury determination in seafood by TDA AAS, following the validation process: linear least-squares regression, repeatability, intermediate precision, correction factor of the analytical curve, sample mass, and standard reference solution. Those that most influenced the uncertainty in measurement were sample mass, repeatability, intermediate precision and the calibration curve. The estimate of uncertainty in measurement obtained in the present work reached a value of 13.39%, which complies with the European Regulation EC 836/2011. This figure represents a very realistic estimate of the routine conditions, since it fairly encompasses the dispersion obtained from the value attributed to the sample and the value measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on calibration curve, recovery and precision), together with the variation in sample mass, can offer a proper estimate of the uncertainty in measurement.
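The quadrature combination of validation-based sources can be sketched as below. The six component values are hypothetical placeholders chosen for illustration, not the laboratory's actual budget:

```python
import math

# Hypothetical relative standard uncertainties (%) for six validation-derived
# sources, mirroring the categories named in the abstract.
sources = {
    "regression": 4.0,
    "repeatability": 3.5,
    "intermediate_precision": 3.0,
    "curve_correction": 1.0,
    "sample_mass": 2.5,
    "reference_solution": 0.5,
}

# Independent components combine in quadrature (root-sum-of-squares).
u_combined = math.sqrt(sum(u ** 2 for u in sources.values()))
U_expanded = 2 * u_combined  # coverage factor k = 2, roughly 95% confidence
print(round(u_combined, 2), round(U_expanded, 2))
```

The largest components dominate the root-sum-of-squares, which is why the abstract singles out sample mass, repeatability, intermediate precision and the calibration curve as the main drivers.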
Stigsson, Martin; Munier, Raymond
2013-07-01
Measurements of structure orientations are afflicted with uncertainties which arise from many sources. Commonly, such uncertainties involve instrument imprecision, external disturbances and human factors. The aggregated uncertainty depends on the uncertainty of each of the sources. The orientation of an object measured in a borehole (e.g. a fracture) is calculated using four parameters: the bearing and inclination of the borehole and two relative angles of the measured object to the borehole. Each parameter may be a result of one or several measurements. The aim of this paper is to develop a method to both calculate and visualize the aggregated uncertainty resulting from the uncertainty in each of the four geometrical constituents. Numerical methods were used to develop a VBA-application in Microsoft Excel to calculate the aggregated uncertainty. The code calculates two different representations of the aggregated uncertainty: a 1-parameter uncertainty, the ‘minimum dihedral angle’, denoted by Ω; and, a non-parametric visual representation of the uncertainty, denoted by χ. The simple 1-parameter uncertainty algorithm calculates the minimum dihedral angle accurately, but overestimates the probability space that plots as an ellipsoid on a lower hemisphere stereonet. The non-parametric representation plots the uncertainty probability space accurately, usually as a sector of an annulus for steeply inclined boreholes, but is difficult to express numerically. The 1-parameter uncertainty can be used for evaluating statistics of large datasets whilst the non-parametric representation is useful when scrutinizing single or a few objects.
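A simplified Monte Carlo version of the aggregation, considering only the borehole bearing and inclination rather than the full four-parameter geometry (and neither the Ω nor the χ representation), might look like the following. All numerical values are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def direction(bearing_deg, plunge_deg):
    """Unit vector (E, N, Up) of a line from bearing and plunge in degrees."""
    b, p = np.radians(bearing_deg), np.radians(plunge_deg)
    return np.stack([np.sin(b) * np.cos(p), np.cos(b) * np.cos(p), -np.sin(p)], axis=-1)

# Nominal borehole orientation and assumed 1-sigma measurement uncertainties (deg).
bearing, plunge = 135.0, 70.0
u_bearing, u_plunge = 2.0, 1.0

# Sample perturbed orientations and measure their angular deviation from nominal.
n = 100000
dirs = direction(rng.normal(bearing, u_bearing, n), rng.normal(plunge, u_plunge, n))
nominal = direction(bearing, plunge)
dev = np.degrees(np.arccos(np.clip(dirs @ nominal, -1, 1)))
print(np.percentile(dev, 95))  # angular deviation enclosing 95% of outcomes
```

Note how the bearing uncertainty is attenuated by the cosine of the plunge for steeply inclined boreholes, one reason the full probability space plots as a sector of an annulus rather than a circle on a stereonet.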
Measurement uncertainty evaluation of conicity error inspected on CMM
Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang
2016-01-01
The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated using quasi-random sequences and two kinds of affinities are calculated. Then, antibody clones are generated and self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of the conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate the measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate the measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experimental results confirm that the proposed method not only searches for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also evaluates the measurement uncertainty and gives control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% smaller than those computed by the NC454 CMM software, and the evaluation accuracy improves significantly.
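The adaptive Monte Carlo idea (in the spirit of GUM Supplement 1) can be sketched as follows. The stand-in measurement model and the block-stability stopping rule are illustrative assumptions, not the paper's IIEA/AMCM implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x1, x2):
    """Stand-in nonlinear measurement model (the conicity model is nonlinear too)."""
    return np.hypot(x1, x2)

def adaptive_mc(u1, u2, block=10000, tol=1e-3, max_blocks=50):
    """Adaptive Monte Carlo: add blocks of trials until the block-to-block
    scatter of the mean and standard-deviation estimates meets the tolerance,
    so the number of trials is chosen by the required numerical quality."""
    means, stds = [], []
    for _ in range(max_blocks):
        y = model(rng.normal(1.0, u1, block), rng.normal(1.0, u2, block))
        means.append(y.mean())
        stds.append(y.std())
        if len(stds) >= 2 and np.std(stds, ddof=1) < tol and np.std(means, ddof=1) < tol:
            break
    return np.mean(means), np.mean(stds), len(stds)

y, uy, blocks_used = adaptive_mc(0.02, 0.03)
print(y, uy, blocks_used)
```

Because the stopping test is on the numerical stability of the estimates themselves, the trial count grows automatically for harder (more nonlinear or more correlated) models.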
Measurement uncertainty analysis on laser tracker combined with articulated CMM
Zhao, Hui-ning; Yu, Lian-dong; Du, Yun; Zhang, Hai-yan
2013-10-01
The combined measurement technology plays an increasingly important role in digitalized assembly. This paper introduces a combined measurement system consisting of a laser tracker and a FACMM, with applications in the inspection of the position of inner parts in a large-scale device. When these measurement instruments are combined, the resulting coordinate data set contains uncertainties that are a function of the base data sets and of complex interactions between the measurement sets. Taking into account the characteristics of the laser tracker and the Flexible Articulated Coordinate Measuring Machine (FACMM), the Monte Carlo simulation method is employed in the uncertainty evaluation of the combined measurement system. A case study is given to demonstrate the practical applications of this research.
Gamma Attribute Measurements - Pu300, Pu600, Pu900
International Nuclear Information System (INIS)
Gamma rays are ideal probes for determining information about special nuclear material (SNM) in the transparency regime. Gamma rays are good probes because they interact relatively weakly with the containers that surround the SNM under investigation. In addition, gamma rays carry a great deal of information about the material under investigation. We have leveraged these two characteristics to develop three technologies that have proven useful for measurements of various attributes of plutonium: Pu-300, Pu-600 and Pu-900. These technologies obtain the age, the isotopics, and the presence or absence of oxide in a plutonium sample, respectively. Pu-300 obtains the time since the last 241Am separation for a sample of plutonium. This is accomplished by looking at the 241Am/241Pu ratio in the energy region from 330-350 keV, hence the name Pu-300. Pu-600 determines the isotopics of the plutonium sample under consideration. More specifically, it determines the 240Pu/239Pu ratio to establish whether or not the plutonium sample is of weapons quality. This analysis is carried out in the energy region from 630-670 keV. Pu-900 determines the absence of PuO2 by searching for a peak at 870.7 keV; if this peak is absent, then there is no oxide in the sample. This peak arises from the de-excitation of the first excited state of 17O, the assumption being that this state is populated by means of the 17O(α,α') reaction. The first excited state of 17O could also be populated by means of the 14N(α,p) reaction, which might indicate that this is not a good signature for the absence of PuO2; however, in the samples we have measured, this peak is visible in oxide samples and absent in other samples. In this paper we discuss the physics details of these technologies and also present the results of various measurements.
Evaluating the uncertainty of input quantities in measurement models
Possolo, Antonio; Elster, Clemens
2014-06-01
The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
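The Bayesian treatment of a Type A evaluation can be made concrete with a textbook sketch (not taken from the paper; the indications are invented). With a non-informative prior, the posterior for a quantity observed through n repeated indications is a scaled-and-shifted Student-t distribution with n - 1 degrees of freedom:

```python
import numpy as np

# Repeated indications of an input quantity (invented data):
obs = np.array([9.98, 10.02, 10.01, 9.99, 10.00, 10.03])
n = obs.size

# Posterior is Student-t with n-1 degrees of freedom, centred on the
# sample mean with scale s/sqrt(n).
estimate = obs.mean()
s = obs.std(ddof=1)

# The posterior standard deviation is the standard uncertainty; the
# factor sqrt((n-1)/(n-3)) inflates the familiar s/sqrt(n) slightly,
# reflecting the extra uncertainty about the unknown variance.
u = s / np.sqrt(n) * np.sqrt((n - 1) / (n - 3))
```

For small n this Bayesian standard uncertainty is noticeably larger than the frequentist s/sqrt(n), which is one practical difference between the approaches the paper compares.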
Energy Technology Data Exchange (ETDEWEB)
Habte, A.; Sengupta, M.; Reda, I.
2015-03-01
Radiometric data with known and traceable uncertainty are essential for climate change studies, to better understand cloud-radiation interactions and the Earth's radiation budget. Further, adopting a known and traceable method of estimating uncertainty with respect to SI ensures that the uncertainties quoted for radiometric measurements can be compared on the basis of documented methods of derivation. Statements about the overall measurement uncertainty can therefore only be made on an individual basis, taking all relevant factors into account. This poster provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements from radiometers. The approach follows the Guide to the Expression of Uncertainty in Measurement (GUM).
Scepkowski, Lisa A; Wiegel, Markus; Bach, Amy K; Weisberg, Risa B; Brown, Timothy A; Barlow, David H
2004-12-01
This study investigated the attributional styles of men with and without sexual dysfunction for both positive and negative sexual and general events using a sex-specific version of the Attributional Style Questionnaire (Sex-ASQ), and ascertained the preliminary psychometric properties of the measure. The Sex-ASQ was created by embedding 8 hypothetical sexual events (4 positive, 4 negative) among the original 12 events in the Attributional Style Questionnaire (ASQ; C. Peterson, A. Semmel, C. von Baeyer, L. Y. Abramson, G. I. Metalsky, & M. E. Seligman, 1982). The Sex-ASQ was completed by 21 men with a principal DSM-IV diagnosis of Male Erectile Disorder (MED) and 32 male control participants. The psychometrics of the Sex-ASQ were satisfactory, although the positive sexual event scales were found to be less stable and internally consistent than the negative sexual event scales. Reasons for the modest reliability of the positive event scales are discussed in terms of the original ASQ. As expected, men with MED did not differ significantly from men without sexual dysfunction in their causal attributions for general events, indicating that both groups exhibited an optimistic attributional style in general. Also as predicted, men with MED made more internal and stable causal attributions for negative sexual events than men without sexual dysfunction, and also rated negative sexual events as more important. For positive sexual events, the 2 groups did not differ in attributional style, with both groups making more external/unstable/specific causal attributions than for positive general events. Differences between explanatory style for sexual versus nonsexual events, found in both sexually functional and dysfunctional men, lend support to explanatory style models that propose both cross-situational consistency and situational specificity. PMID:15483370
Measurement uncertainty in colour characterization of printed textile materials
Directory of Open Access Journals (Sweden)
Neda Milić
2011-11-01
The uncertainty of spectrophotometric measurement of printed textile materials is one of the major unsolved technical problems in textile colourimetry today. Textile manufacturers often try to maintain colour-difference tolerances that are within the range of, or even smaller than, the uncertainty of the measurement system controlling them. In this paper, two commercial spectrophotometers with different measuring geometries (GretagMacbeth Eye-One Pro with 45°/0° geometry and ChinSpec HP200 with d/8° geometry) were comparatively investigated in terms of measurement uncertainty in the colour characterization of textile products. The results of the study indicate that, despite their different measuring geometries, the instruments showed similar measurement repeatability (repeatability of readings from different parts of the same sample) for the digitally printed polyester materials used. The material preparation method (whether the materials were triple-folded, placed on a black backing or on a white backing) had an important influence on measurement variability. On the other hand, the instruments showed differences in inter-model agreement. Although this difference was not confirmed as significant by visual assessment, observers evaluated the measurement readings from the Eye-One Pro spectrophotometer as a more accurate colour-appearance characterization of the textile materials.
Uncertainty in measurement of protein circular dichroism spectra
Cox, Maurice G.; Ravi, Jascindra; Rakowska, Paulina D.; Knight, Alex E.
2014-02-01
Circular dichroism (CD) spectroscopy of proteins is widely used to measure protein secondary structure, and to detect changes in secondary and higher orders of structure, for applications in research and in the quality control of protein products such as biopharmaceuticals. However, objective comparison of spectra is challenging because of a limited quantitative understanding of the sources of error in the measurement. Statistical methods can be used for comparisons, but do not provide a mechanism for dealing with systematic, as well as random, errors. Here we present a measurement model for CD spectroscopy of proteins, incorporating the principal sources of uncertainty, and use the model in conjunction with experimental data to derive an uncertainty budget. We show how this approach could be used in practice for the objective comparison of spectra, and discuss the benefits and limitations of this strategy.
Measurement Of Beer Taste Attributes Using An Electronic Tongue
Polshin, Evgeny; Rudnitskaya, Alisa; Kirsanov, Dmitry; Lammertyn, Jeroen; Nicolaï, Bart; Saison, Daan; Delvaux, Freddy R.; Delvaux, Filip; Legin, Andrey
2009-05-01
The present work deals with the results of the application of an electronic tongue system as an analytical tool for the rapid assessment of beer flavour. Fifty samples of Belgian and Dutch beers of different types, characterized with respect to sensory properties and bitterness, were analyzed using the electronic tongue (ET) based on potentiometric chemical sensors. The ET was capable of predicting 10 sensory attributes of beer with good precision, including sweetness, sourness, intensity, body, etc., as well as the most important instrumental parameter, bitterness. These results show good promise for the further development of the ET as a new analytical technique for the fast assessment of taste attributes, and of bitterness in particular, in the food and brewing industries.
[Evaluation of uncertainty in measurement of radiated disturbance and analysis of the result].
Wang, Weiming; Jiang, Sui
2012-03-01
This paper evaluates the uncertainty in the measurement of radiated disturbance by analyzing and calculating the components that influence the uncertainty. The effectiveness of the uncertainty evaluation has been confirmed through proficiency testing.
Permissible limits for uncertainty of measurement in laboratory medicine.
Haeckel, Rainer; Wosniok, Werner; Gurr, Ebrhard; Peil, Burkhard
2015-07-01
The international standard ISO 15189 requires that medical laboratories estimate the uncertainty of their quantitative test results obtained from patients' specimens. The standard does not provide details of how, and within which limits, the measurement uncertainty should be determined. The most common concepts for establishing permissible uncertainty limits are to relate them to biological variation, defining the rate of false-positive results, or to base the limits on the state of the art. The state of the art is usually derived from data provided by a group of selected medical laboratories. The approach based on biological variation should be preferred because of its transparency and scientific basis. Hitherto, all recommendations have been based on a linear relationship between biological and analytical variation, leading to limits which are sometimes too stringent or too permissive for routine testing in laboratory medicine. In contrast, the present proposal is based on a non-linear relationship between biological and analytical variation, leading to more realistic limits. The proposed algorithms can be applied to all measurands and consider any quantity to be assured. The suggested approach tries to provide the above-mentioned details and is a compromise between the biological variation concept, the GUM uncertainty model and the technical state of the art.
Uncertainties and re-analysis of glacier mass balance measurements
Zemp, M.; E. Thibert; Huss, M.; Stumm, D.; Rolstad Denby, C.; Nuth, C.; S. U. Nussbaumer; G. Moholdt; A. Mercer; Mayer, C.; Joerg, P. C.; P. Jansson; B. Hynek; Fischer, A.; Escher-Vetter, H.
2013-01-01
Glacier-wide mass balance has been measured for more than sixty years and is widely used as an indicator of climate change and to assess the glacier contribution to runoff and sea level rise. Until present, comprehensive uncertainty assessments have rarely been carried out and mass balance data have often been applied using rough error estimation or without error considerations. In this study, we propose a framework for re-analyzing glacier mass balance series including conceptual and ...
Generalized uncertainty relations and efficient measurements in quantum systems
Belavkin, V. P.
2004-01-01
We consider two variants of a quantum-statistical generalization of the Cramer-Rao inequality that establishes an invariant lower bound on the mean square error of a generalized quantum measurement. The proposed complex variant of this inequality leads to a precise formulation of a generalized uncertainty principle for arbitrary states, in contrast to Helstrom's symmetric variant in which these relations are obtained only for pure states. A notion of canonical states is introduced and the low...
UNCERTAINTIES OF ANION AND TOC MEASUREMENTS AT THE DWPF LABORATORY
Energy Technology Data Exchange (ETDEWEB)
Edwards, T.
2011-04-07
The Savannah River Remediation (SRR) Defense Waste Processing Facility (DWPF) has identified a technical issue related to the amount of antifoam added to the Chemical Process Cell (CPC). Specifically, due to the long duration of the concentration and reflux cycles for the Sludge Receipt and Adjustment Tank (SRAT), additional antifoam has been required. The additional antifoam has been found to impact the melter flammability analysis as an additional source of carbon and hydrogen. To better understand and control the carbon and hydrogen contributors to the melter flammability analysis, SRR's Waste Solidification Engineering (WSE) has requested, via a Technical Task Request (TTR), that the Savannah River National Laboratory (SRNL) conduct an error evaluation of the measurements of key Slurry Mix Evaporator (SME) anions. SRNL issued a Task Technical and Quality Assurance Plan (TTQAP) [2] in response to that request, and the work reported here was conducted under the auspices of that TTQAP. The TTR instructs SRNL to conduct an error evaluation of anion measurements generated by the DWPF Laboratory using Ion Chromatography (IC) performed on SME samples. The anions of interest include nitrate, oxalate, and formate. Recent measurements of SME samples for these anions as well as measurements of total organic carbon (TOC) were provided to SRNL by DWPF Laboratory Operations (Lab OPS) personnel for this evaluation. This work was closely coordinated with the efforts of others within SRNL that are investigating the Chemical Process Cell (CPC) contributions to the melter flammability. The objective of that investigation was to develop a more comprehensive melter flammability control strategy that when implemented in DWPF will rely on process measurements. Accounting for the uncertainty of the measurements is necessary for successful implementation. The error evaluations conducted as part of this task will facilitate the integration of appropriate uncertainties for the
Measurement of nuclear activity with Ge detectors and its uncertainty
International Nuclear Information System (INIS)
The objective of this work is to analyse the influence quantities affecting the activity measurement of isolated gamma-emitting radioactive sources prepared by the gravimetric method, and to determine the uncertainty of such a measurement when it is carried out with a gamma spectrometry system based on a germanium detector. The work is developed in five chapters. The first, named Basic Principles, gives a brief description of the meaning of the word measurement and its implications, and presents the concepts used in this work. The second chapter describes the gravimetric method used to manufacture the isolated gamma-emitting radioactive sources, addresses the problem of determining the main influence quantities affecting the measurement of their activity, and derives the respective correction factors and their uncertainties. The third chapter describes the gamma spectrometry system used in this work for measuring the activity of isolated sources, along with its performance and the experimental arrangement employed. The fourth chapter applies the three previous items to determine the uncertainty that would be obtained in the measurement of an isolated radioactive source prepared with the gravimetric method under the least favourable experimental conditions predicted from the results of chapter two. The conclusions, presented in the fifth chapter, are applied to establish the optimum conditions for measuring the activity of an isolated gamma-emitting radioactive source with a germanium-detector spectrometer. (Author)
Using measurement uncertainty in decision-making and conformity assessment
Pendrill, L. R.
2014-08-01
Measurements often provide an objective basis for making decisions, perhaps when assessing whether a product conforms to requirements or whether one set of measurements differs significantly from another. There is increasing appreciation of the need to account for the role of measurement uncertainty when making decisions, so that a ‘fit-for-purpose’ level of measurement effort can be set prior to performing a given task. Better mutual understanding between the metrologist and those ordering such tasks about the significance and limitations of the measurements when making decisions of conformance will be especially useful. Decisions of conformity are, however, currently made in many important application areas, such as when addressing the grand challenges (energy, health, etc), without a clear and harmonized basis for sharing the risks that arise from measurement uncertainty between the consumer, supplier and third parties. In reviewing, in this paper, the state of the art of the use of uncertainty evaluation in conformity assessment and decision-making, two aspects in particular—the handling of qualitative observations and of impact—are considered key to bringing more order to the present diverse rules of thumb of more or less arbitrary limits on measurement uncertainty and percentage risk in the field. (i) Decisions of conformity can be made on a more or less quantitative basis—referred in statistical acceptance sampling as by ‘variable’ or by ‘attribute’ (i.e. go/no-go decisions)—depending on the resources available or indeed whether a full quantitative judgment is needed or not. There is, therefore, an intimate relation between decision-making, relating objects to each other in terms of comparative or merely qualitative concepts, and nominal and ordinal properties. (ii) Adding measures of impact, such as the costs of incorrect decisions, can give more objective and more readily appreciated bases for decisions for all parties concerned. Such
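The role of measurement uncertainty in a decision of conformance can be made concrete with a small sketch. This is a generic shared-risk example in the style of JCGM 106, not taken from the paper; the tolerance limits, measured value and decision threshold are invented:

```python
from math import erf, sqrt

def conformance_probability(y, u, lower, upper):
    """Probability that the true value lies in [lower, upper], assuming a
    normal state-of-knowledge distribution N(y, u^2) for the measurand."""
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    return Phi((upper - y) / u) - Phi((lower - y) / u)

# Invented example: tolerance interval 9.95..10.05, measured value 10.02
# with standard uncertainty 0.01.
p = conformance_probability(y=10.02, u=0.01, lower=9.95, upper=10.05)
accept = p >= 0.95   # decision rule: accept only at >= 95 % conformance
```

A measured value near a tolerance limit, or a larger u, drives p down and can flip the decision, which is exactly the consumer/supplier risk sharing the paper discusses.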
Measurement Uncertainty Analysis of the Strain Gauge Based Stabilographic Platform
Directory of Open Access Journals (Sweden)
Walendziuk Wojciech
2014-08-01
The present article describes the construction of a stabilographic platform which records a standing patient's deflection from their point of balance. The constructed device is composed of a toughened glass slab supported by four force sensors. The force transducers are connected to a measurement system based on a 24-bit ADC, which acquires the slight body movements of a patient. The data are transferred to the computer in real time, where the analysis is conducted. The article explains the principle of operation as well as the algorithm for the measurement uncertainty of the centre of pressure (COP) coordinates (x, y).
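For a four-sensor platform of this kind, the COP is simply the force-weighted centroid of the sensor positions. A minimal sketch follows; the sensor layout and loads are invented, not taken from the article:

```python
def cop_xy(forces, positions):
    """Centre of pressure of a force platform: the force-weighted
    centroid of the sensor positions (forces are the vertical loads)."""
    total = sum(forces)
    x = sum(f * px for f, (px, _) in zip(forces, positions)) / total
    y = sum(f * py for f, (_, py) in zip(forces, positions)) / total
    return x, y

# Sensors at the corners of a hypothetical 0.4 m x 0.4 m plate:
corners = [(-0.2, -0.2), (0.2, -0.2), (0.2, 0.2), (-0.2, 0.2)]
x0, y0 = cop_xy([200.0, 200.0, 200.0, 200.0], corners)   # centred load
x1, y1 = cop_xy([300.0, 100.0, 100.0, 300.0], corners)   # load shifted left
```

Because each coordinate is a ratio of sums of sensor readings, the COP uncertainty budget combines the individual force-channel uncertainties through this nonlinear model.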
Uncertainty Estimation of Global Precipitation Measurement through Objective Validation Strategy
KIM, H.; Utsumi, N.; Seto, S.; Oki, T.
2014-12-01
Since the Tropical Rainfall Measuring Mission (TRMM) was launched in 1997 as the first satellite mission dedicated to measuring precipitation, the spatiotemporal gaps in precipitation observation have been filled significantly. On February 27th, 2014, the Dual-frequency Precipitation Radar (DPR) was launched on the core observatory of the Global Precipitation Measurement (GPM) mission, an international multi-satellite mission aiming to provide a global three-hourly map of rainfall and snowfall. In addition to the Ku-band, a Ka-band radar is newly equipped, and their combination is expected to achieve higher precision than the precipitation measurement of TRMM/PR. In this study, the GPM level-2 orbit products are evaluated against various precipitation observations, including TRMM/PR, in-situ data, and ground radar. In a preliminary validation over intersecting orbits of DPR and TRMM, the Ku-band measurements of the two satellites show very close spatial patterns and intensities, and the DPR is capable of capturing a broader range of precipitation intensity than TRMM. Furthermore, we suggest a validation strategy based on an 'objective classification' of background atmospheric mechanisms. The Japanese 55-year Reanalysis (JRA-55) and auxiliary datasets (e.g., tropical cyclone best tracks) are used to objectively determine the types of precipitation. The uncertainty of the above-mentioned precipitation products is quantified as their relative differences and characterized for the different precipitation mechanisms. We also discuss how this uncertainty affects the synthesis of TRMM and GPM into a long-term, internally consistent record of satellite precipitation observations.
Research on uncertainty in measurement assisted alignment in aircraft assembly
Institute of Scientific and Technical Information of China (English)
Chen Zhehan; Du Fuzhou; Tang Xiaoqing
2013-01-01
Operations for assembling and joining large aircraft components have been changed to novel digital and flexible ways by digital measurement assisted alignment. The positions and orientations (P&O) of aligned components are critical characteristics which assure the geometrical positions and relationships of those components. Therefore, evaluating the P&O of a component is considered necessary and critical for ensuring accuracy in aircraft assembly. The uncertainty of position and orientation (U-P&O), as part of the result of evaluating P&O, needs to be given to ensure the integrity and credibility of the result; furthermore, U-P&O is necessary for error tracing and quality evaluation in measurement assisted aircraft assembly. However, current research mainly focuses on the process integration of measurement with assembly, and usually ignores the uncertainty of the measured result and its influence on quality evaluation. This paper focuses on the expression, analysis, and application of U-P&O in measurement assisted alignment. The geometrical and algebraic connotations of U-P&O are presented. Then, an analytical algorithm for evaluating the multi-dimensional U-P&O is given, and the effect factors and characteristics of U-P&O are discussed. Finally, U-P&O is used to evaluate alignment in aircraft assembly for quality evaluation and improvement. Cases are introduced along with the methodology.
Romdhani, Fekria; Hennebelle, François; Ge, Min; Juillion, Patrick; Coquet, Richard; François Fontaine, Jean
2014-12-01
Articulated Arm Coordinate Measuring Machines (AACMMs) have gradually evolved and are increasingly used in the mechanical industry. At present, the measurement uncertainties relating to the use of these devices are not yet well quantified. The work carried out consists of determining the measurement uncertainties of a mechanical part measured by an AACMM. The studies aiming to develop a model of measurement uncertainty are based on the Monte Carlo method developed in Supplement 1 of the Guide to the Expression of Uncertainty in Measurement [1], but also on identifying and characterizing the main sources of uncertainty. A multi-level Monte Carlo approach has been developed which characterizes the possible evolution of the AACMM during the measurement and, at a second level, quantifies the uncertainty of the considered measurand. The first Monte Carlo level is the most complex and is thus divided into three sub-levels, namely characterization of the positioning error of a point, estimation of calibration errors and evaluation of fluctuations of the 'localization point'. The global method is presented and the results of the first sub-level are developed in particular. The main sources of uncertainty, including AACMM deformations, are exposed.
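The nested structure of such a multi-level Monte Carlo evaluation can be sketched generically: an outer loop draws plausible states of the instrument, and an inner loop propagates measurement noise for each state. This is a toy sketch only; the measurand (a nominal 100 mm distance), the drift model and all numerical values are invented, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(1)

def inner_level(scale_err, m, rng):
    """Inner level: propagate probing noise for one fixed arm state.
    Probing noise (sd 0.010 mm) acts on both end points of the distance."""
    noise = rng.normal(0.0, 0.010, m) - rng.normal(0.0, 0.010, m)
    return 100.0 * (1.0 + scale_err) + noise

# Outer level: draw plausible states of the arm (here a relative scale
# error, e.g. from thermal drift), then run the inner level for each state.
dists = np.concatenate([
    inner_level(rng.normal(0.0, 2e-4), m=500, rng=rng)
    for _ in range(200)
])
u = dists.std(ddof=1)   # combined standard uncertainty estimate, in mm
```

Pooling the inner-level samples over all outer draws yields a distribution for the measurand that reflects both the instrument's evolution during the measurement and the point-acquisition noise.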
Estimation of measuring uncertainty for optical micro-coordinate measuring machine
Institute of Scientific and Technical Information of China (English)
Kang Song(宋康); Zhuangde Jiang(蒋庄德)
2004-01-01
Based on the evaluation principle of the measuring uncertainty of the traditional coordinate measuring machine (CMM), an analysis and evaluation of the measuring uncertainty of an optical micro-CMM has been made. The optical micro-CMM is an integrated measuring system with optical, mechanical, and electronic components, each of which may influence its measuring uncertainty. If the influence of laser speckle is taken into account, its longitudinal measuring uncertainty is 2.0 μm; otherwise it is 0.88 μm. The estimation of the synthetic uncertainty of the optical micro-CMM is shown to be correct and reliable by measuring standard reference materials and simulating the influence of the diameter of the laser beam. Based on Heisenberg's uncertainty principle and quantum mechanics theory, a method for improving the measuring accuracy of the optical micro-CMM by adding a diaphragm at the receiving end of the light path is proposed, and the measuring results are verified by experiments.
Measurement of nuclear activity with Ge detectors and its uncertainty
Cortes, C A P
1999-01-01
The objective of this work is to analyse the influence quantities which affect the activity measurement of isolated gamma-emitting radioactive sources prepared by means of the gravimetric method, as well as to determine the uncertainty of such a measurement when it is carried out with a gamma spectrometry system with a germanium detector. This work is developed in five chapters: in the first, named Basic Principles, a brief description is given of the meaning of the word measurement and its implications, and the necessary concepts used in this work are presented. In the second chapter, the gravimetric method used for the manufacture of the isolated gamma-emitting radioactive sources is presented, and the problem of determining the main influence ... The conclusions are presented in the fifth chapter and are applied to establish the optimum conditions for the measurement of the activity of an isolated gamma-emitting radioactive source with a germanium-detector spectrometer. (Author)
Applying the Implicit Association Test to Measure Intolerance of Uncertainty.
Mosca, Oriana; Dentale, Francesco; Lauriola, Marco; Leone, Luigi
2016-08-01
Intolerance of Uncertainty (IU) is a key trans-diagnostic personality construct strongly associated with anxiety symptoms. Traditionally, IU is measured through self-report measures that are prone to bias effects due to impression management concerns and introspective difficulties. Moreover, self-report scales are not able to capture the automatic associations that are assumed to be the main determinants of several spontaneous responses (e.g., emotional reactions). In order to overcome these limitations, the Implicit Association Test (IAT) was applied to measure IU, with a particular focus on reliability and criterion validity issues. The IU-IAT and the Intolerance of Uncertainty Inventory (IUI) were administered to an undergraduate student sample (54 females and 10 males) with a mean age of 23 years (SD = 1.7). Subsequently, participants were asked to report an individually chosen uncertain event from their own lives that might occur in the future and to identify a number of potential negative consequences of it. Participants' responses in terms of cognitive thoughts (i.e., cognitive appraisal) and worry reactions toward these events were assessed using the two subscales of the Worry and Intolerance of Uncertainty Beliefs Questionnaire. The IU-IAT showed an adequate level of internal consistency and a non-significant correlation with the IUI. A path analysis model, accounting for 35% of event-related worry, revealed that the IUI had a significant indirect effect on the dependent variable through event-related IU thoughts. By contrast, as expected, the IU-IAT predicted event-related worry independently of IU thoughts. In accordance with dual models of social cognition, these findings suggest that IU can influence event-related worry through two different processing pathways (automatic vs. deliberative), supporting the criterion and construct validity of the IU-IAT. The potential role of the IU-IAT for clinical applications is discussed. PMID:27451266
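IAT responses are typically summarized with a D score. The following is a deliberately simplified sketch of the idea (difference of block means divided by a pooled standard deviation), not the full improved scoring algorithm used in IAT research; the latencies are invented:

```python
import statistics

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified Greenwald-style D measure: difference of mean response
    latencies between incompatible and compatible blocks, divided by the
    standard deviation pooled over all trials."""
    pooled_sd = statistics.stdev(compatible_rts + incompatible_rts)
    return (statistics.mean(incompatible_rts)
            - statistics.mean(compatible_rts)) / pooled_sd

# Invented latencies (ms): slower responses in the incompatible block
# indicate a stronger automatic association.
d = iat_d_score([600, 620, 640], [700, 720, 740])
```

A positive d indicates faster responding in the compatible block, i.e. a stronger automatic association in the direction the test is keyed.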
Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors
Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.
2016-06-01
Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado during the period June 23rd to July 13th, 2014. The major goals of this experiment were the following:
- characterize sampling error for vertical velocity statistics;
- analyze sensitivities of different Doppler lidar systems;
- compare various single and dual Doppler retrieval techniques;
- characterize the error of spatial representativeness for separation distances up to 3 km;
- validate turbulence analysis techniques and retrievals from Doppler lidars.
This experiment brought together 5 Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.
Estimation of measurement uncertainty caused by surface gradient for a white light interferometer.
Liu, Mingyu; Cheung, Chi Fai; Ren, Mingjun; Cheng, Ching-Hsiang
2015-10-10
Although the scanning white light interferometer can provide measurement results with subnanometer resolution, the measurement accuracy is far from perfect. The surface roughness and surface gradient have significant influence on the measurement uncertainty since the corresponding height differences within a single CCD pixel cannot be resolved. This paper presents an uncertainty estimation method for estimating the measurement uncertainty due to the surface gradient of the workpiece. The method is developed based on the mathematical expression of an uncertainty estimation model which is derived and verified through a series of experiments. The results show that there is a notable similarity between the predicted uncertainty from the uncertainty estimation model and the experimental measurement uncertainty, which demonstrates the effectiveness of the method. With the establishment of the proposed uncertainty estimation method, the uncertainty associated with the measurement result can be determined conveniently.
Reducing Uncertainty: Implementation of Heisenberg Principle to Measure Company Performance
Directory of Open Access Journals (Sweden)
Anna Svirina
2015-08-01
Full Text Available The paper addresses the problem of uncertainty reduction in the estimation of future company performance, which results from the wide range of probable efficiencies of an enterprise's intangible assets. To reduce this problem, the paper suggests using quantum economy principles, i.e. applying the Heisenberg principle to measure the efficiency and potential of a company's intangible assets. It is proposed that for intangibles it is not possible to estimate both potential and efficiency at a certain point in time. To provide a proof of this thesis, data on resource potential and efficiency from mid-Russian companies was evaluated within a deterministic approach, which did not allow evaluation of the probability of achieving a certain resource efficiency, and within a quantum approach, which allowed estimation of the central point around which the probable efficiency of resources is concentrated. Visualization of these approaches was performed by means of LabView software. It was shown that for tangible assets a deterministic approach to performance estimation should be used, while for intangible assets the quantum approach allows better prediction of future performance. On the basis of these findings we propose a holistic approach towards the estimation of company resource efficiency in order to reduce uncertainty in modeling company performance.
Hassler, B.; Petropavlovskikh, I.; Staehelin, J.; August, T.; Bhartia, P. K.; Clerbaux, C.; Degenstein, D.; Maziere, M. De; Dinelli, B. M.; Dudhia, A.; Dufour, G.; Frith, S. M.; Froidevaux, L.; Godin-Beekmann, S.; Granville, J.; Harris, N. R. P.; Hoppel, K.; Hubert, D.; Kasai, Y.; Kurylo, M. J.; Kyrola, E.; Lambert, J.-C.; Levelt, P. F.; McElroy, C. T.; McPeters, R. D.; Munro, R.; Nakajima, H.; Parrish, A.; Raspollini, P.; Remsberg, E. E.; Rosenlof, K. H.; Rozanov, A.; Sano, T.; Sasano, Y.; Shiotani, M.; Zawodny, J. M.
2014-01-01
Peak stratospheric chlorofluorocarbon (CFC) and other ozone depleting substance (ODS) concentrations were reached in the mid- to late 1990s. Detection and attribution of the expected recovery of the stratospheric ozone layer in an atmosphere with reduced ODSs as well as efforts to understand the evolution of stratospheric ozone in the presence of increasing greenhouse gases are key current research topics. These require a critical examination of the ozone changes with an accurate knowledge of the spatial (geographical and vertical) and temporal ozone response. For such an examination, it is vital that the quality of the measurements used be as high as possible and measurement uncertainties well quantified. In preparation for the 2014 United Nations Environment Programme (UNEP)/World Meteorological Organization (WMO) Scientific Assessment of Ozone Depletion, the SPARC/IO3C/IGACO-O3/NDACC (SI2N) Initiative was designed to study and document changes in the global ozone profile distribution. This requires assessing long-term ozone profile data sets in regards to measurement stability and uncertainty characteristics. The ultimate goal is to establish suitability for estimating long-term ozone trends to contribute to ozone recovery studies. Some of the data sets have been improved as part of this initiative with updated versions now available. This summary presents an overview of stratospheric ozone profile measurement data sets (ground and satellite based) available for ozone recovery studies. Here we document measurement techniques, spatial and temporal coverage, vertical resolution, native units and measurement uncertainties. In addition, the latest data versions are briefly described (including data version updates as well as detailing multiple retrievals when available for a given satellite instrument). Archive location information for each data set is also given.
SI2N overview paper: ozone profile measurements: techniques, uncertainties and availability
Hassler, B.; Petropavlovskikh, I.; Staehelin, J.; August, T.; Bhartia, P. K.; Clerbaux, C.; Degenstein, D.; De Mazière, M.; Dinelli, B. M.; Dudhia, A.; Dufour, G.; Frith, S. M.; Froidevaux, L.; Godin-Beekmann, S.; Granville, J.; Harris, N. R. P.; Hoppel, K.; Hubert, D.; Kasai, Y.; Kurylo, M. J.; Kyrölä, E.; Lambert, J.-C.; Levelt, P. F.; McElroy, C. T.; McPeters, R. D.; Munro, R.; Nakajima, H.; Parrish, A.; Raspollini, P.; Remsberg, E. E.; Rosenlof, K. H.; Rozanov, A.; Sano, T.; Sasano, Y.; Shiotani, M.; Smit, H. G. J.; Stiller, G.; Tamminen, J.; Tarasick, D. W.; Urban, J.; van der A, R. J.; Veefkind, J. P.; Vigouroux, C.; von Clarmann, T.; von Savigny, C.; Walker, K. A.; Weber, M.; Wild, J.; Zawodny, J.
2013-11-01
Peak stratospheric chlorofluorocarbon (CFC) and other ozone depleting substance (ODS) concentrations were reached in the mid- to late 1990s. Detection and attribution of the expected recovery of the stratospheric ozone layer in an atmosphere with reduced ODSs as well as efforts to understand the evolution of stratospheric ozone in the presence of increasing greenhouse gases are key current research topics. These require a critical examination of the ozone changes with an accurate knowledge of the spatial (geographical and vertical) and temporal ozone response. For such an examination, it is vital that the quality of the measurements used be as high as possible and measurement uncertainties well quantified. In preparation for the 2014 United Nations Environment Programme (UNEP)/World Meteorological Organization (WMO) Scientific Assessment of Ozone Depletion, the SPARC/IO3C/IGACO-O3/NDACC (SI2N) initiative was designed to study and document changes in the global ozone profile distribution. This requires assessing long-term ozone profile data sets in regards to measurement stability and uncertainty characteristics. The ultimate goal is to establish suitability for estimating long-term ozone trends to contribute to ozone recovery studies. Some of the data sets have been improved as part of this initiative with updated versions now available. This summary presents an overview of stratospheric ozone profile measurement data sets (ground- and satellite-based) available for ozone recovery studies. Here we document measurement techniques, spatial and temporal coverage, vertical resolution, native units and measurement uncertainties. In addition, the latest data versions are briefly described (including data version updates as well as detailing multiple retrievals when available for a given satellite instrument). Archive location information for each data set is also given.
Hassler, B.; Petropavlovskikh, I.; Staehelin, J.; August, T.; Bhartia, P. K.; Clerbaux, C.; Degenstein, D.; De Mazière, M.; Dinelli, B. M.; Dudhia, A.; Dufour, G.; Frith, S. M.; Froidevaux, L.; Godin-Beekmann, S.; Granville, J.; Harris, N. R. P.; Hoppel, K.; Hubert, D.; Kasai, Y.; Kurylo, M. J.; Kyrölä, E.; Lambert, J.-C.; Levelt, P. F.; McElroy, C. T.; McPeters, R. D.; Munro, R.; Nakajima, H.; Parrish, A.; Raspollini, P.; Remsberg, E. E.; Rosenlof, K. H.; Rozanov, A.; Sano, T.; Sasano, Y.; Shiotani, M.; Smit, H. G. J.; Stiller, G.; Tamminen, J.; Tarasick, D. W.; Urban, J.; van der A, R. J.; Veefkind, J. P.; Vigouroux, C.; von Clarmann, T.; von Savigny, C.; Walker, K. A.; Weber, M.; Wild, J.; Zawodny, J. M.
2014-05-01
Peak stratospheric chlorofluorocarbon (CFC) and other ozone depleting substance (ODS) concentrations were reached in the mid- to late 1990s. Detection and attribution of the expected recovery of the stratospheric ozone layer in an atmosphere with reduced ODSs as well as efforts to understand the evolution of stratospheric ozone in the presence of increasing greenhouse gases are key current research topics. These require a critical examination of the ozone changes with an accurate knowledge of the spatial (geographical and vertical) and temporal ozone response. For such an examination, it is vital that the quality of the measurements used be as high as possible and measurement uncertainties well quantified. In preparation for the 2014 United Nations Environment Programme (UNEP)/World Meteorological Organization (WMO) Scientific Assessment of Ozone Depletion, the SPARC/IO3C/IGACO-O3/NDACC (SI2N) Initiative was designed to study and document changes in the global ozone profile distribution. This requires assessing long-term ozone profile data sets in regards to measurement stability and uncertainty characteristics. The ultimate goal is to establish suitability for estimating long-term ozone trends to contribute to ozone recovery studies. Some of the data sets have been improved as part of this initiative with updated versions now available. This summary presents an overview of stratospheric ozone profile measurement data sets (ground and satellite based) available for ozone recovery studies. Here we document measurement techniques, spatial and temporal coverage, vertical resolution, native units and measurement uncertainties. In addition, the latest data versions are briefly described (including data version updates as well as detailing multiple retrievals when available for a given satellite instrument). Archive location information for each data set is also given.
Directory of Open Access Journals (Sweden)
B. Hassler
2014-05-01
Full Text Available Peak stratospheric chlorofluorocarbon (CFC) and other ozone depleting substance (ODS) concentrations were reached in the mid- to late 1990s. Detection and attribution of the expected recovery of the stratospheric ozone layer in an atmosphere with reduced ODSs as well as efforts to understand the evolution of stratospheric ozone in the presence of increasing greenhouse gases are key current research topics. These require a critical examination of the ozone changes with an accurate knowledge of the spatial (geographical and vertical) and temporal ozone response. For such an examination, it is vital that the quality of the measurements used be as high as possible and measurement uncertainties well quantified. In preparation for the 2014 United Nations Environment Programme (UNEP)/World Meteorological Organization (WMO) Scientific Assessment of Ozone Depletion, the SPARC/IO3C/IGACO-O3/NDACC (SI2N) Initiative was designed to study and document changes in the global ozone profile distribution. This requires assessing long-term ozone profile data sets in regards to measurement stability and uncertainty characteristics. The ultimate goal is to establish suitability for estimating long-term ozone trends to contribute to ozone recovery studies. Some of the data sets have been improved as part of this initiative with updated versions now available. This summary presents an overview of stratospheric ozone profile measurement data sets (ground and satellite based) available for ozone recovery studies. Here we document measurement techniques, spatial and temporal coverage, vertical resolution, native units and measurement uncertainties. In addition, the latest data versions are briefly described (including data version updates as well as detailing multiple retrievals when available for a given satellite instrument). Archive location information for each data set is also given.
Range and number-of-levels effects in derived and stated measures of attribute importance
Verlegh, PWJ; Schifferstein, HNJ; Wittink, DR
2002-01-01
We study how the range of variation and the number of attribute levels affect five measures of attribute importance: full-profile conjoint estimates, ranges in attribute-level attractiveness ratings, regression coefficients, graded paired comparisons, and self-reported ratings. We find that all impor…
Directory of Open Access Journals (Sweden)
O'Connor Daniel P
2011-07-01
Full Text Available Background Physical activity (PA) adoption is essential for obesity prevention and control, yet ethnic minority women report lower levels of PA and are at higher risk for obesity and its comorbidities compared to Caucasians. Epidemiological studies and ecologic models of health behavior suggest that built environmental factors are associated with health behaviors like PA, but few studies have examined the association between built environment attribute concordance and PA, and no known studies have examined attribute concordance and PA adoption. Purpose The purpose of this study was to associate the degree of concordance between directly and indirectly measured built environment attributes with changes in PA over time among African American and Hispanic Latina women participating in a PA intervention. Method Women (N = 410) completed measures of PA at Time 1 (T1) and Time 2 (T2); environmental data collected at T1 were used to compute concordance between directly and indirectly measured built environment attributes. The association between changes in PA and the degree of concordance between each directly and indirectly measured environmental attribute was assessed using repeated measures analyses. Results There were no significant associations between built environment attribute concordance values and change in self-reported or objectively measured PA. Self-reported PA significantly increased over time (F(1,184) = 7.82, p = .006), but this increase did not vary by ethnicity or any built environment attribute concordance variable. Conclusions Built environment attribute concordance may not be associated with PA changes over time among minority women. In an effort to promote PA, investigators should clarify specific built environment attributes that are important for PA adoption and whether accurate perceptions of these attributes are necessary, particularly among the vulnerable population of minority women.
Farrance, Ian; Frenkel, Robert
2014-02-01
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM, however, does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates, the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship.
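The MCS procedure described above can be sketched in a few lines of Python rather than a spreadsheet. The functional relationship y = k·a/b, the distributions, and all numerical values below are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo trials

# Hypothetical functional relationship y = k * a / b, where the
# empirically derived "constant" k also carries an uncertainty.
a = rng.normal(10.0, 0.2, N)   # input quantity a: mean 10.0, u = 0.2
b = rng.normal(2.0, 0.05, N)   # input quantity b: mean 2.0, u = 0.05
k = rng.normal(1.5, 0.01, N)   # empirical constant k with its own u

y = k * a / b                  # propagate every simulated draw

# The spread of the simulated outputs estimates the output uncertainty
# without any of the partial derivatives required by GUM modelling.
print(f"y = {y.mean():.3f}, u(y) = {y.std(ddof=1):.3f}")
```

The same column-of-random-draws layout works in Excel with NORM.INV(RAND(), mean, sd) in place of the generator calls.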
Uncertainty of measurement or of mean value for the reliable classification of contaminated land.
Boon, Katy A; Ramsey, Michael H
2010-12-15
Classification of contaminated land is important for risk assessment and so it is vital to understand and quantify all of the uncertainties that are involved in the assessment of contaminated land. This paper uses a case study to compare two methods for assessing the uncertainty in site investigations (uncertainty of individual measurements, including that from sampling, and uncertainty of the mean value of all measurements within an area) and how the different methods affect the decisions made about a site. Using the 'uncertainty of the mean value' there is shown to be no significant possibility of 'significant harm' under UK guidance at one particular test site, but if you consider the 'uncertainty of the measurements' a significant proportion (50%) of the site is shown to be possibly contaminated. This raises doubts as to whether the current method using 'uncertainty of the mean' is sufficiently robust, and suggests that 'uncertainty of measurement' information may be preferable, or at least beneficial when used in conjunction.
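The contrast between the two views of uncertainty can be illustrated with a toy simulation. The guideline threshold, data distribution, and coverage factor below are hypothetical, chosen only to show how 'uncertainty of the mean value' and 'uncertainty of measurement' can classify the same site differently:

```python
import numpy as np

rng = np.random.default_rng(0)
threshold = 100.0                          # hypothetical guideline value
n = 25
measurements = rng.normal(90.0, 20.0, n)   # simulated site measurements

u_meas = measurements.std(ddof=1)          # uncertainty of one measurement
u_mean = u_meas / np.sqrt(n)               # uncertainty of the mean value

# 'Uncertainty of the mean' view: compare the mean's 95% upper
# confidence limit against the threshold for the whole area.
site_fails_by_mean = measurements.mean() + 1.645 * u_mean > threshold

# 'Uncertainty of measurement' view: fraction of locations whose own
# upper limit exceeds the threshold, i.e. possibly contaminated spots.
frac_possibly_contaminated = ((measurements + 1.645 * u_meas) > threshold).mean()

print(site_fails_by_mean, round(frac_possibly_contaminated, 2))
```

Because u_mean shrinks with n while u_meas does not, a site can pass on the mean while a large fraction of individual locations remain possibly contaminated, which is the paper's point.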
Tomlinson, M J
2016-09-01
This article suggests that diagnostic semen analysis has no more clinical value today than it had 25-30 years ago, and that both the confusion surrounding its evidence base (in terms of its relationship with conception) and the low level of confidence in it in the clinical setting are attributable to an associated high level of 'uncertainty'. Consideration of the concept of measurement uncertainty is mandatory for medical laboratories applying for the ISO 15189 standard. It is evident that the entire semen analysis process is prone to error at every step, from specimen collection to the reporting of results, and this serves to compound the uncertainty associated with diagnosis or prognosis. Perceived adherence to published guidelines for the assessment of sperm concentration, motility and morphology does not guarantee a reliable and reproducible test result. Moreover, the high level of uncertainty associated with manual sperm motility and morphology assessment can be attributed to subjectivity and the lack of a traceable standard. This article describes where and why uncertainty exists and suggests that semen analysis will continue to be of limited value until this uncertainty is more adequately considered and addressed. Although professional guidelines for good practice have provided the foundations for testing procedures for many years, the risk in following rather prescriptive guidance to the letter is that, unless they are based on an overwhelmingly firm evidence base, the quality of semen analysis will remain poor and progress towards the development of more innovative methods for investigating male infertility will be slow.
International Nuclear Information System (INIS)
The study carried out comprises several phases: delimitation of the scope of the study, identification of the paramount parameters, determination of the variation intervals of these parameters, sensitivity analysis and, finally, uncertainty analysis. (N.C.)
Including uncertainty in hazard analysis through fuzzy measures
International Nuclear Information System (INIS)
This paper presents a method for capturing the uncertainty expressed by a Hazard Analysis (HA) expert team when estimating the frequencies and consequences of accident sequences, and provides a sound mathematical framework for propagating this uncertainty to the risk estimates for these accident sequences. The uncertainty is readily expressed as distributions that can visually aid the analyst in determining the extent and source of risk uncertainty in HA accident sequences. The results can also be expressed as single statistics of the distribution, in a manner analogous to expressing a probabilistic distribution as a point-value statistic such as a mean or median. The study discussed here used data collected during the elicitation portion of an HA on a high-level waste transfer process to demonstrate the techniques for capturing uncertainty. These data came from observations of the uncertainty that HA team members expressed in assigning frequencies and consequences to accident sequences during an actual HA. This uncertainty was captured and manipulated using ideas from possibility theory. The result of this study is a practical method for displaying and assessing the uncertainty in the HA team estimates of the frequency and consequences of accident sequences. This uncertainty provides potentially valuable information about accident sequences that typically is lost in the HA process.
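One common way to propagate possibility-theoretic estimates of frequency and consequence, as in the approach described above, is interval arithmetic on alpha-cuts of triangular possibility distributions. The sketch below assumes triangular distributions and illustrative numbers; the paper's actual elicitation data and machinery may differ:

```python
# Triangular possibility distributions (min, mode, max) elicited from
# an HA team for one accident sequence -- the numbers are illustrative.
freq = (1e-4, 5e-4, 2e-3)   # events per year
cons = (10.0, 50.0, 200.0)  # consequence units

def alpha_cut(tri, alpha):
    """Interval containing the values with possibility level >= alpha."""
    lo, mode, hi = tri
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

# Propagate risk = freq * cons by interval arithmetic at each alpha level:
# for positive quantities the interval endpoints multiply directly.
for alpha in (0.0, 0.5, 1.0):
    f_lo, f_hi = alpha_cut(freq, alpha)
    c_lo, c_hi = alpha_cut(cons, alpha)
    print(f"alpha={alpha}: risk in [{f_lo * c_lo:.2e}, {f_hi * c_hi:.2e}]")
```

At alpha = 1 the interval collapses to the product of the modes (the team's "best estimate" risk), while alpha = 0 gives the widest possible interval, displaying the full extent of the elicited uncertainty.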
Definition of free form object for low uncertainty measurements on coordinate measuring machines
DEFF Research Database (Denmark)
Savio, Enrico; De Chiffre, Leonardo
This report is made as a part of the project Easytrac, an EU project under the programme Competitive and Sustainable Growth (Contract No. G6RD-CT-2000-00188), coordinated by UNIMETRIK S.A. (Spain). The project is concerned with low uncertainty calibrations on coordinate measuring machines. The Centre for Geometrical Metrology (CGM) at the Technical University of Denmark takes care of free form measurements, in collaboration with DIMEG, University of Padova, Italy. The present report describes the free form objects selected for the investigations on the uncertainty assessment procedures.
Leff, Stephen S.; Lefler, Elizabeth K.; Khera, Gagan S.; Paskewich, Brooke; Jawad, Abbas F.
2014-01-01
The current study illustrates how researchers developed and validated a cartoon-based adaptation of a written hostile attributional bias measure for a sample of urban, low-income, African American boys. A series of studies were conducted to develop cartoon illustrations to accompany a standard written hostile attributional bias vignette measure (Study 1), to determine initial psychometric properties (Study 2) and acceptability (Study 3), and to conduct a test-retest reliability trial of the adapted measure in a separate sample (Study 4). These studies utilize a participatory action research approach to measurement design and adaptation, and suggest that collaborations between researchers and key school stakeholders can lead to measures that are psychometrically strong, developmentally appropriate, and culturally sensitive. In addition, the cartoon-based hostile attributional bias measure appears to have promise as an assessment and/or outcome measure for aggression and bullying prevention programs conducted with urban African American boys. PMID:21800228
Solecky, Eric; Archie, Chas; Sendelbach, Matthew; Fiege, Ron; Zaitz, Mary; Shneyder, Dmitriy; Strocchia-rivera, Carlos; Munoz, Andres; Rangarajan, Srinivasan; Muth, William; Brendler, Andrew; Banke, Bill; Schulz, Bernd; Hartig, Carsten; Hoeft, Jon-Tobias; Vaid, Alok; Kelling, Mark; Bunday, Benjamin; Allgair, John
2009-03-01
Ever shrinking measurement uncertainty requirements are difficult to achieve for a typical metrology toolset, especially over the entire expected life of the fleet. Many times, acceptable performance can be demonstrated during brief evaluation periods on a tool or two in the fleet. Over time and across the rest of the fleet, the most demanding processes often have measurement uncertainty concerns that prevent optimal process control, thereby limiting premium part yield, especially on the most aggressive technology nodes. Current metrology statistical process control (SPC) monitoring techniques focus on maintaining the performance of the fleet where toolset control chart limits are derived from a stable time period. These tools are prevented from measuring product when a statistical deviation is detected. Lastly, these charts are primarily concerned with daily fluctuations and do not consider the overall measurement uncertainty. It is possible that the control charts implemented for a given toolset suggest a healthy fleet while many of these demanding processes continue to suffer measurement uncertainty issues. This is especially true when extendibility is expected in a given generation of toolset. With this said, there is a need to continually improve the measurement uncertainty of the fleet until it can no longer meet the needed requirements at which point new technology needs to be entertained. This paper explores new methods in analyzing existing SPC monitor data to assess the measurement performance of the fleet and look for opportunities to drive improvements. Long term monitor data from a fleet of overlay and scatterometry tools will be analyzed. The paper also discusses using other methods besides SPC monitors to ensure the fleet stays matched; a set of SPC monitors provides a good baseline of fleet stability but it cannot represent all measurement scenarios happening in product recipes. 
The analyses presented deal with measurement uncertainty on non-measurement
Velocity Correction and Measurement Uncertainty Analysis of Light Screen Velocity Measuring Method
Institute of Scientific and Technical Information of China (English)
ZHENG Bin; ZUO Zhao-lu; HOU Wen
2012-01-01
The light screen velocity measuring method, with its unique advantages, has been widely used to measure the velocity of various moving bodies. Because large moving bodies are subjected to substantial air resistance and friction during light screen velocity measurement, the principle of velocity correction was proposed and a velocity correction equation was derived. The light screen method was then used to measure the velocity of large moving bodies with complex velocity attenuation, and good results were obtained in practical tests. The measurement uncertainty after velocity correction was calculated.
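A minimal sketch of a velocity correction of the kind described, together with timing-uncertainty propagation. All numbers, and the assumption of constant deceleration between the two screens, are illustrative, not taken from the paper:

```python
# Velocity of a large body from screen separation and transit time,
# then a first-order correction for deceleration by drag and friction.
d = 2.0        # screen separation in m (illustrative)
dt = 0.004     # measured transit time in s
m = 5.0        # body mass in kg
F = 40.0       # assumed total resistance (drag + friction) in N

v_avg = d / dt                 # uncorrected (average) velocity over d
a = F / m                      # deceleration caused by the resistance
v1 = v_avg + a * dt / 2        # corrected velocity at the first screen

# Propagate the timing uncertainty into the velocity result
u_dt = 1e-6                    # assumed timing uncertainty in s
u_v = v_avg * u_dt / dt        # first-order uncertainty of v_avg

print(f"v1 = {v1:.3f} m/s, u(v) = {u_v:.3f} m/s")
```

Under constant deceleration, the average velocity over the interval equals the instantaneous velocity at the midpoint in time, so the velocity at the first screen is exactly v_avg + a·dt/2.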
Dynamic risk measuring under model uncertainty: taking advantage of the hidden probability measure
Bion-Nadal, Jocelyne
2010-01-01
We study dynamic risk measures in a very general framework that enables modeling of uncertainty and of processes with jumps. We previously showed the existence of a canonical equivalence class of probability measures hidden behind a given, possibly non-dominated, set of probability measures. Taking advantage of this result, we exhibit a dual representation that completely characterizes the dynamic risk measure. We prove continuity and characterize time consistency. Then, we prove regularity for all processes associated with time-consistent convex dynamic risk measures. We also study factorization through time for sublinear risk measures. Finally we consider examples (uncertain volatility and G-expectations).
International Nuclear Information System (INIS)
During a D and D or ER process, containers of radioactive waste are normally generated. The activity can commonly be determined by gamma spectroscopy, but frequently the measurement conditions are not conducive to precise sample-detector geometries, and usually the radioactive material is not in a homogeneous distribution. What is the best method to accurately assay these containers - sampling followed by laboratory analysis, or in-situ spectroscopy? What is the uncertainty of the final result? To help answer these questions, the Canberra tool ISOCS Uncertainty Estimator [IUE] was used to mathematically simulate and evaluate several different measurement scenarios and to estimate the uncertainty of the measurement and the sampling process. Several representative containers and source distributions were mathematically defined and evaluated to determine the in-situ measurement uncertainty due to sample non-uniformity. In the first example, a typical field situation requiring the measurement of 200-liter drums was evaluated. A sensitivity analysis was done to show which parameters contributed the most to the uncertainty, and then an efficiency uncertainty calculation was performed. In the second example, a group of 200-liter drums with various types of non-homogeneous distributions was created, and then measurements were simulated with different detector arrangements to see how the uncertainty varied. In the third example, a truck filled with non-uniform soil was first measured with multiple in-situ detectors to determine the measurement uncertainty. Then composite samples were extracted and the sampling uncertainty computed for comparison to the field measurement uncertainty. (authors)
Guitar Chords Classification Using Uncertainty Measurements of Frequency Bins
Directory of Open Access Journals (Sweden)
Jesus Guerrero-Turrubiates
2015-01-01
Full Text Available This paper presents a method to perform chord classification from recorded audio. The signal harmonics are obtained by using the Fast Fourier Transform, and timbral information is suppressed by spectral whitening. A multiple fundamental frequency estimation of the whitened data is achieved by adding attenuated harmonics via a weighting function. This paper proposes a method that performs feature selection by thresholding the uncertainty of all frequency bins. Those measurements under the threshold are removed from the signal in the frequency domain. This allows a reduction of 95.53% of the signal characteristics, and the remaining 4.47% of frequency bins are used as enhanced information for the classifier. An Artificial Neural Network was utilized to classify four types of chords: major, minor, major 7th, and minor 7th. Played in each of the twelve musical notes, these give a total of 48 different chords. Two reference methods (based on Hidden Markov Models) were compared with the method proposed in this paper using the same database for the evaluation test. In most of the performed tests, the proposed method achieved a reasonably high performance, with an accuracy of 93%.
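The whitening-and-thresholding front end described above can be sketched roughly as follows. The smoothing window, threshold percentile, and synthetic test signal are assumptions for illustration; the paper's exact weighting function and uncertainty threshold are not reproduced here:

```python
import numpy as np

fs = 8000                                  # sample rate in Hz (illustrative)
t = np.arange(4000) / fs                   # 0.5 s of audio
# Synthetic "chord": a few fundamentals plus octave partials
signal = sum(np.sin(2 * np.pi * f * t) for f in (196, 247, 294, 392, 494))
signal += 0.05 * np.random.default_rng(1).normal(size=t.size)  # noise

spectrum = np.abs(np.fft.rfft(signal))     # magnitude spectrum

# Crude spectral whitening: divide by a smoothed spectral envelope so
# that timbral (envelope) information is suppressed and peaks stand out.
envelope = np.convolve(spectrum, np.ones(51) / 51, mode="same") + 1e-9
whitened = spectrum / envelope

# Feature selection by thresholding: discard the large fraction of
# uninformative bins, keeping only the strongest ~4.5% as features.
threshold = np.percentile(whitened, 95.5)
kept = whitened >= threshold
print(f"kept {kept.mean():.2%} of {kept.size} frequency bins")
```

The surviving bins would then be fed to the classifier (an artificial neural network in the paper) in place of the full spectrum.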
Reconsideration of the Uncertainty Relations and Quantum Measurements
Directory of Open Access Journals (Sweden)
Dumitru S.
2008-04-01
Full Text Available Discussions on uncertainty relations (UR) and quantum measurements (QMS) have persisted to the present day in publications about quantum mechanics (QM). They originate mainly from the conventional interpretation of UR (CIUR). Most of the QM literature underestimates the fact that, over the years, many deficiencies regarding CIUR have been signaled. As a rule the alluded deficiencies were remarked on disparately and discussed as punctual and non-essential questions. Here we approach an investigation of the mentioned deficiencies collected into a conclusive ensemble. Subsequently we expose a reconsideration of the major problems referring to UR and QMS. We reveal that all the basic presumptions of CIUR are troubled by insurmountable deficiencies which require the indubitable failure of CIUR and its necessary abandonment. Therefore the UR must be deprived of their status as crucial pieces of physics. So, the original versions of UR appear as being in the posture of either (i) thought-experimental fictions or (ii) simple QM formulae, and any other versions of them have no connection with QMS. Then QMS must be viewed as an additional subject compared with the usual questions of QM. For a theoretical description of QMS we propose an information-transmission model, in which the quantum observables are considered as random variables. Our approach leads to natural solutions and simplifications for many problems regarding UR and QMS.
DEFF Research Database (Denmark)
Morace, Renata Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo
2005-01-01
This paper deals with the uncertainty estimation of measurements performed on optical coordinate measuring machines (CMMs). Two different methods were used to assess the uncertainty of circle diameter measurements using an optical CMM: the sensitivity analysis developing an uncertainty budget and...... the substitution method based on measuring calibrated workpieces. Three holes with nominal diameter values in the range from 2 mm to 6 mm were measured on an optical CMM equipped with a CCD sensor and expanded measuring uncertainties were estimated to be in the range of 1-2 µm....
Dynamic measurements and uncertainty estimation of clinical thermometers using Monte Carlo method
Ogorevc, Jaka; Bojkovski, Jovan; Pušnik, Igor; Drnovšek, Janko
2016-09-01
Clinical thermometers in intensive care units are used for the continuous measurement of body temperature. This study describes a procedure for dynamic measurement uncertainty evaluation in order to examine the requirements for clinical thermometer dynamic properties in standards and recommendations. Thermistors were used as temperature sensors, transient temperature measurements were performed in water and air, and the measurement data were processed to investigate the thermometer dynamic properties. The thermometers were mathematically modelled, and a Monte Carlo method was implemented for dynamic measurement uncertainty evaluation. The measurement uncertainty was analysed for static and dynamic conditions. Results showed that the dynamic uncertainty is much larger than the steady-state uncertainty. The results of the dynamic uncertainty analysis were applied to an example of clinical measurements and compared to the current requirements in the ISO standard for clinical thermometers. It can be concluded that there is no need for dynamic evaluation of clinical thermometers for continuous measurement, as the dynamic measurement uncertainty remained within the target uncertainty. In the case of intermittent predictive thermometers, however, the thermometer dynamic properties had a significant impact on the measurement result. Estimation of dynamic uncertainty is therefore crucial for the assurance of traceable and comparable measurements.
Measurement Uncertainty Budget of the PMV Thermal Comfort Equation
Ekici, Can
2016-05-01
Fanger's predicted mean vote (PMV) equation combines the quantitative effects of air temperature, mean radiant temperature, air velocity, humidity, activity level, and clothing thermal resistance. PMV is a mathematical model of thermal comfort developed by Fanger. In this study, the uncertainty budget of the PMV equation was developed according to the GUM. An example of the uncertainty model for PMV is given in the exemplification section. Sensitivity coefficients were derived from the PMV equation, and the uncertainty budgets are presented in tables. A mathematical model of the sensitivity coefficients of T_a, h_c, T_{mrt}, T_{cl}, and P_a is given, along with the uncertainty budgets for h_c, T_{cl}, and P_a.
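The GUM combination underlying such a budget is the root-sum-square of sensitivity-weighted standard uncertainties, u_c = sqrt(sum_i (c_i u_i)^2). A sketch with invented coefficients (these are illustrative placeholders, not Fanger's actual partial derivatives):

```python
import math

# Hypothetical sensitivity coefficients c_i (from partial derivatives of PMV)
# paired with standard uncertainties u_i of the inputs -- illustrative only.
budget = {
    "air temperature T_a":    (0.30, 0.15),    # (c_i, u_i)
    "mean radiant T_mrt":     (0.17, 0.30),
    "air velocity v":         (-1.10, 0.05),
    "vapour pressure P_a":    (0.0017, 50.0),
}

u_c = math.sqrt(sum((c * u) ** 2 for c, u in budget.values()))
U = 2.0 * u_c  # expanded uncertainty, coverage factor k = 2
print(round(u_c, 3), round(U, 3))
```

Listing the individual (c_i * u_i)^2 terms, as the paper's tables do, shows at a glance which input dominates the combined uncertainty.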
Estimation of the uncertainty of analyte concentration from the measurement uncertainty.
Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F
2015-09-01
Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
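The four-parameter logistic model named in the abstract has a closed-form inverse, which is what makes estimating an analyte concentration from a measured response tractable. A minimal sketch (parameter values are illustrative, not from the paper):

```python
def logistic4(x, a, b, c, d):
    """Four-parameter logistic: response at concentration x.
    a = zero-dose response, d = infinite-dose response, c = mid-point, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse4(y, a, b, c, d):
    """Analyte concentration estimated from response y (inverse of the 4PL)."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Illustrative parameters and a round-trip check at concentration 30.0.
a, b, c, d = 2.0, 1.2, 50.0, 0.1
y = logistic4(30.0, a, b, c, d)
x_hat = inverse4(y, a, b, c, d)
print(round(x_hat, 6))  # recovers 30.0
```

Near the asymptotes a and d the inverse becomes very steep, which is exactly where the abstract reports the concentration uncertainty blowing up.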
Energy Technology Data Exchange (ETDEWEB)
Jones, D.W.
2002-05-16
In previous reports, we identified two potentially important issues, solutions to which would increase the attractiveness of DOE-developed technologies in commercial buildings energy systems. One issue is that, in addition to saving energy, many new technologies offer non-energy benefits that contribute to building productivity (firm profitability). The second issue is that new technologies are typically unproven in the eyes of decision makers and must bear risk premiums that offset cost advantages derived from laboratory calculations. Even though a compelling case can be made for the importance of these issues, for building decision makers to incorporate them in business decisions, and for DOE to use them in R&D program planning, there must be robust empirical evidence of their existence and size. This paper investigates how such measurements could be made and offers recommendations on the preferred options. There is currently little systematic information on either of these concepts in the literature. Of the two, there is somewhat more information on non-energy benefits, but little regarding office buildings. Office building productivity impacts can be observed casually, but must be estimated statistically, because buildings have many interacting attributes and observations based on direct behavior can easily confound attribution. For example, absenteeism is easily observed; however, absenteeism may be down because a healthier space-conditioning system was put into place, because the weather was milder, or because firm policy regarding sick days had changed. There is also a general dearth of appropriate information for purposes of estimation. To overcome these difficulties, we propose developing a new database and applying the technique of hedonic price analysis. This technique has been used extensively in the analysis of residential dwellings. There is also a literature on its application to commercial and industrial
Weight sensitivity measurement, analysis, and application in multi-attribute evaluation
Zhao, Yong; Huang, Chongyin; Chen, Yang
2013-11-01
Weights are used to measure the relative importance of multiple attributes or objectives, and they influence evaluation or decision results to a great degree. Analyzing weight sensitivity is therefore an important task in multi-attribute evaluation or decision making. A measuring method based on the inclined angle between two vectors is proposed in this paper to analyse the weight sensitivity of a multi-attribute evaluation with the isotonicity characteristic. The method uses the cosine of the inclined angle to measure weight sensitivity based on preferences or preference combinations. The concepts of sensitivity space, degree, and angle are introduced, and the corresponding measurement method is discussed and proved. The method is then applied to the selection of water environment protection projects in Heyuan City.
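The cosine measure itself is simple to compute; a sketch comparing a base weight vector with a perturbed one (the weight values are invented for illustration):

```python
import math

def cosine(u, v):
    """Cosine of the inclined angle between two weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Base weights for three attributes vs. a perturbed set: the closer the cosine
# is to 1, the smaller the angular change, i.e. the less sensitive the result.
w0 = [0.50, 0.30, 0.20]
w1 = [0.45, 0.35, 0.20]
print(round(cosine(w0, w1), 4))
```

A sensitivity threshold can then be expressed directly as a minimum admissible cosine (maximum inclined angle) between the original and perturbed weight vectors.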
Fazzari, D M
2001-01-01
This report presents the results of an evaluation of the Total Measurement Uncertainty (TMU) for the Canberra manufactured Segmented Gamma Scanner Assay System (SGSAS) as employed at the Hanford Plutonium Finishing Plant (PFP). In this document, TMU embodies the combined uncertainties due to all of the individual random and systematic sources of measurement uncertainty. It includes uncertainties arising from corrections and factors applied to the analysis of transuranic waste to compensate for inhomogeneities and interferences from the waste matrix and radioactive components. These include uncertainty components for any assumptions contained in the calibration of the system or computation of the data. Uncertainties are propagated at 1 sigma. The final total measurement uncertainty value is reported at the 95% confidence level. The SGSAS is a gamma assay system that is used to assay plutonium and uranium waste. The SGSAS system can be used in a stand-alone mode to perform the NDA characterization of a containe...
The uncertainty in physical measurements an introduction to data analysis in the physics laboratory
Fornasini, Paolo
2008-01-01
All measurements of physical quantities are affected by uncertainty. Understanding the origin of uncertainty, evaluating its extent and suitably taking it into account in data analysis is essential for assessing the degree of accuracy of phenomenological relationships and physical laws in both scientific research and technological applications. The Uncertainty in Physical Measurements: An Introduction to Data Analysis in the Physics Laboratory presents an introduction to uncertainty and to some of the most common procedures of data analysis. This book will serve the reader well by filling the gap between tutorial textbooks and highly specialized monographs. The book is divided into three parts. The first part is a phenomenological introduction to measurement and uncertainty: properties of instruments, different causes and corresponding expressions of uncertainty, histograms and distributions, and unified expression of uncertainty. The second part contains an introduction to probability theory, random variable...
Evaluation of Measurement Uncertainty in Neutron Activation Analysis using Research Reactor
Energy Technology Data Exchange (ETDEWEB)
Chung, Y. S.; Moon, J. H.; Sun, G. M.; Kim, S. H.; Baek, S. Y.; Lim, J. M.; Lee, Y. N.; Kim, H. R
2007-02-15
This report summarizes the general and technical requirements, methods, and results of the measurement uncertainty assessment that should be performed to maintain quality assurance and traceability in the NAA technique using the HANARO research reactor. It will serve as basic information to effectively support accredited analytical services in the future. For the assessment of measurement uncertainty, environmental certified reference materials were analysed and the experimental results evaluated using the ISO-GUM and Monte Carlo simulation (MCS) methods. First, the standard uncertainties of the predominant parameters in NAA are evaluated quantitatively for the measured element values, and then the combined uncertainty is calculated by applying the rule of uncertainty propagation. In addition, the contribution of each individual standard uncertainty to the combined uncertainty is estimated, and ways to minimize them are reviewed.
Uncertainty of the beam energy measurement in the e+e- collision using Compton backscattering
Mo, Xiao-Hu
2014-10-01
The beam energy in e+e- collisions is measured using Compton backscattering. The uncertainty of this measurement process is studied by means of analytical formulas, and the effects of varying energy spread and energy drift on the systematic uncertainty estimate are also studied with a Monte Carlo sampling technique. These quantitative conclusions are especially important for understanding the uncertainty of the beam energy measurement system.
Radiation-induced statistical uncertainty in the threshold voltage measurement of MOSFET dosimeters
International Nuclear Information System (INIS)
The results of a recent study on the limiting uncertainties in the measurement of photon radiation dose with MOSFET dosimeters are reported. The statistical uncertainty in dose measurement from a single device has been measured before and after irradiation. The resulting increase in 1/f noise with radiation dose has been investigated via various analytical models. The limit of uncertainty in the ubiquitous linear trend of threshold voltage with dose has been measured and compared to two nonlinear models. Inter-device uncertainty has been investigated in a group of 40 devices, and preliminary evidence for kurtosis and skewness in the distributions for devices without external bias has been observed.
Directory of Open Access Journals (Sweden)
Zhan Zhiqiang
2016-01-01
Full Text Available In the traceability of digital modulation quality parameters, the Error Vector Magnitude, Magnitude Error, and Phase Error must be traced, and the measurement uncertainty of these parameters needs to be assessed. Although the calibration specification JJF1128-2004, Calibration Specification for Vector Signal Analyzers, has been published domestically, its measurement uncertainty evaluation is unreasonable: the parameters selected are incorrect, and not all error terms are included in the evaluation. This article lists the formulas for magnitude error and phase error, then presents the measurement uncertainty evaluation processes for the magnitude and phase errors.
Directory of Open Access Journals (Sweden)
Ling Mingxiang
2014-12-01
Full Text Available Measurement uncertainty evaluation based on the Monte Carlo method (MCM) commonly assumes that all uncertainty sources are independent. For some measurement problems, however, the correlation between input quantities is important and even essential. The purpose of this paper is to provide an MCM-based uncertainty evaluation method that can handle correlated cases, especially measurements in which the uncertainty sources are correlated and follow non-Gaussian distributions. In this method, a linear-nonlinear transformation technique was developed to generate correlated random sampling sequences with prescribed marginal probability distributions and correlation coefficients. Measurement of the arm stretch of a precision centrifuge of the order of 10^-6 was implemented with a high-precision approach, and the associated uncertainty evaluation was carried out using both the proposed method and the method of the Guide to the Expression of Uncertainty in Measurement (GUM). Finally, the results obtained were compared and discussed.
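The paper's linear-nonlinear transformation for non-Gaussian marginals is not reproduced here; as a sketch of the underlying idea for the Gaussian case, correlated input samples can be generated with a 2x2 Cholesky factor and propagated through the measurement model (the measurand and all numbers are assumptions for illustration):

```python
import math
import random
import statistics

random.seed(1)

def correlated_normals(n, rho):
    """Generate n pairs of standard normals with correlation rho (2x2 Cholesky)."""
    pairs = []
    for _ in range(n):
        z1 = random.gauss(0.0, 1.0)
        z2 = random.gauss(0.0, 1.0)
        pairs.append((z1, rho * z1 + math.sqrt(1.0 - rho ** 2) * z2))
    return pairs

# Measurand y = x1 + x2 with u(x1) = u(x2) = 1 and correlation r(x1, x2) = 0.8.
# Analytically u(y) = sqrt(2 + 2 * 0.8) ~= 1.897; the MC estimate should agree.
rho = 0.8
ys = [x1 + x2 for x1, x2 in correlated_normals(50000, rho)]
u_y = statistics.stdev(ys)
print(round(u_y, 2))
```

Ignoring the correlation here would give u(y) = sqrt(2) ~= 1.41, a substantial underestimate, which is the motivating point of the paper.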
Groschl, Andreas; Gotz, Jurgen; Loderer, Andreas; Bills, Paul J.; Hausotte, Tino
2015-01-01
In verifying the tolerance specification and identifying the zone of conformity of a particular component, an adequate determination of the task-related measurement uncertainty of the measurement method used is required, in accordance with part one of the standard “Geometrical Product Specifications” as well as with the “Guide to the Expression of Uncertainty in Measurement”. Although measurement uncertainty is a central subject in the field of metrology and is certainly consider...
DEFF Research Database (Denmark)
Müller, Pavel; Hiller, Jochen; Dai, Y.;
2014-01-01
This paper presents the application of the substitution method for the estimation of measurement uncertainties using calibrated workpieces in X-ray computed tomography (CT) metrology. We have shown that this, well accepted method for uncertainty estimation using tactile coordinate measuring...... machines, can be applied to dimensional CT measurements. The method is based on repeated measurements carried out on a calibrated master piece. The master piece is a component of a dose engine from an insulin pen. Measurement uncertainties estimated from the repeated measurements of the master piece were...... transferred on to additionally scanned uncalibrated workpieces which provided the necessary link for achieving traceable measurements. © 2014 CIRP....
Directory of Open Access Journals (Sweden)
Přečková Petra
2012-04-01
Full Text Available Abstract Background Narrative medical reports do not use standardized terminology and often provide insufficient information for statistical processing and medical decision making. The objectives of the paper are to propose a method for measuring diversity in medical reports written in any language, to compare the diversities of narrative and structured medical reports, and to map attributes and terms to selected classification systems. Methods A new method based on a general concept of f-diversity is proposed for measuring the diversity of medical reports in any language. The method is based on categorized attributes recorded in narrative or structured medical reports and on international classification systems. Values of categories are expressed by terms. Using SNOMED CT and ICD-10, we map attributes and terms to predefined codes. We use f-diversities of the Gini-Simpson and Number of Categories types to compare the diversities of narrative and structured medical reports. The comparison is based on attributes selected from the Minimal Data Model for Cardiology (MDMC). Results We compared the diversities of 110 Czech narrative medical reports and 1119 Czech structured medical reports. The selected categorized attributes of MDMC mostly had different numbers of categories and used different terms in narrative and structured reports. We found more than 60% of MDMC attributes in SNOMED CT. We showed that attributes in narrative medical reports had greater diversity than the same attributes in structured medical reports. Further, we replaced each category value (term) used for attributes in narrative medical reports with the closest term and category used in MDMC for structured medical reports. We found that relative Gini-Simpson diversities in structured medical reports were significantly smaller than those in narrative medical reports, except for the "Allergy" attribute. Conclusions Terminology in narrative medical reports is not standardized. Therefore it is nearly
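The Gini-Simpson index used in this comparison is straightforward to compute from category frequencies. A sketch with hypothetical attribute values (the clinical terms below are invented for illustration, not from the Czech reports):

```python
from collections import Counter

def gini_simpson(values):
    """Gini-Simpson diversity: 1 - sum of p_i^2 over category frequencies p_i."""
    counts = Counter(values)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical terms recorded for one attribute: free-text narrative reports
# spread the same concept over many terms; structured reports use fixed codes.
narrative = ["MI", "myocardial infarction", "heart attack", "MI", "infarct"]
structured = ["I21", "I21", "I21", "I21", "I22"]
print(gini_simpson(narrative), gini_simpson(structured))
```

The narrative list scores higher diversity than the structured one, mirroring the paper's finding that free-text terminology inflates diversity for the same underlying attribute.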
CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements
Energy Technology Data Exchange (ETDEWEB)
Bergman, Rolf; Paget, Maria L.; Richman, Eric E.
2011-03-31
With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing to the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, the Lighting Facts Label, the ENERGY STAR® energy-efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement, individually as well as in combination. A major issue with testing, and with the resulting accuracy of the tests, is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice, and their combined value with other equipment and processes in the same test, are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements of light-emitting diode (LED) products against other technologies as well as competing products. This study provides a detailed, step-by-step method for determining uncertainty in lumen measurements, developed in close cooperation with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. The steps of the procedure are described, and a spreadsheet format adapted for integrating-sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate
The importance of measurement uncertainty in the calculation of model evaluation error statistics has recently been noted in the literature. The impact of measurement uncertainty on calibration results points to a potential vague zone in the field of watershed modeling, where the assumption ...
Thibodeau, Michel A; Carleton, R Nicholas; McEvoy, Peter M; Zvolensky, Michael J; Brandt, Charles P; Boelen, Paul A; Mahoney, Alison E J; Deacon, Brett J; Asmundson, Gordon J G
2015-01-01
Intolerance of uncertainty (IU) is a construct of growing prominence in literature on anxiety disorders and major depressive disorder. Existing measures of IU do not define the uncertainty that respondents perceive as distressing. To address this limitation, we developed eight scales measuring disor
Aleskerov, F; Shvydun, S
2016-01-01
We propose a new method for assessing agents' influence in network structures, which takes into consideration node attributes, individual and group influences of nodes, and the intensity of interactions. This approach helps to identify both explicit and hidden central elements that cannot be detected by classical centrality measures or other indices.
The Attributive Theory of Quality: A Model for Quality Measurement in Higher Education.
Afshar, Arash
A theoretical basis for defining and measuring the quality of institutions of higher education, namely for accreditation purposes, is developed. The theory, the Attributive Theory of Quality, is illustrated using a calculation model that is based on general systems theory. The theory postulates that quality only exists in relation to the…
How Should Attributions Be Measured? A Reanalysis of Data from Elig and Frieze.
Maruyama, Geoffrey
1982-01-01
T.W. Elig and I.H. Frieze used a multitrait, multimethod approach to contrast three methods for measuring attributions: unstructured/open-ended, structured/unidimensional, and structured/ipsative. This paper reanalyzed their data using confirmatory factor analysis techniques. (Author/PN)
Ariza, Adriana Alexandra Aparicio; Ayala Blanco, Elizabeth; García Sánchez, Luis Eduardo; García Sánchez, Carlos Eduardo
2015-06-01
Natural gas is a mixture that contains hydrocarbons and other compounds, such as CO2 and N2. Natural gas composition is commonly measured by gas chromatography, and this measurement is important for the calculation of some thermodynamic properties that determine its commercial value. The estimation of uncertainty in chromatographic measurement is essential for an adequate presentation of the results and a necessary tool for supporting decision making. Various approaches have been proposed for the uncertainty estimation in chromatographic measurement. The present work is an evaluation of three approaches of uncertainty estimation, where two of them (guide to the expression of uncertainty in measurement method and prediction method) were compared with the Monte Carlo method, which has a wider scope of application. The aforementioned methods for uncertainty estimation were applied to gas chromatography assays of three different samples of natural gas. The results indicated that the prediction method and the guide to the expression of uncertainty in measurement method (in the simple version used) are not adequate to calculate the uncertainty in chromatography measurement, because uncertainty estimations obtained by those approaches are in general lower than those given by the Monte Carlo method.
Uncertainty in Citizen Science observations: from measurement to user perception
Lahoz, William; Schneider, Philipp; Castell, Nuria
2016-04-01
Citizen Science activities concern general public engagement in scientific research activities when citizens actively contribute to science either with their intellectual effort or surrounding knowledge or with their tools and resources. The advent of technologies such as the Internet and smartphones, and the growth in their usage, has significantly increased the potential benefits from Citizen Science activities. Citizen Science observations from low-cost sensors, smartphones and Citizen Observatories, provide a novel and recent development in platforms for observing the Earth System, with the opportunity to extend the range of observational platforms available to society to spatio-temporal scales (10-100s m; 1 hr or less) highly relevant to citizen needs. The potential value of Citizen Science is high, with applications in science, education, social aspects, and policy aspects, but this potential, particularly for citizens and policymakers, remains largely untapped. Key areas where Citizen Science data start to have demonstrable benefits include GEOSS Societal Benefit Areas such as Health and Weather. Citizen Science observations have many challenges, including simulation of smaller spatial scales, noisy data, combination with traditional observational methods (satellite and in situ data), and assessment, representation and visualization of uncertainty. Within these challenges, that of the assessment and representation of uncertainty and its communication to users is fundamental, as it provides qualitative and/or quantitative information that influences the belief users will have in environmental information. This presentation will discuss the challenges in assessment and representation of uncertainty in Citizen Science observations, its communication to users, including the use of visualization, and the perception of this uncertainty information by users of Citizen Science observations.
International Nuclear Information System (INIS)
This report describes the software development for the plutonium attribute verification system--AVNG. A brief synopsis of the technical solution for the measurement system is presented. The main tasks for the software development that is underway are formulated. The development tasks are shown in software structural flowcharts, measurement system state diagram and a description of the software. The current status of the AVNG software development is elucidated.
Doubt-free uncertainty in measurement an introduction for engineers and students
Ratcliffe, Colin
2015-01-01
This volume presents measurement uncertainty and uncertainty budgets in a form accessible to practicing engineers and engineering students from across a wide range of disciplines. The book gives a detailed explanation of the methods presented by NIST in the “GUM” – Guide to Uncertainty of Measurement. Emphasis is placed on explaining the background and meaning of the topics, while keeping the level of mathematics at the minimum level necessary. Dr. Colin Ratcliffe, USNA, and Bridget Ratcliffe, Johns Hopkins, develop uncertainty budgets and explain their use. In some examples, the budget may show a process is already adequate and where costs can be saved. In other examples, the budget may show the process is inadequate and needs improvement. The book demonstrates how uncertainty budgets help identify the most cost effective place to make changes. In addition, an extensive fully-worked case study leads readers through all issues related to an uncertainty analysis, including a variety of different types of...
Real Graphs from Real Data: Experiencing the Concepts of Measurement and Uncertainty
Farmer, Stuart
2012-01-01
A simple activity using cheap and readily available materials is described that allows students to experience first hand many of the concepts of measurement, uncertainty and graph drawing without laborious measuring or calculation. (Contains 9 figures.)
Muelaner, J. E.; Wang, Z.; Keogh, P. S.; Brownell, J.; Fisher, D.
2016-11-01
Understanding the uncertainty of dimensional measurements for large products such as aircraft, spacecraft and wind turbines is fundamental to improving efficiency in these products. Much work has been done to ascertain the uncertainty associated with the main types of instruments used, based on laser tracking and photogrammetry, and the propagation of this uncertainty through networked measurements. Unfortunately this is not sufficient to understand the combined uncertainty of industrial measurements, which include secondary tooling and datum structures used to locate the coordinate frame. This paper presents for the first time a complete evaluation of the uncertainty of large scale industrial measurement processes. Generic analysis and design rules are proven through uncertainty evaluation and optimization for the measurement of a large aero gas turbine engine. This shows how the instrument uncertainty can be considered to be negligible. Before optimization the dominant source of uncertainty was the tooling design, after optimization the dominant source was thermal expansion of the engine; meaning that no further improvement can be made without measurement in a temperature controlled environment. These results will have a significant impact on the ability of aircraft and wind turbines to improve efficiency and therefore reduce carbon emissions, as well as the improved reliability of these products.
Sadef, Yumna; Poulsen, Tjalfe G; Bester, Kai
2014-05-01
Reductions in measurement uncertainty for organic micro-pollutant concentrations in full scale compost piles using comprehensive sampling and allowing equilibration time before sampling were quantified. Results showed that both application of a comprehensive sampling procedure (involving sample crushing) and allowing one week of equilibration time before sampling reduces measurement uncertainty by about 50%. Results further showed that for measurements carried out on samples collected using a comprehensive procedure, measurement uncertainty was associated exclusively with the analytic methods applied. Application of statistical analyses confirmed that these results were significant at the 95% confidence level. Overall implications of these results are (1) that it is possible to eliminate uncertainty associated with material inhomogeneity and (2) that in order to reduce uncertainty, sampling procedure is very important early in the composting process but less so later in the process.
J. K. Spiegel; Zieger, P.; Bukowiecki, N.; E. Hammer; Weingartner, E.; W. Eugster
2012-01-01
Droplet size spectra measurements are crucial to obtain a quantitative microphysical description of clouds and fog. However, cloud droplet size measurements are subject to various uncertainties. This work focuses on the evaluation of two key measurement uncertainties arising during cloud droplet size measurements with a conventional droplet size spectrometer (FM-100): first, we addressed the precision with which droplets can be sized with the FM-100 on the basis of Mie theory. We deduced erro...
Biedermann, Eric; Jauriqui, Leanne; Aldrin, John C.; Mayes, Alexander; Williams, Tom; Mazdiyasni, Siamack
2016-02-01
Resonant Ultrasound Spectroscopy (RUS) is a nondestructive evaluation (NDE) method which can be used for material characterization, defect detection, process control and life monitoring for critical components in gas turbine engines, aircraft and other systems. Accurate forward and inverse modeling for RUS requires a proper accounting of the propagation of uncertainty due to the model and measurement sources. A process for quantifying the propagation of uncertainty to RUS frequency results for models and measurements was developed. Epistemic and aleatory sources of uncertainty were identified for forward model parameters, forward model material property and geometry inputs, inverse model parameters, and physical RUS measurements. RUS model parametric studies were then conducted for simple geometric samples to determine the sensitivity of RUS frequencies and model inversion results to the various sources of uncertainty. The results of these parametric studies were used to calculate uncertainty bounds associated with each source. Uncertainty bounds were then compared to assess the relative impact of the various sources of uncertainty, and mitigations were identified. The elastic material property inputs for forward models, such as Young's Modulus, were found to be the most significant source of uncertainty in these studies. The end result of this work was the development of an uncertainty quantification process that can be adapted to a broad range of components and materials.
Uncertainty of power curve measurement with a two-beam nacelle-mounted lidar
DEFF Research Database (Denmark)
Wagner, Rozenn; Courtney, Michael Stephen; Friis Pedersen, Troels;
2015-01-01
already been demonstrated to be suitable for use in power performance measurements. To be considered as a professional tool, however, power curve measurements performed using these instruments require traceable calibrated measurements and the quantification of the wind speed measurement uncertainty. Here...... lies between 1 and 2% for the wind speed range between cut-in and rated wind speed. Finally, the lidar was mounted on the nacelle of a wind turbine in order to perform a power curve measurement. The wind speed was simultaneously measured with a mast-top mounted cup anemometer placed two rotor diameters...... upwind of the turbine. The wind speed uncertainty related to the lidar tilting was calculated based on the tilt angle uncertainty derived from the inclinometer calibration and the deviation of the measurement height from hub height. The resulting combined uncertainty in the power curve using the nacelle...
Measuring the performance of sensors that report uncertainty
Martin, A D; Parry, M
2014-01-01
We provide methods to validate and compare sensor outputs, or inference algorithms applied to sensor data, by adapting statistical scoring rules. The reported output should either be in the form of a prediction interval or of a parameter estimate with corresponding uncertainty. Using knowledge of the `true' parameter values, scoring rules provide a method of ranking different sensors or algorithms for accuracy and precision. As an example, we apply the scoring rules to the inferred masses of cattle from ground force data and draw conclusions on which rules are most meaningful and in which way.
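The ranking described in this abstract relies on proper scoring rules applied to reported prediction intervals. One standard choice (used here for illustration only, not necessarily the rule the authors adopt) is the interval score of Gneiting and Raftery (2007); the sensor readings below are invented, not the cattle ground-force data from the paper:

```python
import numpy as np

def interval_score(lower, upper, actual, alpha=0.05):
    """Interval score (Gneiting & Raftery 2007) for a central (1 - alpha)
    prediction interval: rewards narrow intervals, penalises misses in
    proportion to how far the truth falls outside."""
    lower, upper, actual = map(np.asarray, (lower, upper, actual))
    width = upper - lower
    below = (2.0 / alpha) * np.clip(lower - actual, 0.0, None)
    above = (2.0 / alpha) * np.clip(actual - upper, 0.0, None)
    return width + below + above

# Invented readings: two sensors report 95% intervals for a true mass of 500 kg.
s1 = interval_score(480, 520, 500)  # wide interval that covers the truth
s2 = interval_score(495, 505, 512)  # narrow interval that misses by 7
# s1 = 40.0, s2 = 290.0 -> the sharp-but-wrong sensor scores far worse
```

Lower mean scores indicate a sensor whose reported intervals are both sharp and well calibrated, which is exactly the accuracy/precision ranking the abstract describes.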
Directory of Open Access Journals (Sweden)
Yu Rao
2012-01-01
Full Text Available Liquid crystal thermography is an advanced nonintrusive measurement technique, which is capable of providing high-accuracy continuous temperature field measurements, especially for complex structured heat transfer surfaces. The first part of the paper presents a comprehensive introduction to the thermochromic liquid crystal material and the related liquid crystal thermography technique. Then, based on the authors' experience in using liquid crystal thermography for heat transfer measurement, the parameters affecting the measurement uncertainty of liquid crystal thermography are discussed in detail through an experimental study. The final part of the paper describes applications of the steady and transient liquid crystal thermography techniques in the study of turbulent flow heat transfer related to aeroengine turbine blade cooling.
Total error vs. measurement uncertainty: revolution or evolution?
Oosterhuis, Wytze P; Theodorsson, Elvar
2016-02-01
The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals, including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory that is critical of the results and intentions of the Milan 2014 conference. The "total error" theory originated by Jim Westgard and co-workers has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and perceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.
Realistic uncertainties on Hapke model parameters from photometric measurement
Schmidt, Frederic
2015-01-01
Hapke proposed a convenient and widely used analytical model to describe the spectro-photometry of granular materials. Using a compilation of the published data, Hapke (2012, Icarus, 221, 1079-1083) recently studied the relationship of b and c for natural examples and proposed the hockey stick relation (excluding b>0.5 and c>0.5). For the moment, there is no theoretical explanation for this relationship. One goal of this article is to study a possible bias due to the retrieval method. We expand here an innovative Bayesian inversion method in order to study in detail the uncertainties of retrieved parameters. On Emission Phase Function (EPF) data, we demonstrate that the uncertainties of the retrieved parameters follow the same hockey stick relation, suggesting that this relation is due to the fact that b and c are coupled parameters in the Hapke model rather than a natural phenomenon. Nevertheless, the data used in the Hapke (2012) compilation generally are full Bidirectional Reflectance Distribution Function (B...
On uncertainty of measurement
Institute of Scientific and Technical Information of China (English)
孙建文
2012-01-01
The paper introduces the concept of measurement uncertainty and, drawing on the relevant standards, sets out specific requirements for it. It explains how measurement uncertainty is determined, including identifying the sources of uncertainty, modelling the measurement process, and evaluating each standard uncertainty component in turn, so as to guide practice.
Uncertainty analysis of steady state incident heat flux measurements in hydrocarbon fuel fires.
Energy Technology Data Exchange (ETDEWEB)
Nakos, James Thomas
2005-12-01
The objective of this report is to develop uncertainty estimates for three heat flux measurement techniques used for the measurement of incident heat flux in a combined radiative and convective environment. This is related to the measurement of heat flux to objects placed inside hydrocarbon fuel (diesel, JP-8 jet fuel) fires, which is very difficult to make accurately (e.g., to less than 10%). Three methods are discussed: a Schmidt-Boelter heat flux gage; a calorimeter and inverse heat conduction method; and a thin plate and energy balance method. Steady state uncertainties were estimated for two types of fires (i.e., calm wind and high winds) at three times (early in the fire, late in the fire, and at an intermediate time). Results showed a large uncertainty for all three methods. Typical uncertainties for a Schmidt-Boelter gage ranged from ±23% for high wind fires to ±39% for low wind fires. For the calorimeter/inverse method the uncertainties were ±25% to ±40%. For the thin plate/energy balance method the uncertainties ranged from ±21% to ±42%. The 23-39% uncertainties for the Schmidt-Boelter gage are much larger than the quoted uncertainty for a radiative-only environment (i.e., ±3%). This large difference is due to the convective contribution and because the gage sensitivities to radiative and convective environments are not equal. All these values are larger than desired, which suggests the need for improvements in heat flux measurements in fires.
Measuring Research Data Uncertainty in the 2010 NRC Assessment of Geography Graduate Education
Shortridge, Ashton; Goldsberry, Kirk; Weessies, Kathleen
2011-01-01
This article characterizes and measures errors in the 2010 National Research Council (NRC) assessment of research-doctorate programs in geography. This article provides a conceptual model for data-based sources of uncertainty and reports on a quantitative assessment of NRC research data uncertainty for a particular geography doctoral program.…
International Nuclear Information System (INIS)
Under TO No. 007 between ORNL and VNIIEF on mastering the Nuclear Materials Identification System (NMIS) at VNIIEF, joint measurements were completed in July 2000 using NMIS equipment provided to VNIIEF by ORNL, together with VNIIEF-produced unclassified samples of fissile materials. This report presents results of preliminary processing of the experimental data to obtain absolute values of some attributes used in plutonium shell measurements: their mass and thickness. The ability to obtain absolute values of fissile material parameters from measurement data considerably widens the applicability of NMIS to tasks relevant to inspections of these materials.
International Nuclear Information System (INIS)
An attribute verification system (AVNG) with information barriers for mass and isotopics measurements has been designed and its fabrication is nearly completed. The AVNG is being built by scientists at the Russian Federal Nuclear Center-VNIIEF, with support of Los Alamos National Laboratory (LANL) and Lawrence Livermore National Laboratory (LLNL). Such a system could be used to verify the presence of several unclassified attributes of classified material with no classified information release. The system is comprised of a neutron multiplicity counter and a gamma-spectrometry system based on a high-purity germanium gamma detector (nominal relative efficiency 50% at 1332 keV) and the digital gamma-ray spectrometer DSPECPLUS. The neutron multiplicity counter is a three-ring counter with 164 3He tubes. The system was designed to measure prototype containers 491 mm in diameter and 503 mm high. This paper provides a brief history of the project and documents the progress of this effort with drawings and photographs.
Non-commitment Entropy: A Novel Modality for Uncertainty Measurement
Institute of Scientific and Technical Information of China (English)
2015-01-01
Three-way decision rules extend traditional two-way decisions. In real environments, a decision maker cannot easily choose between acceptance and rejection when information is uncertain or incomplete. In such cases, people tend to choose a three-way decision for uncertain, high-risk decisions, at extra but necessary cost. Meanwhile, several general uncertainty measures have been proposed by generalizing Shannon's entropy; the theory of information entropy makes uncertainty measures more accurate on the boundary of a three-way decision. In this paper, we propose several types of non-commitment entropy using the relation of the 'third' decision, non-commitment, and employ the proposed model to evaluate the significance of attributes for classification.
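The base measure this paper generalizes is Shannon's entropy. A minimal sketch (with invented probabilities, and not the paper's non-commitment entropy itself) shows why objects near p = 0.5 carry maximal uncertainty and are natural candidates for the boundary (non-commitment) region:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits -- the base uncertainty measure that
    non-commitment entropy generalises."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented conditional probabilities that an object belongs to a concept.
# Near p = 0.5 the entropy peaks, so a three-way rule defers commitment
# and routes such objects to the boundary (non-commitment) region.
clear_accept = shannon_entropy([0.95, 0.05])  # low uncertainty -> accept
lean_accept = shannon_entropy([0.60, 0.40])   # moderate uncertainty
boundary = shannon_entropy([0.50, 0.50])      # maximal: 1.0 bit -> defer
```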
Measurement and Segmentation of College Students' Noncognitive Attributes: A Targeted Review
Ann E. Person; Scott E. Baumgartner; Kristin Hallgren; Betsy Santos
2014-01-01
This report presents findings from a targeted document review and expert interviews conducted as part of the Student Segmentation Initiative, which was funded by the Bill & Melinda Gates Foundation's Postsecondary Success strategy. The review addresses three questions relevant to the initiative: (1) What instruments and measures are available to assess postsecondary students' noncognitive attributes? (2) To what extent are these instruments used to classify or segment student populations?...
Realistic uncertainties on Hapke model parameters from photometric measurement
Schmidt, Frédéric; Fernando, Jennifer
2015-11-01
The single particle phase function describes the manner in which an average element of a granular material diffuses the light in the angular space, usually with two parameters: the asymmetry parameter b describing the width of the scattering lobe and the backscattering fraction c describing the main direction of the scattering lobe. Hapke proposed a convenient and widely used analytical model to describe the spectro-photometry of granular materials. Using a compilation of the published data, Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) recently studied the relationship of b and c for natural examples and proposed the hockey stick relation (excluding b > 0.5 and c > 0.5). For the moment, there is no theoretical explanation for this relationship. One goal of this article is to study a possible bias due to the retrieval method. We expand here an innovative Bayesian inversion method in order to study in detail the uncertainties of retrieved parameters. On Emission Phase Function (EPF) data, we demonstrate that the uncertainties of the retrieved parameters follow the same hockey stick relation, suggesting that this relation is due to the fact that b and c are coupled parameters in the Hapke model rather than a natural phenomenon. Nevertheless, the data used in the Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) compilation generally are full Bidirectional Reflectance Distribution Function (BRDF) data that are shown not to be subject to this artifact. Moreover, the Bayesian method is a good tool to test if the sampling geometry is sufficient to constrain the parameters (single scattering albedo, surface roughness, b, c, opposition effect). We performed sensitivity tests by mimicking various surface scattering properties and various single image-like/disk resolved image, EPF-like and BRDF-like geometric sampling conditions. The second goal of this article is to estimate the favorable geometric conditions for an accurate estimation of photometric parameters in order to provide
Alternative risk measure for decision-making under uncertainty in water management
Institute of Scientific and Technical Information of China (English)
Yueping Xu; YeouKoung Tung; Jia Li; Shaofeng Niu
2009-01-01
Taking into account uncertainties in water management remains a challenge due to social, economic and environmental changes. Often, uncertainty creates difficulty in ranking or comparing multiple water management options, possibly leading to a wrong decision. In this paper, an alternative risk measure is proposed to facilitate the ranking or comparison of water management options under uncertainty by using the concepts of conditional expected loss and partial mean. This measure has the advantages of being more intuitive and general, and it relates to many other measures of risk in the literature. The application of the risk measure is demonstrated through a case study for the evaluation of flood mitigation projects. The results show that the new measure is applicable to a general decision-making process under uncertainty.
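The "conditional expected loss" idea can be sketched with invented Monte Carlo losses (not the paper's flood case study): two options with equal mean loss can still be ranked by their behaviour beyond a loss threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented Monte Carlo losses (e.g. flood damage) for two mitigation options
# under uncertainty; both distributions have the same mean loss of 10.
loss_a = rng.gamma(shape=2.0, scale=5.0, size=100_000)  # heavier tail
loss_b = rng.gamma(shape=4.0, scale=2.5, size=100_000)  # lighter tail

def conditional_expected_loss(losses, threshold):
    """Mean loss given that the loss exceeds the threshold: one concrete
    reading of the conditional expected loss / partial mean idea."""
    exceed = losses[losses > threshold]
    return float(exceed.mean()) if exceed.size else 0.0

cel_a = conditional_expected_loss(loss_a, threshold=20.0)
cel_b = conditional_expected_loss(loss_b, threshold=20.0)
# Equal mean losses, but cel_a > cel_b: option A is riskier beyond the threshold.
```

An expectation-only comparison would call the two options equivalent; the conditional measure separates them, which is the ranking difficulty the abstract addresses.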
Role and Significance of Uncertainty in HV Measurement of Porcelain Insulators - a Case Study
Choudhary, Rahul Raj; Bhardwaj, Pooja; Dayama, Ravindra
Improved safety margins in complex systems have attained prime importance in the modern scientific environment. The analysis and implementation of complex systems demand well-quantified measurement accuracy and capability. Careful measurement, with properly identified and quantified uncertainties, can lead to genuine discoveries that in turn contribute to social development. Unfortunately, most scientists and students are taught to ignore the possibility of definitional problems in measurement, which are often the source of great arguments. Recognizing this issue, ISO initiated the standardisation of methodologies, but its Guide to the Expression of Uncertainty in Measurement (GUM) has yet to be adopted seriously in tertiary education institutions for teaching the concept of uncertainty. This paper focuses on the concepts of measurement and uncertainty, and presents a case study on the calculation and quantification of measurement uncertainty for high-voltage electrical testing of ceramic insulators.
Energy Technology Data Exchange (ETDEWEB)
Bruschewski, Martin; Schiffer, Heinz-Peter [Technische Universitaet Darmstadt, Institute of Gas Turbines and Aerospace Propulsion, Darmstadt (Germany); Freudenhammer, Daniel [Technische Universitaet Darmstadt, Institute of Fluid Mechanics and Aerodynamics, Center of Smart Interfaces, Darmstadt (Germany); Buchenberg, Waltraud B. [University Medical Center Freiburg, Medical Physics, Department of Radiology, Freiburg (Germany); Grundmann, Sven [University of Rostock, Institute of Fluid Mechanics, Rostock (Germany)
2016-05-15
Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75% is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented. (orig.)
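The effect the authors report can be mimicked with synthetic data (the numbers below are illustrative and not from the study): estimating noise from an artifact-free background understates the measurement uncertainty whenever the noise inside the flow region is higher.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy velocity data: a flow region with elevated noise and a quiet,
# artifact-free background -- mimicking why a background-only noise
# estimate can understate the in-flow measurement uncertainty.
background = rng.normal(0.0, 0.5, size=10_000)   # background noise std 0.5
flow_region = rng.normal(3.0, 2.0, size=10_000)  # mean velocity 3, noise std 2.0

sigma_bg = float(background.std())    # conventional background-based estimate
sigma_flow = float(flow_region.std()) # estimate taken from the flow sample itself

# Fractional underestimate of the background-based approach (about 0.75 here,
# echoing the up-to -75% deviation scale the abstract mentions).
underestimate = 1.0 - sigma_bg / sigma_flow
```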
Utilization of Software Tools for Uncertainty Calculation in Measurement Science Education
International Nuclear Information System (INIS)
Despite its importance, uncertainty is often neglected by practitioners in system design, even in safety-critical applications. Problems arising from uncertainty may then be identified only late in the design process, leading to additional costs. Although numerous tools exist to support uncertainty calculation, their limited use in early design phases may be due to low awareness of their existence and insufficient training in their practical application. We present a teaching philosophy that addresses uncertainty from the very beginning of teaching measurement science, in particular with respect to the utilization of software tools. The developed teaching material is based on the GUM method and makes use of uncertainty toolboxes in the simulation environment. Based on examples in measurement science education, we discuss advantages and disadvantages of the proposed teaching philosophy and include feedback from students.
Computer-assisted uncertainty assessment of k0-NAA measurement results
Bučar, T.; Smodiš, B.
2008-10-01
In quantifying measurement uncertainty of measurement results obtained by the k0-based neutron activation analysis (k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates uncertainty of the final result—mass fraction of an element in the measured sample—taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable by allowing its incorporation into other applications (e.g., DLL and WWW server). The theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.
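The propagation machinery such a program implements can be sketched generically: the GUM law of propagation with numerically evaluated sensitivity coefficients. The model, inputs, and uncertainties below are a hypothetical stand-in, not the actual k0-NAA expression:

```python
import math

def combined_uncertainty(f, x, u, h=1e-6):
    """GUM law of propagation for uncorrelated inputs:
    u_c^2 = sum_i (df/dx_i)^2 * u_i^2, derivatives taken numerically
    by central differences."""
    uc2 = 0.0
    for i in range(len(x)):
        step = h * max(abs(x[i]), 1.0)
        xp, xm = list(x), list(x)
        xp[i] += step
        xm[i] -= step
        dfdx = (f(xp) - f(xm)) / (2.0 * step)
        uc2 += (dfdx * u[i]) ** 2
    return math.sqrt(uc2)

# Hypothetical stand-in model (NOT the k0-NAA formula): a result proportional
# to a count rate divided by two flux-like parameters.
model = lambda v: v[0] / (v[1] * v[2])
x = [1000.0, 50.0, 2.0]  # count rate and two flux-like inputs (invented)
u = [10.0, 1.0, 0.05]    # their standard uncertainties (invented)
uc = combined_uncertainty(model, x, u)
# uc is about 0.335 on a result of 10.0, i.e. ~3.4% relative uncertainty
```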
Ralphs, Matthew I.; Smith, Barton L.; Roberts, Nicholas A.
2016-11-01
High thermal conductivity thermal interface materials (TIMs) are needed to extend the life and performance of electronic circuits. A stepped bar apparatus system has been shown to work well for thermal resistance measurements with rigid materials, but most TIMs are elastic. This work studies the uncertainty of using a stepped bar apparatus to measure the thermal resistance, and a tensile/compression testing machine to estimate the compressed thickness, of polydimethylsiloxane for a measurement of the thermal conductivity, k_eff. An a priori, zeroth order analysis is used to estimate the random uncertainty from the instrumentation; a first order analysis is used to estimate the statistical variation in samples; and an a posteriori, Nth order analysis is used to provide an overall uncertainty on k_eff for this measurement method. Bias uncertainty in the thermocouples is found to be the largest single source of uncertainty. The a posteriori uncertainty of the proposed method is 6.5% relative uncertainty (68% confidence), but could be reduced through calibration and correlated biases in the temperature measurements.
The MapCHECK Measurement Uncertainty function and its effect on planar dose pass rates.
Bailey, Daniel W; Spaans, Jason D; Kumaraswamy, Lalith K; Podgorsak, Matthew B
2016-03-08
Our study aimed to quantify the effect of the Measurement Uncertainty function on planar dosimetry pass rates, as measured and analyzed with the Sun Nuclear Corporation MapCHECK 2 array and its associated software. This optional function is toggled in the program preferences of the software (though turned on by default upon installation), and automatically increases the dose difference tolerance defined by the user for each planar dose comparison. Dose planes from 109 static-gantry IMRT fields and 40 VMAT arcs, of varying modulation complexity, were measured at 5 cm water-equivalent depth in the MapCHECK 2 diode array, and respective calculated dose planes were exported from a commercial treatment planning system. Planar dose comparison pass rates were calculated within the Sun Nuclear Corporation analytic software using a number of calculation parameters, including Measurement Uncertainty on and off. By varying the percent difference (%Diff) criterion for similar analyses performed with Measurement Uncertainty turned off, an effective %Diff criterion was defined for each field/arc corresponding to the pass rate achieved with Measurement Uncertainty turned on. On average, the Measurement Uncertainty function increases the user-defined %Diff criterion by 0.8%-1.1% for 3%/3 mm analysis, depending on plan type and calculation technique (corresponding to an average change in pass rate of 1.0%-3.5%, and a maximum change of 8.7%). At the 2%/2 mm level, the Measurement Uncertainty function increases the user-defined %Diff criterion by 0.7%-1.2% on average, again depending on plan type and calculation technique (corresponding to an average change in pass rate of 3.5%-8.1%, and a maximum change of 14.2%). The largest increases in pass rate due to the Measurement Uncertainty function are generally seen with poorly matched planar dose comparisons, while the function has a notably smaller effect as pass rates approach 100%. The Measurement Uncertainty function, then, may
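The effect of widening the %Diff tolerance can be mimicked on synthetic data. The noise level and dose values below are invented, and the sketch uses a percent-difference-only criterion, ignoring the distance-to-agreement part of a full gamma-style analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic planar dose comparison: calculated doses per diode, and measured
# doses with 1.5% multiplicative scatter (both invented for illustration).
calc = rng.uniform(50.0, 200.0, size=1445)
meas = calc * (1.0 + rng.normal(0.0, 0.015, size=1445))

def pass_rate(meas, calc, pct_diff):
    """Fraction of points whose dose difference is within pct_diff percent
    of the calculated dose (percent-difference criterion only)."""
    return float(np.mean(np.abs(meas - calc) <= calc * pct_diff / 100.0))

base = pass_rate(meas, calc, 3.0)     # user-set 3% criterion
widened = pass_rate(meas, calc, 4.0)  # criterion widened by ~1%, mimicking
# the tolerance increase the Measurement Uncertainty toggle introduces
```

As in the study, the widened tolerance raises the pass rate most when the base comparison is imperfect, and has little headroom once pass rates approach 100%.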
Lyn, Jennifer A; Ramsey, Michael H; Damant, Andrew P; Wood, Roger
2007-12-01
Measurement uncertainty is a vital issue within analytical science. There are strong arguments that primary sampling should be considered the first and perhaps the most influential step in the measurement process. Increasingly, analytical laboratories are required to report measurement results to clients together with estimates of the uncertainty. Furthermore, these estimates can be used when pursuing regulation enforcement to decide whether a measured analyte concentration is above a threshold value. With its recognised importance in analytical measurement, the question arises of 'what is the most appropriate method to estimate the measurement uncertainty?'. Two broad methods for uncertainty estimation are identified, the modelling method and the empirical method. In modelling, the estimation of uncertainty involves the identification, quantification and summation (as variances) of each potential source of uncertainty. This approach has been applied to purely analytical systems, but becomes increasingly problematic in identifying all of such sources when it is applied to primary sampling. Applications of this methodology to sampling often utilise long-established theoretical models of sampling and adopt the assumption that a 'correct' sampling protocol will ensure a representative sample. The empirical approach to uncertainty estimation involves replicated measurements from either inter-organisational trials and/or internal method validation and quality control. A more simple method involves duplicating sampling and analysis, by one organisation, for a small proportion of the total number of samples. This has proven to be a suitable alternative to these often expensive and time-consuming trials, in routine surveillance and one-off surveys, especially where heterogeneity is the main source of uncertainty. A case study of aflatoxins in pistachio nuts is used to broadly demonstrate the strengths and weakness of the two methods of uncertainty estimation. The estimate
The grey relational approach for evaluating measurement uncertainty with poor information
Luo, Zai; Wang, Yanqing; Zhou, Weihu; Wang, Zhongyu
2015-12-01
The Guide to the Expression of Uncertainty in Measurement (GUM) is the master document for measurement uncertainty evaluation. However, the GUM may encounter problems and does not work well when the measurement data have poor information. In most cases, poor information means a small data sample and an unknown probability distribution. In these cases, the evaluation of measurement uncertainty has become a bottleneck in practical measurement. To solve this problem, a novel method called the grey relational approach (GRA), different from the statistical theory, is proposed in this paper. The GRA does not require a large sample size or probability distribution information of the measurement data. Mathematically, the GRA can be divided into three parts. Firstly, according to grey relational analysis, the grey relational coefficients between the ideal and the practical measurement output series are obtained. Secondly, the weighted coefficients and the measurement expectation function will be acquired based on the grey relational coefficients. Finally, the measurement uncertainty is evaluated based on grey modeling. In order to validate the performance of this method, simulation experiments were performed and the evaluation results show that the GRA can keep the average error around 5%. Besides, the GRA was also compared with the grey method, the Bessel method, and the Monte Carlo method by a real stress measurement. Both the simulation experiments and real measurement show that the GRA is appropriate and effective to evaluate the measurement uncertainty with poor information.
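The grey relational coefficients this method builds on follow Deng's classical formula; the reference and measured series below are invented for illustration:

```python
import numpy as np

def grey_relational_coefficients(reference, series, rho=0.5):
    """Deng's grey relational coefficient between a reference sequence and a
    comparison sequence; rho is the distinguishing coefficient (usually 0.5)."""
    reference = np.asarray(reference, dtype=float)
    series = np.asarray(series, dtype=float)
    delta = np.abs(reference - series)
    dmin, dmax = delta.min(), delta.max()
    return (dmin + rho * dmax) / (delta + rho * dmax)

# Invented toy data: ideal measurement output vs a noisy measured series.
ideal = [1.0, 2.0, 3.0, 4.0]
measured = [1.1, 1.9, 3.3, 4.0]
xi = grey_relational_coefficients(ideal, measured)
grade = float(xi.mean())  # grey relational grade: closer to 1 = closer agreement
```

Coefficients like these are the first of the three GRA stages the abstract lists; the weighting and grey-model stages build on them.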
Working with Error and Uncertainty to Increase Measurement Validity
Amrein-Beardsley, Audrey; Barnett, Joshua H.
2012-01-01
Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…
Truth Control of Duplicate Measurements under Uncertainty Conditions
Directory of Open Access Journals (Sweden)
V. A. Anischenko
2010-01-01
Full Text Available The paper considers the problem of truth control of duplicate measurements of technological variables under conditions of scarce data on the characteristics of the measuring facilities and controlled variables. The proposed control method improves the probability of detecting and identifying untrue duplicate measurements.
Gregory, Kent J; Pattison, John E; Bibbo, Giovanni
2015-03-01
The minimal dose covering 90% of the prostate volume, D90, is arguably the most important dosimetric parameter in low-dose-rate prostate seed brachytherapy. In this study an analysis of the measurement uncertainties in D90 from low-dose-rate prostate seed brachytherapy was conducted for two common treatment procedures with two different post-implant dosimetry methods. The analysis was undertaken in order to determine the magnitude of the D90 uncertainty, how the magnitude of the uncertainty varied when D90 was calculated using different dosimetry methods, and which factors were the major contributors to the uncertainty. The analysis considered the prostate as homogeneous and tissue-equivalent, made use of published data as well as original data collected specifically for this analysis, and was performed according to the Guide to the expression of uncertainty in measurement (GUM). It was found that when prostate imaging and seed implantation were conducted in two separate sessions using only CT images for post-implant analysis, the expanded uncertainty in D90 values was about 25% at the 95% confidence interval. When prostate imaging and seed implantation were conducted during a single session using CT and ultrasound images for post-implant analysis, the expanded uncertainty in D90 values was about 33%. Methods for reducing these uncertainty levels are discussed. It was found that variations in contouring the target tissue made the largest contribution to D90 uncertainty, while the uncertainty in seed source strength made only a small contribution. It is important that clinicians appreciate the overall magnitude of D90 uncertainty and understand the factors that affect it so that clinical decisions are soundly based, and resources are appropriately allocated.
Sklerov, Jason H; Couper, Fiona J
2011-09-01
An estimate was made of the measurement uncertainty for blood ethanol testing by headspace gas chromatography. While uncertainty often focuses on compliance to a single threshold level (0.08 g/100 mL), the existence of multiple thresholds, related to enhanced sentencing, subject age, or commercial vehicle licensure, necessitate the use of an estimate with validity across multiple specification levels. The uncertainty sources, in order of decreasing magnitude, were method reproducibility, linear calibration, recovery, calibrator preparation, reference material, and sample preparation. A large set of reproducibility data was evaluated (n = 15,433) in order to encompass measurement variability across multiple conditions, operators, instruments, concentrations and timeframes. The relative, combined standard uncertainty was calculated as ±2.7%, with an expanded uncertainty of ±8.2% (99.7% level of confidence, k = 3). Bias was separately evaluated through a recovery study using standard reference material from a national metrology institute. The uncertainty estimate was verified through the use of proficiency test (PT) results. Assigned values for PT results and their associated uncertainties were calculated as robust means (x*) and standard deviations (s*) of participant values. Performance scores demonstrated that the uncertainty estimate was appropriate across the full range of PT concentrations (0.010-0.370 g/100 mL). The use of PT data as an empirical estimate of uncertainty was not examined. Until providers of blood ethanol PT samples include details on how an assigned value is obtained along with its uncertainty and traceability, the use of PT data should be restricted to the role of verification of uncertainty estimates.
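The quadrature combination of independent relative uncertainty components described above can be sketched as follows; the component values are invented for illustration and are not the paper's figures:

```python
import math

def combined_relative_uncertainty(components):
    """Root-sum-of-squares combination of independent relative
    standard uncertainties (GUM, uncorrelated input quantities)."""
    return math.sqrt(sum(u ** 2 for u in components))

# Illustrative component values (not the paper's figures), in order of
# decreasing magnitude: reproducibility, calibration, recovery,
# calibrator preparation, reference material, sample preparation.
components = [0.021, 0.012, 0.008, 0.005, 0.004, 0.003]
u_c = combined_relative_uncertainty(components)
U = 3 * u_c  # expanded uncertainty, coverage factor k = 3 (~99.7 % confidence)
```

Because all components are expressed as relative uncertainties, the expanded value applies across the full concentration range rather than at a single threshold.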
Comparison of model predictions with measurements using the improved spent fuel attribute tester
International Nuclear Information System (INIS)
Design improvements for the International Atomic Energy Agency's Spent Fuel Attribute Tester, recommended on the basis of an optimization study, were incorporated into a new instrument fabricated under the Finnish Support Programme. The new instrument was tested at a spent fuel storage pool on September 8 and 9, 1993. The results of two of the measurements have been compared with calculations. In both cases the calculated and measured pulse-height spectra are in good agreement, and the 137Cs gamma peak signature from the target spent fuel element is present.
Xue, Zhenyu; Vlachos, Pavlos P
2014-01-01
In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise ratio (SNR) strength governs the resulting PIV cross-correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations. In addition, the notion of a valid measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct ...
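One common correlation SNR metric of the kind discussed, the primary-to-secondary peak ratio, could be computed from a correlation plane roughly as follows; the function and the synthetic plane are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def primary_peak_ratio(corr_plane, exclusion_radius=2):
    """Primary-to-secondary peak ratio: height of the tallest
    correlation peak divided by the tallest peak found outside a
    small exclusion zone around it."""
    c = np.asarray(corr_plane, dtype=float)
    i, j = np.unravel_index(np.argmax(c), c.shape)
    primary = c[i, j]
    masked = c.copy()
    masked[max(0, i - exclusion_radius):i + exclusion_radius + 1,
           max(0, j - exclusion_radius):j + exclusion_radius + 1] = -np.inf
    return primary / masked.max()

# Synthetic correlation plane: flat noise floor, one displacement
# peak and one weaker spurious peak.
plane = np.full((16, 16), 0.1)
plane[8, 8] = 1.0    # primary displacement peak
plane[2, 12] = 0.4   # strongest noise peak
ppr = primary_peak_ratio(plane)
```

A low ratio flags a correlation whose displacement estimate carries high uncertainty.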
Directory of Open Access Journals (Sweden)
Dieisson Pivoto
2016-04-01
Full Text Available ABSTRACT: The study aimed to (i) quantify the measurement uncertainty in the physical tests of rice and beans for a hypothetical defect, (ii) verify whether homogenization and sample reduction in the physical classification tests of rice and beans are effective in reducing the measurement uncertainty of the process, and (iii) determine whether increasing the size of the bean sample significantly increases accuracy and reduces measurement uncertainty. Hypothetical defects in rice and beans with different damage levels were simulated according to the testing methodology determined by the Normative Ruling for each product. Homogenization and sample reduction in the physical classification of rice and beans are not effective, transferring a high measurement uncertainty to the final test result. The sample size indicated by the Normative Ruling did not allow appropriate homogenization and should be increased.
International Nuclear Information System (INIS)
Although the detection techniques used for measuring classified materials are very similar to those used in unclassified measurements, the surrounding packaging is generally very different. If a classified item is to be measured, an information barrier is required to protect any classified data acquired. This information barrier must protect the classified information while giving the inspector confidence that the unclassified outputs accurately reflect the classified inputs. Both information barrier and authentication considerations must be taken into account during all phases of system design and fabrication. One example of such a measurement system is the attribute measurement system (termed the AVNG) designed for the Trilateral Initiative. We will discuss the integration of information barrier components into this system, as well as the effects of information barrier (including authentication) concerns on the implementation of the detector systems.
Validity of Willingness to Pay Measures under Preference Uncertainty.
Braun, Carola; Rehdanz, Katrin; Schmidt, Ulrich
2016-01-01
Recent studies in the marketing literature developed a new method for eliciting willingness to pay (WTP) with an open-ended elicitation format: the Range-WTP method. In contrast to the traditional approach of eliciting WTP as a single value (Point-WTP), Range-WTP explicitly allows for preference uncertainty in responses. The aim of this paper is to apply Range-WTP to the domain of contingent valuation and to test for its theoretical validity and robustness in comparison to the Point-WTP. Using data from two novel large-scale surveys on the perception of solar radiation management (SRM), a little-known technique for counteracting climate change, we compare the performance of both methods in the field. In addition to the theoretical validity (i.e. the degree to which WTP values are consistent with theoretical expectations), we analyse the test-retest reliability and stability of our results over time. Our evidence suggests that the Range-WTP method clearly outperforms the Point-WTP method. PMID:27096163
On the Uncertainties of Stellar Mass Estimates via Colour Measurements
Roediger, Joel C
2015-01-01
Mass-to-light versus colour relations (MLCRs), derived from stellar population synthesis models, are widely used to estimate galaxy stellar masses (M$_*$) yet a detailed investigation of their inherent biases and limitations is still lacking. We quantify several potential sources of uncertainty, using optical and near-infrared (NIR) photometry for a representative sample of nearby galaxies from the Virgo cluster. Our method for combining multi-band photometry with MLCRs yields robust stellar masses, while errors in M$_*$ decrease as more bands are simultaneously considered. The prior assumptions in one's stellar population modelling dominate the error budget, creating a colour-dependent bias of up to 0.6 dex if NIR fluxes are used (0.3 dex otherwise). This matches the systematic errors associated with the method of spectral energy distribution (SED) fitting, indicating that MLCRs do not suffer from much additional bias. Moreover, MLCRs and SED fitting yield similar degrees of random error ($\\sim$0.1-0.14 dex)...
High speed railway environment safety evaluation based on measurement attribute recognition model.
Hu, Qizhou; Gao, Ningbo; Zhang, Bing
2014-01-01
In order to rationally evaluate the operational safety level of high speed railways, an environmental safety evaluation index system for high speed railways should be established by analyzing the impact mechanisms of severe weather and natural hazards such as rain, thunder and lightning, earthquakes, wind, and snow. Attribute recognition is then used to determine the similarity between samples and their corresponding attribute classes in multidimensional space, based on a Mahalanobis distance measurement function, whose insensitivity to correlation and to differences in dimension and scale suits this purpose. On this basis, the environmental safety situation of China's high speed railway is evaluated with the suggested method. The results of the detailed analysis show that the evaluation basically matches the actual situation and could lay a scientific foundation for high speed railway operational safety.
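The Mahalanobis distance underlying the attribute recognition model can be sketched as follows; the function and sample values are illustrative only:

```python
import numpy as np

def mahalanobis_distance(x, mean, cov):
    """Mahalanobis distance of sample x from a class with the given
    mean vector and covariance matrix; unlike Euclidean distance it
    accounts for correlation and scale differences between indicators."""
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# With an identity covariance the measure reduces to Euclidean distance.
d = mahalanobis_distance([1.0, 2.0], [0.0, 0.0], np.eye(2))
```

In an attribute recognition model, a sample would be assigned to the safety class whose distance is smallest.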
The CSGU: a measure of controllability, stability, globality, and universality attributions.
Coffee, Pete; Rees, Tim
2008-10-01
This article reports initial evidence of construct validity for a four-factor measure of attributions assessing the dimensions of controllability, stability, globality, and universality (the CSGU). In Study 1, using confirmatory factor analysis, factors were confirmed across least successful and most successful conditions. In Study 2, following less successful performances, correlations supported hypothesized relationships between subscales of the CSGU and subscales of the CDSII (McAuley, Duncan, & Russell, 1992). In Study 3, following less successful performances, moderated hierarchical regression analyses demonstrated that individuals have higher subsequent self-efficacy when they perceive causes of performance as controllable, and/or specific, and/or universal. An interaction for controllability and stability demonstrated that if causes are perceived as likely to recur, it is important to perceive that causes are controllable. Researchers are encouraged to use the CSGU to examine main and interactive effects of controllability and generalizability attributions upon outcomes such as self-efficacy, emotions, and performance. PMID:18971514
Energy Technology Data Exchange (ETDEWEB)
Andres, T.H
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
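The Monte Carlo propagation of input uncertainties mentioned above can be sketched as follows; the stand-in model and the input distributions are invented for illustration:

```python
import math
import random
import statistics

def model(x, y):
    """Stand-in for a computer-code output; any deterministic
    function of the uncertain inputs would do here."""
    return x * math.exp(-y)

random.seed(0)
# Inputs characterised by (mean, standard uncertainty), assumed
# normal and independent for this illustration.
samples = [model(random.gauss(2.0, 0.1), random.gauss(0.5, 0.05))
           for _ in range(20000)]
mean_out = statistics.fmean(samples)
u_out = statistics.stdev(samples)  # random uncertainty of the output
```

Systematic (model-form) uncertainties would then be estimated separately and combined with `u_out` to obtain the combined uncertainty.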
Uncertainty of nitrate and sulphate measured by ion chromatography in wastewater samples
Tepuš, Brigita; Simonič, Marjana
2012-01-01
This paper presents an evaluation of measurement uncertainty regarding the results of anion (nitrate and sulphate) concentrations in wastewater. Anions were determined by ion chromatography (EN ISO 10304-2, 1996). The major sources of uncertainty regarding the measurement results were identified as contributions to linear least-square or weighted regression lines, precision, trueness, storage conditions, and sampling. Determination of anions in wastewater is very important for the purificatio...
UNCERTAINTY AND ITS IMPACT ON THE QUALITY OF MEASUREMENT
Adel Elahdi M. Yahya; Martin Halaj
2012-01-01
Current practice requires that laboratory measurements and calibrations be accredited by national or international bodies and comply with the requirements of the specification (ISO 17025) for the accreditation of competent laboratories. Those requirements include stating the uncertainty limits in the measurement certificate, which allows the customer to judge the quality and efficiency of the measurement process. In th...
Uncertainties Associated with Flux Measurements Due to Heterogeneous Contaminant Distributions
Mass flux and mass discharge measurements at contaminated sites have been applied to assist with remedial management, and can be divided into two broad categories: point-scale measurement techniques and pumping methods. Extrapolation across un-sampled space is necessary when usi...
Fan, Ya-Jing; Cao, Huai-Xin; Meng, Hui-Xian; Chen, Liang
2016-09-01
The uncertainty principle in quantum mechanics is a fundamental relation with different forms, including Heisenberg's uncertainty relation and Schrödinger's uncertainty relation. In this paper, we prove a Schrödinger-type uncertainty relation in terms of generalized metric adjusted skew information and correlation measure by using operator monotone functions, which reads $U_\rho^{(g,f)}(A)\,U_\rho^{(g,f)}(B) \ge \frac{f(0)^2 l}{k}\bigl|\mathrm{Corr}_\rho^{s(g,f)}(A,B)\bigr|^2$ for some operator monotone functions f and g, all n-dimensional observables A, B and a non-singular density matrix ρ. As applications, we derive some new uncertainty relations for Wigner-Yanase skew information and Wigner-Yanase-Dyson skew information.
Spank, Uwe; Schwärzel, Kai; Renner, Maik; Bernhofer, Christian
2013-04-01
Numerical water balance models are widely used in the ecological and hydrological sciences. However, their application involves specific problems and uncertainties. The reliability of model predictions depends on (i) the model concept, (ii) the parameters, (iii) the uncertainty of input data, and (iv) the uncertainty of reference data. How model concept (i) and parameters (ii) affect model performance is an often-treated problem. By contrast, the effects of (iii) and (iv) are seldom tackled, although they are of similar magnitude. It should be considered that uncertainties of input and reference data affect not only the prediction accuracy but also the parameter identification (calibration and validation). The uncertainty has two different sources: (a) actual measurement uncertainties and (b) limitations of representativeness as a consequence of the scale gap between meteorological measurement and hydrological modelling. A separate analysis of both aspects is often not possible, as most hydrological investigations operate at the catchment scale, where both effects interfere with each other. Our study is focused on site scale (neglected. At the site scale we take the micrometeorological perspective: the primary reference is the evapotranspiration measured via the eddy covariance technique instead of runoff. Because of the use of evapotranspiration as a reference, it is possible to limit the investigations to the upper parts of the soil that are influenced by root water uptake. Thus, the parameter uncertainty is also significantly reduced, as most parameters can be directly quantified. The analyses of effects due to input uncertainties are based on Monte Carlo simulations with perturbed input series. The Monte Carlo simulations were done for two water balance models of different complexity (HPTFs: black-box model; BROOK90: process-based complex model) and for different sets of parameterisation. Our results show that seemingly small uncertainties in daily measurements can lead to
Measuring Young’s modulus the easy way, and tracing the effects of measurement uncertainties
Nunn, John
2015-09-01
The speed of sound in a solid is determined by the density and elasticity of the material. Young’s modulus can therefore be calculated once the density and the speed of sound in the solid are measured. The density can be measured relatively easily, and the speed of sound through a rod can be measured very inexpensively by setting up a longitudinal standing wave and using a microphone to record its frequency. This is a simplified version of a technique called ‘impulse excitation’. It is a good educational technique for school pupils. This paper includes the description and the free provision of custom software to calculate the frequency spectrum of a recorded sound so that the resonant peaks can be readily identified. Discussion of the effect of measurement uncertainties is included to help the more thorough experimental student improve the accuracy of their method. The technique is sensitive enough to detect changes in the elasticity modulus with a temperature change of just a few degrees.
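Assuming the fundamental longitudinal mode of a free-free rod, so v = 2Lf and E = ρv², the calculation and a first-order uncertainty propagation might look like this; the numerical values are illustrative, not the paper's data:

```python
import math

def youngs_modulus(density, length, fundamental_freq):
    """E = rho * v^2, with v = 2 * L * f for the fundamental
    longitudinal standing wave of a free-free rod."""
    v = 2.0 * length * fundamental_freq
    return density * v ** 2

def relative_uncertainty_E(u_rho, u_L, u_f):
    """First-order GUM propagation for E = 4 * rho * L^2 * f^2:
    relative uncertainties of L and f enter with sensitivity 2."""
    return math.sqrt(u_rho ** 2 + (2 * u_L) ** 2 + (2 * u_f) ** 2)

# Illustrative values for an aluminium rod (not measured data):
E = youngs_modulus(density=2700.0, length=0.5, fundamental_freq=5060.0)
u_E = relative_uncertainty_E(u_rho=0.005, u_L=0.002, u_f=0.001)
```

Note that because length and frequency appear squared, their relative uncertainties contribute twice as strongly as that of the density.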
Institute of Scientific and Technical Information of China (English)
郭凯红; 李文立
2012-01-01
Previous studies show that the evidential reasoning algorithm is an effective and rational method for solving MADM (Multiple Attribute Decision Making) problems under uncertainty. However, the method requires that attribute weights be deterministic and that the evaluation grades assessing basic attributes and general attributes be consistent. These constraints often do not hold in actual decision-making problems, especially for basic qualitative attributes. Existing subjective and objective methods for determining basic attribute weights have shortcomings, and most methods simply assume the grades are the same when evaluating grades based on basic and general attributes; they are therefore not effective in assisting the decision-making process. In view of these weaknesses, this study proposes a method based on evidential reasoning for MADM under uncertainty, with the goal of extending the evidential reasoning algorithm to a more general decision environment. The first part determines basic attribute weights. We first briefly introduce the evidential reasoning algorithm, discussing two major issues related to its effective application for MADM under uncertainty: (1) how to determine basic attribute weights, and (2) how to implement the transformation of distributed assessments from basic attributes into general attributes. We calculate basic attribute weights using the information entropy of the decision matrix to solve the first problem. In the second part, we implement the equivalent transformation of distributed assessments from basic attributes into general attributes, assuming that the evaluation grades assessing basic attributes and general attributes are not the same. We first fuzzify the distributed assessments of basic attributes according to the data types of the basic attribute values, and then implement, based on fuzzy transformation theory, the unified form of general distributed
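The entropy-based weighting of basic attributes described above can be sketched as follows; the decision matrix is invented for illustration:

```python
import math

def entropy_weights(matrix):
    """Objective attribute weights from the information entropy of a
    decision matrix (rows: alternatives, columns: attributes).
    Attributes whose values vary more across alternatives carry more
    information and receive larger weights."""
    n = len(matrix)
    weights = []
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        entropy = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        weights.append(1.0 - entropy)  # degree of divergence
    s = sum(weights)
    return [w / s for w in weights]

# Second attribute is identical across alternatives, so it is
# uninformative and gets (essentially) zero weight.
w = entropy_weights([[0.9, 0.2],
                     [0.1, 0.2],
                     [0.5, 0.2]])
```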
DEFF Research Database (Denmark)
Hiller, Jochen; Genta, Gianfranco; Barbato, Giulio;
2014-01-01
Industrial applications of computed tomography (CT) for dimensional metrology on various components are fast increasing, owing to a number of favorable properties such as the capability of non-destructive internal measurements. Uncertainty evaluation is however more complex than in conventional measurement processes, e.g., with tactile systems, also due to factors related to systematic errors, mainly caused by specific CT image characteristics. In this paper we propose a simulation-based framework for measurement uncertainty evaluation in dimensional CT using the bootstrap method. In a case study the problem concerning measurement uncertainties was addressed with bootstrap and successfully applied to ball-bar CT measurements. Results obtained enabled extension to more complex shapes such as actual industrial components, as we show by tests on a hollow cylinder workpiece.
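A minimal sketch of bootstrap-based uncertainty evaluation for repeated measurements follows, assuming simple resampling of the mean; the paper's framework is simulation-based and considerably more elaborate, and the data values here are invented:

```python
import random
import statistics

def bootstrap_uncertainty(measurements, n_resamples=5000, seed=1):
    """Standard uncertainty of the mean estimated by bootstrap
    resampling, with no assumption about the error distribution."""
    rng = random.Random(seed)
    n = len(measurements)
    means = [statistics.fmean(rng.choices(measurements, k=n))
             for _ in range(n_resamples)]
    return statistics.fmean(means), statistics.stdev(means)

# Illustrative repeated diameter measurements in mm (not CT data):
diameters = [9.998, 10.002, 10.001, 9.997, 10.003, 10.000, 9.999, 10.002]
estimate, u = bootstrap_uncertainty(diameters)
```

The appeal in CT metrology is that the same resampling idea extends to quantities, such as fitted geometry parameters, whose analytical uncertainty propagation would be intractable.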
Arpaia, P; Lucariello, G; Spiezia, G
2007-01-01
At the European Centre for Nuclear Research (CERN), within the new Large Hadron Collider (LHC) project, measurements of magnetic flux with an uncertainty of 10 ppm at a few decades of Hz, sustained for several minutes, are required. To this end, a new Fast Digital Integrator (FDI) has been developed in cooperation with the University of Sannio, Italy [1]. This paper deals with the final design tuning for achieving the target uncertainty by means of experimental statistical parameter design.
Energy Technology Data Exchange (ETDEWEB)
Hauck, Danielle K [Los Alamos National Laboratory; Bracken, David S [Los Alamos National Laboratory; Mac Arthur, Duncan W [Los Alamos National Laboratory; Santi, Peter A [Los Alamos National Laboratory; Thron, Jonathan [Los Alamos National Laboratory
2010-01-01
The attribute measurement technique provides a method for determining whether or not an item containing special nuclear material (SNM) possesses attributes that fall within an agreed upon range of values. One potential attribute is whether the mass of an SNM item is larger than some threshold value that has been negotiated as part of a nonproliferation treaty. While the historical focus on measuring mass attributes has been on using neutron measurements, calorimetry measurements may be a viable alternative for measuring mass attributes for plutonium-bearing items. Traditionally, calorimetry measurements have provided a highly precise and accurate determination of the thermal power that is being generated by an item. In order to achieve this high level of precision and accuracy, the item must reach thermal equilibrium inside the calorimeter prior to determining the thermal power of the item. Because the approach to thermal equilibrium is exponential in nature, a large portion of the time spent approaching equilibrium is spent with the measurement being within ~10% of its final equilibrium value inside the calorimeter. Since a mass attribute measurement only needs to positively determine if the mass of a given SNM item is greater than a threshold value, performing a short calorimetry measurement to determine how the system is approaching thermal equilibrium may provide sufficient information to determine if an item has a larger mass than the agreed upon threshold. In previous research into a fast calorimetry attribute technique, a two-dimensional heat flow model of a calorimeter was used to investigate the possibility of determining a mass attribute for plutonium-bearing items using this technique. While the results of this study looked favorable for developing a fast calorimetry attribute technique, additional work was needed to determine the accuracy of the model used to make the calculations. In this paper, the results from the current work investigating
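If the approach to equilibrium is modelled as a single exponential, the equilibrium power can be extrapolated from three equally spaced early readings by an Aitken-type formula; this is an illustrative sketch under that assumption, not the two-dimensional heat-flow model used in the paper:

```python
def extrapolate_equilibrium(y1, y2, y3):
    """Aitken-style extrapolation of the final value of a signal that
    approaches equilibrium as a single exponential, from three equally
    spaced readings y1, y2, y3 taken before equilibrium is reached."""
    d1, d2 = y2 - y1, y3 - y2
    return y1 + d1 * d1 / (d1 - d2)

# Synthetic readings from P(t) = 10 * (1 - 0.5**t) at t = 1, 2, 3;
# the extrapolation recovers the equilibrium power without waiting.
p_eq = extrapolate_equilibrium(5.0, 7.5, 8.75)
```

For a mass attribute, the extrapolated power would then simply be compared against the negotiated threshold rather than quoted as a precise result.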
Strom, Daniel J; Joyce, Kevin E; MacLellan, Jay A; Watson, David J; Lynch, Timothy P; Antonio, Cheryl L; Birchall, Alan; Anderson, Kevin K; Zharov, Peter A
2012-04-01
In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results is negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty, a likelihood PDF for each individual's measurand is produced. Then using the same assumptions and all the data from the population of individuals, a prior PDF of measurands for the population is produced. The prior PDF is non-negative, and the average is equal to the average of the measurement results for the population. Using Bayes's theorem, posterior PDFs of each individual measurand are calculated. The uncertainty in these bayesian posterior PDFs appears to be all Berkson with no remaining classical component. The method is applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements of 128 people, 137Cs in vivo measurements of 5337 people and 239Pu urinalysis measurements of 3270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are non-zero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero relative to the sensitivity of the measurement technique. The method is shown to give results similar to
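The Bayesian update described above (a population-derived non-negative prior times a normal likelihood for each unbiased measurement) can be sketched on a discrete grid; the grid, prior, and measurement values are invented for illustration:

```python
import math

def posterior_grid(measurement, u_meas, prior_grid, prior_probs):
    """Posterior PDF of a measurand on a discrete non-negative grid:
    prior (from the population) times a normal likelihood centred on
    the unbiased measurement, renormalised (Bayes's theorem)."""
    like = [math.exp(-0.5 * ((measurement - t) / u_meas) ** 2)
            for t in prior_grid]
    unnorm = [l * p for l, p in zip(like, prior_probs)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# A slightly negative net measurement still yields a posterior
# concentrated at small, non-negative measurand values.
grid = [0.0, 0.5, 1.0, 1.5, 2.0]
prior = [0.4, 0.3, 0.15, 0.1, 0.05]
post = posterior_grid(measurement=-0.2, u_meas=0.5,
                      prior_grid=grid, prior_probs=prior)
```

This is the key feature of the approach: negative net results are reconciled with physically non-negative true values without truncation or censoring of the data.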
Improved sample size determination for attributes and variables sampling
International Nuclear Information System (INIS)
Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, the authors have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for the measurement uncertainty parameters, the simulation results support two conclusions: (1) the previously used conservative approximations can be expensive because they lead to larger sample sizes than needed, and (2) both the optimal verification strategy and the falsification strategy are highly dependent on the underlying uncertainty parameters of the measurement instruments
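A minimal Monte Carlo sketch of the attributes-sampling side of this calculation, under invented parameters (population size, number of falsified items, defect size, measurement noise, alarm threshold): detection probability is estimated by simulation with no Gaussian approximations, and the smallest adequate sample size is read off directly.

```python
import numpy as np

# N items, r of them falsified by 'defect'; the inspector measures n randomly
# chosen items with Gaussian uncertainty 'sigma' and alarms on any measurement
# exceeding 'threshold' (which trades detection against false alarms).
def detection_probability(n, N=100, r=5, defect=3.0, sigma=1.0,
                          threshold=2.0, trials=20000, seed=1):
    rng = np.random.default_rng(seed)
    # number of falsified items caught in each simulated sample
    k = rng.hypergeometric(r, N - r, n, size=trials)
    falsified = np.arange(n) < k[:, None]        # first k slots falsified (iid noise)
    meas = falsified * defect + rng.normal(0.0, sigma, size=(trials, n))
    return np.mean(np.any(meas > threshold, axis=1))

# Smallest sample size reaching 95% detection probability.
n_req = next(n for n in range(1, 101) if detection_probability(n) >= 0.95)
```

Unlike the closed-form approximations, the simulated curve can be evaluated for any combination of falsification strategy and instrument uncertainty.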
Energy Technology Data Exchange (ETDEWEB)
Budnikov, D; Bulatov, M; Jarikhine, I; Lebedev, B; Livke, A; Modenov, A; Morkin, A; Razinkov, S; Safronov, S; Tsaregorodtsev, D; Vlokh, A; Yakovleva, S; Elmont, T; Langner, D; MacArthur, D; Mayo, D; Smith, M; Luke, S J
2005-05-27
An attribute verification system (AVNG) with information barriers for mass and isotopics measurements has been designed and its fabrication is nearly completed. The AVNG is being built by scientists at the Russian Federal Nuclear Center-VNIIEF, with support of Los Alamos National Laboratory (LANL) and Lawrence Livermore National Laboratory (LLNL). Such a system could be used to verify the presence of several unclassified attributes of classified material with no classified information release. The system is comprised of a neutron multiplicity counter and a gamma-spectrometry system based on a high-purity germanium gamma detector (nominal relative efficiency at 1332 keV: 50%) and the digital gamma-ray spectrometer DSPEC Plus. The neutron multiplicity counter is a three-ring counter with 164 ³He tubes. The system was designed to measure prototype containers 491 mm in diameter and 503 mm high. This paper provides a brief history of the project and documents the progress of this effort with drawings and photographs.
Energy Technology Data Exchange (ETDEWEB)
Budnikov, D. (Dmitry); Bulatov, M. (Mikhail); Jarikhine, I. (Igor); Lebedev, B. (Boris); Livke, A. (Alexander); Modenov, A.; Morkin, A. (Anton); Razinkov, S. (Sergei); Tsaregorodtsev, D. (Dmitry); Vlokh, A. (Andrey); Yakovleva, S. (Svetlana); Elmont, T. H. (Timothy H.); Langner, D. C. (Diana C.); MacArthur, D. W. (Duncan W.); Mayo, D. R. (Douglas R.); Smith, M. K. (Morag K.); Luke, S. J. (S. John)
2005-01-01
An attribute verification system (AVNG) with information barriers for mass and isotopics measurements has been designed and its fabrication is nearly completed. The AVNG is being built by scientists at the Russian Federal Nuclear Center-VNIIEF, with support of Los Alamos National Laboratory (LANL) and Lawrence Livermore National Laboratory (LLNL). Such a system could be used to verify the presence of several unclassified attributes of classified material with no classified information release. The system is comprised of a neutron multiplicity counter and a gamma-spectrometry system based on a high-purity germanium gamma detector (nominal relative efficiency at 1332 keV: 50%) and the digital gamma-ray spectrometer DSPEC Plus. The neutron multiplicity counter is a three-ring counter with 164 ³He tubes. The system was designed to measure prototype containers 491 mm in diameter and 503 mm high. This paper provides a brief history of the project and documents the progress of this effort with drawings and photographs.
International Nuclear Information System (INIS)
In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results is negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a prior PDF of measurands for the population is produced; this prior PDF is non-negative, and its average is equal to the average of the measurement results for the population. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a posterior PDF is computed for each individual's measurand. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
Coherent Uncertainty Analysis of Aerosol Measurements from Multiple Satellite Sensors
Petrenko, M.; Ichoku, C.
2013-01-01
Aerosol retrievals from multiple spaceborne sensors, including MODIS (on Terra and Aqua), MISR, OMI, POLDER, CALIOP, and SeaWiFS (altogether, a total of 11 different aerosol products), were comparatively analyzed using data collocated with ground-based aerosol observations from the Aerosol Robotic Network (AERONET) stations within the Multi-sensor Aerosol Products Sampling System (MAPSS, http://giovanni.gsfc.nasa.gov/mapss/ and http://giovanni.gsfc.nasa.gov/aerostat/). The analysis was performed by comparing quality-screened satellite aerosol optical depth or thickness (AOD or AOT) retrievals during 2006-2010 to available collocated AERONET measurements globally, regionally, and seasonally, and deriving a number of statistical measures of accuracy. We used a robust statistical approach to detect and remove possible outliers in the collocated data that can bias the results of the analysis. Overall, the proportion of outliers in each of the quality-screened AOD products was within 12%. Squared correlation coefficient (R2) values of the satellite AOD retrievals relative to AERONET exceeded 0.6, with R2 for most of the products exceeding 0.7 over land and 0.8 over ocean. Root mean square error (RMSE) values for most of the AOD products were within 0.15 over land and 0.09 over ocean. We have been able to generate global maps showing regions where the different products present advantages over the others, as well as the relative performance of each product over different landcover types. It was observed that while MODIS, MISR, and SeaWiFS provide accurate retrievals over most of the landcover types, multi-angle capabilities make MISR the only sensor to retrieve reliable AOD over barren and snow/ice surfaces. Likewise, active sensing enables CALIOP to retrieve aerosol properties over bright-surface shrublands more accurately than the other sensors, while POLDER, which is the only one of the sensors capable of measuring polarized aerosols, outperforms other sensors in
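The outlier screening and accuracy statistics described above can be sketched roughly as below; the synthetic collocated data and the 3-MAD residual screen are assumptions for illustration, not MAPSS's actual algorithm.

```python
import numpy as np

# Hypothetical collocated AOD pairs: AERONET as ground truth, satellite
# retrievals with noise plus a few injected gross outliers.
rng = np.random.default_rng(7)
aeronet = rng.uniform(0.05, 0.8, size=500)
satellite = aeronet + rng.normal(0.0, 0.05, size=500)
satellite[:10] += 1.5                                  # injected outliers

# Robust screen: discard pairs whose residual deviates from the median
# residual by more than 3 scaled median absolute deviations.
resid = satellite - aeronet
med = np.median(resid)
mad = 1.4826 * np.median(np.abs(resid - med))
keep = np.abs(resid - med) <= 3 * mad

# Accuracy statistics on the screened data.
s, a = satellite[keep], aeronet[keep]
rmse = np.sqrt(np.mean((s - a) ** 2))
r2 = np.corrcoef(s, a)[0, 1] ** 2
```

Without the screen, the ten gross outliers would dominate the RMSE; with it, the statistics describe the bulk of the retrievals.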
Uncertainty principle for experimental measurements: Fast versus slow probes
Hansmann, P.; Ayral, T.; Tejeda, A.; Biermann, S.
2016-02-01
The result of a physical measurement depends on the time scale of the experimental probe. In solid-state systems, this simple quantum mechanical principle has far-reaching consequences: the interplay of several degrees of freedom close to charge, spin or orbital instabilities combined with the disparity of the time scales associated with their fluctuations can lead to seemingly contradictory experimental findings. A particularly striking example is provided by systems of adatoms adsorbed on semiconductor surfaces where different experiments - angle-resolved photoemission, scanning tunneling microscopy and core-level spectroscopy - suggest different ordering phenomena. Using the most recent first-principles many-body techniques, we resolve this puzzle by invoking the time scales of fluctuations when approaching the different instabilities. These findings suggest a re-interpretation of ordering phenomena and their fluctuations in a wide class of solid-state systems ranging from organic materials to high-temperature superconducting cuprates.
Institute of Scientific and Technical Information of China (English)
Ren Bo; Lu Zhenzhou; Zhou Changcong
2014-01-01
For structural systems with both epistemic and aleatory uncertainties, research on quantifying the contribution of the epistemic and aleatory uncertainties to the failure probability of the systems is conducted. Based on the method of separating epistemic and aleatory uncertainties in a variable, the core idea of the research is first to establish a novel deterministic transition model for auxiliary variables, distribution parameters, random variables, and failure probability, and then to propose an improved importance sampling (IS) method to solve the transition model. Furthermore, the distribution parameters and auxiliary variables are sampled simultaneously and independently; therefore, the inefficient sampling procedure with an 'inner loop' for epistemic uncertainty and an 'outer loop' for aleatory uncertainty in traditional methods is avoided. Since the proposed method combines the fast convergence of the proper estimates with an efficient search for failure samples in the regions of interest, it is more efficient than traditional methods for the variance-based failure probability sensitivity measures in the presence of epistemic and aleatory uncertainties. Two numerical examples and one engineering example are introduced to demonstrate the efficiency and precision of the proposed method for structural systems with both epistemic and aleatory uncertainties.
Directory of Open Access Journals (Sweden)
Ren Bo
2014-06-01
Full Text Available For structural systems with both epistemic and aleatory uncertainties, research on quantifying the contribution of the epistemic and aleatory uncertainties to the failure probability of the systems is conducted. Based on the method of separating epistemic and aleatory uncertainties in a variable, the core idea of the research is first to establish a novel deterministic transition model for auxiliary variables, distribution parameters, random variables, and failure probability, and then to propose an improved importance sampling (IS) method to solve the transition model. Furthermore, the distribution parameters and auxiliary variables are sampled simultaneously and independently; therefore, the inefficient sampling procedure with an 'inner loop' for epistemic uncertainty and an 'outer loop' for aleatory uncertainty in traditional methods is avoided. Since the proposed method combines the fast convergence of the proper estimates with an efficient search for failure samples in the regions of interest, it is more efficient than traditional methods for the variance-based failure probability sensitivity measures in the presence of epistemic and aleatory uncertainties. Two numerical examples and one engineering example are introduced to demonstrate the efficiency and precision of the proposed method for structural systems with both epistemic and aleatory uncertainties.
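The aleatory ingredient of such methods, importance sampling of a small failure probability, can be sketched as follows; the linear limit state g(x) = beta - x1 and the shifted-normal proposal are textbook assumptions, and the epistemic layer (sampling of distribution parameters) is omitted.

```python
import numpy as np

# Estimate P[g(X) < 0] for g(x) = beta - x1 with X ~ N(0, I) in 2D,
# so the exact answer is Phi(-beta).
rng = np.random.default_rng(3)
beta = 3.0
n = 50000

# Proposal: standard normal shifted to the design point (beta, 0),
# so failure samples are drawn frequently instead of almost never.
z = rng.normal(0.0, 1.0, size=(n, 2))
x = z + np.array([beta, 0.0])

g = beta - x[:, 0]
# likelihood ratio of target N(0, I) to proposal N((beta, 0), I)
w = np.exp(-0.5 * np.sum(x**2, axis=1)) / np.exp(-0.5 * np.sum(z**2, axis=1))
p_fail = np.mean((g < 0) * w)
```

Crude Monte Carlo would need millions of samples to see a probability near 1.35e-3 with comparable precision; the shifted proposal places roughly half the samples in the failure region and reweights them.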
A generalized measurement model to quantify health: the multi-attribute preference response model.
Directory of Open Access Journals (Sweden)
Paul F M Krabbe
Full Text Available After 40 years of deriving metric values for health status or health-related quality of life, the effective quantification of subjective health outcomes is still a challenge. Here, two of the best measurement tools, the discrete choice and the Rasch model, are combined to create a new model for deriving health values. First, existing techniques to value health states are briefly discussed followed by a reflection on the recent revival of interest in patients' experience with regard to their possible role in health measurement. Subsequently, three basic principles for valid health measurement are reviewed, namely unidimensionality, interval level, and invariance. In the main section, the basic operation of measurement is then discussed in the framework of probabilistic discrete choice analysis (random utility model and the psychometric Rasch model. It is then shown how combining the main features of these two models yields an integrated measurement model, called the multi-attribute preference response (MAPR model, which is introduced here. This new model transforms subjective individual rank data into a metric scale using responses from patients who have experienced certain health states. Its measurement mechanism largely prevents biases such as adaptation and coping. Several extensions of the MAPR model are presented. The MAPR model can be applied to a wide range of research problems. If extended with the self-selection of relevant health domains for the individual patient, this model will be more valid than existing valuation techniques.
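The two building blocks named above are both logistic in a difference of parameters, which is what makes them combinable; a minimal sketch of their generic textbook forms (not Krabbe's estimation procedure):

```python
import math

# Rasch model: probability that a person with ability theta endorses
# an item with difficulty b.
def rasch(theta, b):
    return math.exp(theta - b) / (1 + math.exp(theta - b))

# Random-utility (logit) discrete choice: probability that health state A,
# with value vA, is preferred to state B, with value vB.
def choice(vA, vB):
    return math.exp(vA) / (math.exp(vA) + math.exp(vB))
```

Dividing numerator and denominator of choice(vA, vB) by exp(vB) shows it equals rasch(vA, vB): both are logistic functions of a parameter difference, which is the structural overlap the MAPR model exploits.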
Multi-attribute integrated measurement of node importance in complex networks
Wang, Shibo; Zhao, Jinlou
2015-11-01
Measuring node importance in complex networks is important to research on network stability and robustness, and it can also help ensure the security of the whole network. Most researchers have used a single indicator to measure node importance, so the obtained results reflect only certain aspects of the network and lose information. Meanwhile, because network topologies differ, node importance should be described in a way that incorporates the topological character of the network. Most existing evaluation algorithms cannot completely reflect the circumstances of complex networks, so this paper takes into account degree centrality, relative closeness centrality, clustering coefficient, and topology potential, and proposes an integrated method to measure node importance. This method reflects nodes' internal and external attributes and eliminates the influence of network structure on node importance. Experiments on the karate network and the dolphin network show that the integrated topology-based measure has a smaller range of results than a single indicator and is more universal. Experiments also show that attacks on the North American power grid and the Internet guided by this method converge faster than those guided by other methods.
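A toy version of such an integrated score on a 5-node network, using equal weights over degree centrality, closeness centrality and clustering coefficient (the equal weighting and the omission of topology potential are simplifications, not the paper's formula):

```python
import numpy as np

# Adjacency matrix of a small undirected network (hypothetical).
A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
n = len(A)

deg = A.sum(axis=1) / (n - 1)                       # degree centrality

def closeness(i):
    # BFS shortest-path distances from node i
    dist = np.full(n, np.inf)
    dist[i] = 0
    frontier = [i]
    while frontier:
        nxt = []
        for u in frontier:
            for v in np.nonzero(A[u])[0]:
                if dist[v] == np.inf:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return (n - 1) / dist[np.arange(n) != i].sum()

clo = np.array([closeness(i) for i in range(n)])

def clustering(i):
    # fraction of realised links among node i's neighbours
    nb = np.nonzero(A[i])[0]
    k = len(nb)
    if k < 2:
        return 0.0
    links = A[np.ix_(nb, nb)].sum() / 2
    return 2 * links / (k * (k - 1))

clu = np.array([clustering(i) for i in range(n)])

score = (deg + clo + clu) / 3                        # equal weights (assumed)
most_important = int(np.argmax(score))
```

Note that the degree hub (node 0) is not the top-scoring node once the indicators are combined: nodes 1 and 2 sit in a closed triangle and overtake it. This is exactly the kind of difference between single-indicator and integrated rankings the paper discusses.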
Propagation of systematic uncertainty due to data reduction in transmission measurement of iron
International Nuclear Information System (INIS)
A technique of determinantal inequalities to estimate the bounds for statistical and systematic uncertainties in neutron cross-section measurements has been developed. In the measurement of a neutron cross section, correlation is manifested through the process of measurement and through many systematic components such as the geometrical factor, half-life, backscattering, etc. However, propagation of experimental uncertainties through the reduction of cross-section data is itself a complicated procedure and has been attracting attention in recent times. In this paper, the concept of determinantal inequalities is applied to a transmission measurement of the iron cross section, demonstrating how the systematic uncertainty dominates over the statistical one in such data reduction procedures and estimating their individual bounds. (author). 2 refs., 1 tab
Directory of Open Access Journals (Sweden)
Emanuela BELDEAN
2013-09-01
Full Text Available The measurement uncertainty is a quantitative indicator of the quality of results, meaning how well the result represents the value of the quantity being measured. It is a relatively new concept, and several guides and regulations were elaborated in order to facilitate laboratories to evaluate it. The uncertainty components are quantified based on data from repeated measurements, previous measurements, knowledge of the equipment and experience of the measurement. Uncertainty estimation involves a rigorous evaluation of possible sources of uncertainty and good knowledge of the measurement procedure. The case study presented in this paper revealed the basic steps in uncertainty calculation for formaldehyde emission from wood-based panels determined by the 1 m3 chamber method. Based on a very well defined Ishikawa diagram, an expanded uncertainty of 0.044 mg/m3 for k = 2, at 95% confidence level, was established.
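The final combination step of such a GUM evaluation reduces to root-sum-of-squares of the component standard uncertainties followed by expansion with k = 2; the component values below are invented placeholders, not the paper's budget:

```python
import math

# Independent standard uncertainty components (hypothetical values, mg/m3)
# such as might come off the branches of an Ishikawa diagram.
components = {
    "repeatability": 0.012,
    "calibration":   0.010,
    "temperature":   0.008,
    "air_exchange":  0.009,
}

# Combined standard uncertainty: quadrature sum of independent components.
u_c = math.sqrt(sum(u**2 for u in components.values()))

# Expanded uncertainty at ~95% confidence, coverage factor k = 2.
U = 2 * u_c
```

Because components add in quadrature, the largest one or two dominate: halving a minor component barely moves U, which is why the Ishikawa-style inventory matters mainly for finding the big contributors.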
Possolo, Antonio
2016-02-01
The current guidelines for the evaluation and expression of the uncertainty of NIST measurement results were originally published in 1993 as NIST Technical Note 1297, which was last revised in 1994. NIST is now updating its principles and procedures for uncertainty evaluation to address current and emerging needs in measurement science that Technical Note 1297 could not have anticipated or contemplated when it was first conceived. Although progressive and forward-looking, this update is also conservative because it does not require that current practices for uncertainty evaluation be abandoned or modified where they are fit for purpose and when there is no compelling reason to do otherwise. The updated guidelines are offered as a Simple Guide intended to be deployed under the NIST policy on Measurement Quality, and are accompanied by a rich collection of examples of application drawn from many different fields of measurement science.
Screening-level estimates of mass discharge uncertainty from point measurement methods
The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain w...
Energy Technology Data Exchange (ETDEWEB)
WILLS, C.E.
1999-12-06
This report examines the contributing factors to NDA measurement uncertainty at WRAP. The significance of each factor for the total measurement uncertainty (TMU) is analyzed, and a final method is given for determining the TMU for NDA measurements at WRAP. As more data become available and WRAP gains operational experience, this report will be reviewed semi-annually and updated as necessary.
Peltoniemi, J.I.; Hakala, T.; Suomalainen, J.M.; Honkavaara, E.; Markelin, L.; Gritsevich, M.; Eskelinen, J.; Jaanson, P.; Ikonen, E.
2014-01-01
The measurement uncertainty and traceability of the Finnish Geodetic Institute's field gonio-spectro-polarimeter FIGIFIGO have been assessed. First, the reference standard (Spectralon sample) was measured at the National Standard Laboratory of MIKES-Aalto. This standard was transferred to FGI's fie
Low uncertainty measurements of bidirectional reflectance factor on the NPOESS/VIIRS solar diffuser
Lessel, Kristen; McClain, Stephen
2007-09-01
An illuminated Solar Diffuser is the calibration source for the VIS/NIR bands on the NPOESS/VIIRS sensor. We completed a set of BRF measurements to fully characterize the distribution of scattered light from the solar diffuser. NPOESS/VIIRS has an overall VIS/NIR radiometric calibration uncertainty requirement of 2% (1 sigma), of which 1.32% was allocated to the characterization of the BRF. In order to meet this requirement, we modified the existing goniometer and measurement procedure used on MODIS. Modifications include sample yoke redesign, periodic measurements of the lamp polarization coupled with stability measurements, modifications to source optics, and stray light reduction. We measured BRF in 6 spectral wavebands for 9 out-of-plane illumination angles and 2 view angles. We achieved NIST traceable measurements with an uncertainty ranging from 1.09% to 1.32%. Our measurements of a smaller Spectralon sample match NIST measurements of the same sample to better than 0.5%. These requirements are nominally the same as achieved on MODIS. As a result of instrument upgrades, we currently meet this overall uncertainty while having included additional uncertainty terms.
A super-resolution approach for uncertainty estimation of PIV measurements
Sciacchitano, A.; Wieneke, B.; Scarano, F.
2012-01-01
A super-resolution approach is proposed for the a posteriori uncertainty estimation of PIV measurements. The measured velocity field is employed to determine the displacement of individual particle images. A disparity set is built from the residual distance between paired particle images of successi
Energy Technology Data Exchange (ETDEWEB)
Bamberger, Judith A.; Piepel, Gregory F.; Enderlin, Carl W.; Amidan, Brett G.; Heredia-Langner, Alejandro
2015-09-10
Understanding how uncertainty manifests itself in complex experiments is important for developing the testing protocol and interpreting the experimental results. This paper describes experimental and measurement uncertainties, and how they can depend on the order of performing experimental tests. Experiments with pulse-jet mixers in tanks at three scales were conducted to characterize the performance of transient-developing periodic flows in Newtonian slurries. Other test parameters included the simulant, solids concentration, and nozzle exit velocity. Critical suspension velocity and cloud height were the metrics used to characterize Newtonian slurry flow associated with mobilization and mixing. During testing, near-replicate and near-repeat tests were conducted. The experimental results were used to quantify the combined experimental and measurement uncertainties using standard deviations and percent relative standard deviations (%RSD). The uncertainties in critical suspension velocity and cloud height tend to increase with the values of these responses. Hence, the %RSD values are the more appropriate summary measure of near-replicate testing and measurement uncertainty.
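The summary statistics named above reduce to a few lines; the replicate values are hypothetical stand-ins, not the report's data:

```python
import numpy as np

# Near-replicate test results (invented values) for the two response metrics.
replicates = {
    "critical_suspension_velocity": np.array([4.8, 5.1, 5.0, 4.9]),      # m/s
    "cloud_height":                 np.array([0.62, 0.58, 0.65, 0.60]),  # m
}

summary = {}
for name, vals in replicates.items():
    sd = vals.std(ddof=1)            # sample standard deviation of replicates
    rsd = 100 * sd / vals.mean()     # percent relative standard deviation (%RSD)
    summary[name] = (sd, rsd)
```

Because the absolute uncertainties grow with the response values, the scale-free %RSD allows the two metrics (and the three tank scales) to be compared on the same footing.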
Michael F. Frimpon
2012-01-01
The selection of a school leader is a multi-attribute problem that needs to be addressed taking into consideration the peculiar needs of an institution. This paper is intended to specify the critical success factors (CSFs) of college leaders as perceived by students. A survey comprising the 37 attributes of The Leaders Attributes Inventory (LAI) of Moss was given to the students in a local university to determine their best 10. The 10 selected attributes were mapped onto the Leadership Effectiven...
Directory of Open Access Journals (Sweden)
Adamczak Stanisław
2014-08-01
Full Text Available The aim of this study was to estimate the measurement uncertainty for a material produced by additive manufacturing. The material investigated was FullCure 720 photocured resin, which was applied to fabricate tensile specimens with a Connex 350 3D printer based on PolyJet technology. The tensile strength of the specimens established through static tensile testing was used to determine the measurement uncertainty. There is a need for extensive research into the performance of model materials obtained via 3D printing, as they have not been studied as extensively as metal alloys or plastics, the most common structural materials. In this analysis, the measurement uncertainty was estimated using a larger number of samples than usual, i.e., thirty instead of the typical ten. The results can be very useful to engineers who design models and finished products using this material. The investigations also show how wide the scatter of results is.
Uncertainty of pin height measurement for the determination of wear in pin-on-plate test
DEFF Research Database (Denmark)
Drago, Nicola; De Chiffre, Leonardo; Poulios, Konstantinos
2014-01-01
The paper concerns measurement of pin height for the determination of wear in a pin-on-plate (POP) or pin-on-disc (POD) test, where a pin is mounted on a holder that can be fixed on the test rig and removed for measurements. The amount of wear is assessed as the difference of pin height before and after the test, using the distance between the holder plane and the pin friction plane as the measurand. A series of measurements were performed in connection with POP testing of different friction material pins mounted on an aluminium holder. Pin height measurements were carried out on a coordinate measuring machine (CMM), achieving an expanded measurement uncertainty (k = 2) better than 1 µm. A simple dedicated fixture adaptable to a workshop environment was developed and its metrological capability investigated, estimating an average uncertainty of measurement in the order of 5 µm (k = 2).
Uncertainties of size measurements in electron microscopy characterization of nanomaterials in foods
DEFF Research Database (Denmark)
Dudkiewicz, Agnieszka; Boxall, Alistair B. A.; Chaudhry, Qasim;
2015-01-01
Electron microscopy is a recognized standard tool for nanomaterial characterization, and is recommended by the European Food Safety Authority for the size measurement of nanomaterials in food. Despite this, little data have been published assessing the reliability of the method, especially for size measurement of nanomaterials characterized by a broad size distribution and/or added to food matrices. This study is a thorough investigation of the measurement uncertainty when applying electron microscopy for size measurement of engineered nanomaterials in foods. Our results show that the number of measured particles was only a minor source of measurement uncertainty for nanomaterials in food, compared to the combined influence of sampling, sample preparation prior to imaging and the image analysis. The main conclusion is that to improve the measurement reliability, care should be taken to consider...
pyMCZ: Oxygen abundances calculations and uncertainties from strong-line flux measurements
Bianco, Federica B.; Modjaz, Maryam; Oh, Seung Man; Fierroz, David; Liu, Yuqian; Kewley, Lisa; Graur, Or
2015-05-01
pyMCZ calculates metallicity according to a number of strong-line metallicity diagnostics from spectroscopic line measurements and obtains uncertainties from the line flux errors in a Monte Carlo framework. Given line flux measurements and their uncertainties, pyMCZ produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 68% confidence regions. The code can output the full MC distributions and their kernel density estimates.
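The Monte Carlo scheme can be sketched in a few lines in the spirit of pyMCZ (this is not its code); the fluxes and their errors are invented, and the single diagnostic used is the Pettini & Pagel (2004) N2 linear calibration, 12+log(O/H) = 8.90 + 0.57 N2:

```python
import numpy as np

# Sample each line flux from a normal with its reported error, push every
# draw through the diagnostic, and read off median and 68% interval.
rng = np.random.default_rng(11)
n = 100000
nii = rng.normal(1.20, 0.06, n)       # [N II] 6584 flux +/- error (made up)
halpha = rng.normal(4.00, 0.12, n)    # H-alpha flux +/- error (made up)

n2 = np.log10(nii / halpha)
oh = 8.90 + 0.57 * n2                 # Pettini & Pagel (2004) N2 calibration

median = np.median(oh)
lo, hi = np.percentile(oh, [16, 84])  # 68% confidence region
```

The same draws can be pushed through every calibration at once, which is how the full set of synthetic abundance distributions is obtained without any analytic error propagation.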
Modelling and Measurement Uncertainty Estimation for Integrated AFM-CMM Instrument
DEFF Research Database (Denmark)
Hansen, Hans Nørgaard; Bariani, Paolo; De Chiffre, Leonardo
2005-01-01
This paper describes modelling of an integrated AFM-CMM instrument, its calibration, and estimation of measurement uncertainty. Positioning errors were seen to limit the instrument performance. Software for off-line stitching of single AFM scans was developed and verified, which allows compensation of such errors. A geometrical model of the instrument was produced, describing the interaction between AFM and CMM systematic errors. The model parameters were quantified through calibration, and the model used for establishing an optimised measurement procedure for surface mapping. A maximum uncertainty of 0.8% was achieved for the case of surface mapping of 1.2 x 1.2 mm2 consisting of 49 single AFM scanned areas.
Uncertainty analysis of signal deconvolution using a measured instrument response function
Hartouni, E. P.; Beeman, B.; Caggiano, J. A.; Cerjan, C.; Eckart, M. J.; Grim, G. P.; Hatarik, R.; Moore, A. S.; Munro, D. H.; Phillips, T.; Sayre, D. B.
2016-11-01
A common analysis procedure maximizes the ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). In the case investigated here, the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers, the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to determine the uncertainty estimate of the physical model's parameters. We apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.
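A toy version of fitting data through a measured IRF by minimizing a Gaussian negative ln-likelihood; all shapes, widths and the noise level are invented, and the IRF is treated as exact (the paper's point is precisely to relax that assumption):

```python
import numpy as np

t = np.arange(200.0)

def peak(amp, center, width):
    # Gaussian peak used for both the physical signal and the IRF kernel
    return amp * np.exp(-0.5 * ((t - center) / width) ** 2)

irf = peak(1.0, 100.0, 4.0)
irf /= irf.sum()                                 # measured IRF, normalised

rng = np.random.default_rng(5)
sigma = 0.2                                      # per-sample noise level
truth = np.convolve(peak(10.0, 80.0, 6.0), irf, mode="same")
data = truth + rng.normal(0.0, sigma, t.size)

def neg_ln_like(amp):
    # Gaussian negative ln-likelihood of the IRF-convolved model
    model = np.convolve(peak(amp, 80.0, 6.0), irf, mode="same")
    return 0.5 * np.sum(((data - model) / sigma) ** 2)

amps = np.linspace(5.0, 15.0, 201)
best = amps[np.argmin([neg_ln_like(a) for a in amps])]
```

In the Bayesian treatment the IRF itself would carry a prior reflecting its measurement precision, broadening the posterior on the amplitude relative to this fixed-IRF fit.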
Comparison of ISO-GUM and Monte Carlo Method for Evaluation of Measurement Uncertainty
International Nuclear Information System (INIS)
To supplement the ISO-GUM method for the evaluation of measurement uncertainty, a simulation program using the Monte Carlo method (MCM) was developed, and the MCM and GUM methods were compared. The results are as follows: (1) Even under a non-normal probability distribution of the measurement, MCM provides an accurate coverage interval; (2) Even if a probability distribution that emerged from combining a few non-normal distributions looks normal, there are cases in which the actual distribution is not normal and the non-normality can be determined by the probability distribution of the combined variance; and (3) If type-A standard uncertainties are involved in the evaluation of measurement uncertainty, GUM generally offers an undervalued coverage interval. However, this problem can be solved by the Bayesian evaluation of type-A standard uncertainty. In this case, the effective degree of freedom for the combined variance is not required in the evaluation of expanded uncertainty, and the appropriate coverage factor for the 95% level of confidence was determined to be 1.96
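The two evaluations being compared can be sketched on a toy product model Y = X1*X2 with invented values; GUM propagates first-order sensitivities, while MCM simply samples the inputs:

```python
import numpy as np

# Input estimates and standard uncertainties (invented).
x1, u1 = 10.0, 0.5
x2, u2 = 5.0, 0.4

# GUM law of propagation for Y = X1*X2 (first-order, inputs independent):
# u(y)^2 = (dY/dX1 * u1)^2 + (dY/dX2 * u2)^2
u_gum = np.sqrt((x2 * u1) ** 2 + (x1 * u2) ** 2)

# MCM: sample the inputs, evaluate the model, read off the spread directly.
rng = np.random.default_rng(2)
y = rng.normal(x1, u1, 200000) * rng.normal(x2, u2, 200000)
u_mcm = y.std()
lo, hi = np.percentile(y, [2.5, 97.5])   # 95% coverage interval from MCM
```

For this mildly nonlinear model the two agree closely; the MCM advantage appears when the output distribution departs from normality and the coverage interval can no longer be obtained from u(y) and a fixed coverage factor.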
Calculation of the detection limit in radiation measurements with systematic uncertainties
Energy Technology Data Exchange (ETDEWEB)
Kirkpatrick, J.M., E-mail: john.kirkpatrick@canberra.com; Russ, W.; Venkataraman, R.; Young, B.M.
2015-06-01
The detection limit (L_D) or Minimum Detectable Activity (MDA) is an a priori evaluation of assay sensitivity intended to quantify the suitability of an instrument or measurement arrangement for the needs of a given application. Traditional approaches as pioneered by Currie rely on Gaussian approximations to yield simple, closed-form solutions, and neglect the effects of systematic uncertainties in the instrument calibration. These approximations are applicable over a wide range of applications, but are of limited use in low-count applications, when high confidence values are required, or when systematic uncertainties are significant. One proposed modification to the Currie formulation attempts to account for systematic uncertainties within a Gaussian framework. We have previously shown that this approach results in an approximation formula that works best only for small values of the relative systematic uncertainty, for which the modification of Currie's method is the least necessary, and that it significantly overestimates the detection limit or gives infinite or otherwise non-physical results for larger systematic uncertainties where such a correction would be the most useful. We have developed an alternative approach for calculating detection limits based on realistic statistical modeling of the counting distributions which accurately represents statistical and systematic uncertainties. Instead of a closed form solution, numerical and iterative methods are used to evaluate the result. Accurate detection limits can be obtained by this method for the general case.
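For reference, Currie's closed-form Gaussian expressions for the well-known paired-blank case (alpha = beta = 0.05), which neglect systematic uncertainty, the baseline the paper improves on:

```python
import math

# Currie's classic counts-domain formulas, paired blank, alpha = beta = 0.05:
#   critical level   L_C = 2.33 * sqrt(B)
#   detection limit  L_D = 2.71 + 4.65 * sqrt(B)
def currie(background_counts):
    lc = 2.33 * math.sqrt(background_counts)
    ld = 2.71 + 4.65 * math.sqrt(background_counts)
    return lc, ld

lc, ld = currie(100.0)   # e.g. 100 background counts
```

As the background goes to zero, L_D approaches 2.71 counts, already hinting at the low-count regime where the Gaussian closed form, and even more so its systematic-uncertainty modification, breaks down.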
A reference material for establishing uncertainties in full-field displacement measurements
International Nuclear Information System (INIS)
A simple reference material for establishing the minimum measurement uncertainty of optical systems for measuring 3D surface displacement fields in deforming objects is described and its use demonstrated by employing 3D digital image correlation as an exemplar technique. The reference material consists of a stepped bar, whose dimensions can be scaled to suit the application, and that can be clamped rigidly at its thick end to create an idealized cantilever. The cantilever was excited at resonance to generate out-of-plane displacements and, in a separate experiment, loaded statically in-plane to provide in-plane displacement fields. The displacements were measured using 3D digital image correlation and compared to the predicted displacement fields derived from tip deflections obtained using a calibrated transducer that provided traceability to the national standard for length. The minimum measurement uncertainties were evaluated by comparing the measured and predicted displacement fields, taking account of the uncertainties in the input parameters for the predictions. It was found that the minimum measurement uncertainties were less than 3% for the Cartesian components of displacement present during static in-plane bending and less than 3 µm for out-of-plane displacements during dynamic loading. It was concluded that this reference material was more straightforward to use, more versatile and yielded comparable results relative to an earlier design. (paper)
International Nuclear Information System (INIS)
This paper summarizes how an electromagnetic coil (EM coil) measurement technique can be used to discriminate between plutonium metal, plutonium oxide, and mixtures of the two inside sealed storage containers. Measurement results are from a variety of metals and aluminium oxide in two different container types, the carbon steel AL-R8 and the stainless steel AT-400R. Within these container types two scenarios have been explored: (1) the same configuration made from different metals, to demonstrate material property effects; (2) the same metal configured differently, to demonstrate how mass distribution affects the EM signature. This non-radiation measurement method offers verification of the 'presence of metal/absence of oxide' attribute in less than a minute. In January 2001, researchers at Pacific Northwest Laboratory showed that this method could discriminate between aluminium and aluminium oxide placed inside an AT-400R storage container (a total wall thickness of over 2.5 cm). Subsequent experimental and theoretical investigations into adapting the EM coil technique for arms control applications suggest a similar response for plutonium and plutonium oxide. This conclusion is consistent with the fact that all metals are electrically conductive while most oxides are electrical insulators (non-conductors). (author)
Directory of Open Access Journals (Sweden)
Marinel Popescu
2014-09-01
Full Text Available The development of digital signal processors and their implementation in measurement technology have led to the manufacture of power analyzers used as multifunction meters in industry, automation, testing and laboratory work, and in the monitoring and control of processes. The parameters of a three-phase system can be determined if the phase currents, the phase voltages and the phase differences between them are known. A power analyzer has six inputs for current and voltage measuring signals. The paper presents a method for determining the errors and uncertainties of electrical quantity measurements made with a power analyzer associated with external transducers. The best estimate of the measured quantity and the uncertainty of measurement are used to report the result of the measurement process.
Retzbach, Joachim; Otto, Lukas; Maier, Michaela
2016-08-01
Many scholars have argued for the need to communicate openly not only scientific successes to the public but also limitations, such as the tentativeness of research findings, in order to enhance public trust and engagement. Yet, it has not been quantitatively assessed how the perception of scientific uncertainties relates to engagement with science on an individual level. In this article, we report the development and testing of a new questionnaire in English and German measuring the perceived uncertainty of scientific evidence. Results indicate that the scale is reliable and valid in both language versions and that its two subscales are differentially related to measures of engagement: Science-friendly attitudes were positively related only to 'subjectively' perceived uncertainty, whereas interest in science as well as behavioural engagement actions and intentions were largely uncorrelated. We conclude that perceiving scientific knowledge to be uncertain is only weakly, but positively related to engagement with science. PMID:25814513
International Nuclear Information System (INIS)
Environmental radioactivity measurements are mainly affected by counting uncertainties. In this report the uncertainties associated with certain functions related to activity concentration calculations are determined. Some practical exercises are presented for calculating the uncertainties associated with: (a) the chemical recovery of a radiochemical separation when tracers are employed (e.g. Pu and Am purification from a sediment sample); (b) the indirect determination of a parent radionuclide through one of its daughters (e.g. ²¹⁰Pb quantification through the activity build-up of its daughter ²¹⁰Po); (c) the time elapsed since the last separation of one of the components of a decay chain (e.g. the date of the last Am purification in a nuclear weapon, from ²⁴¹Am and ²⁴¹Pu measurements). The calculations in examples (b) and (c) are based on the Bateman equations, which govern radioactive equilibria. Although the exercises presented here use particular radionuclides, they can be applied as generic procedures for other alpha-emitting radioelements
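The parent-daughter in-growth underlying example (b) follows directly from the Bateman equation for a two-member chain. A sketch with assumed half-life values (check a current nuclear data evaluation before reuse):

```python
import numpy as np

# Half-lives (assumed nuclear data, for illustration only)
T_PB210 = 22.3 * 365.25      # 210Pb, days
T_PO210 = 138.4              # 210Po, days

lam1 = np.log(2) / T_PB210   # 210Pb decay constant
lam2 = np.log(2) / T_PO210   # 210Po decay constant

def po210_activity(a_pb0, t_days):
    """Bateman equation: 210Po activity grown in from 210Pb after t days,
    assuming the Po was completely removed at t = 0."""
    return a_pb0 * lam2 / (lam2 - lam1) * (np.exp(-lam1 * t_days)
                                           - np.exp(-lam2 * t_days))

a_pb0 = 50.0                 # mBq of 210Pb at separation, illustrative
for t in (138.4, 365.25, 2 * 365.25):
    print(f"t = {t:6.1f} d: A(210Po) = {po210_activity(a_pb0, t):.1f} mBq")
```

Inverting this relation, a measured daughter activity at a known time yields the parent activity, which is the basis of the indirect determination described above.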
Helder, Dennis; Thome, Kurtis John; Aaron, Dave; Leigh, Larry; Czapla-Myers, Jeff; Leisso, Nathan; Biggar, Stuart; Anderson, Nik
2012-01-01
A significant problem facing the optical satellite calibration community is limited knowledge of the uncertainties associated with fundamental measurements, such as surface reflectance, used to derive satellite radiometric calibration estimates. In addition, it is difficult to compare the capabilities of calibration teams around the globe, which leads to differences in the estimated calibration of optical satellite sensors. This paper reports on two recent field campaigns that were designed to isolate common uncertainties within and across calibration groups, particularly with respect to ground-based surface reflectance measurements. Initial results from these efforts suggest the uncertainties can be as low as 1.5% to 2.5%. In addition, methods for improving the cross-comparison of calibration teams are suggested that can potentially reduce the differences in the calibration estimates of optical satellite sensors.
Ztoupis, I N; Gonos, I F; Stathopulos, I A
2013-11-01
Measurements of power frequency electric and magnetic fields from alternating current power lines are carried out in order to evaluate the exposure levels of the general public. For any electromagnetic field measurement it is necessary to identify the sources of measurement uncertainty and determine the total measurement uncertainty. This paper is concerned with the problems of measurement uncertainty estimation, as the uncertainty budget calculation techniques recommended in standards documents and research studies are barely described. In this work the total uncertainty of power frequency field measurements near power lines at various measurement sites is assessed by considering not only all available equipment data, but also contributions that depend on the measurement procedures, the environmental conditions and the characteristics of the field source, all of which increase the error of measurement. A detailed application example for power frequency field measurements performed by an accredited laboratory is presented.
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2013-10-01
Spaceborne measurements provide a global view of the atmospheric aerosol distribution. The Ozone Monitoring Instrument (OMI) on board NASA's Earth Observing System (EOS) Aura satellite is a Dutch-Finnish nadir-viewing solar backscatter spectrometer measuring in the ultraviolet and visible wavelengths. OMI measures several trace gases and aerosols that are important in many air quality and climate studies. The OMI aerosol measurements are used, for example, for detecting volcanic ash plumes, wildfires and the transport of desert dust. We present a methodology for improving the uncertainty quantification in the aerosol retrieval algorithm, using OMI measurements in this feasibility study. Our focus is on the uncertainties originating from the pre-calculated aerosol models, which are never complete descriptions of reality. This aerosol model uncertainty is estimated using Gaussian processes with computational tools from spatial statistics. Our approach is based on the smooth systematic differences between the observed and modelled reflectances. When this model inadequacy is acknowledged in the estimation of aerosol optical thickness (AOT), the uncertainty estimates are more realistic. We present a real-world example of applying the methodology.
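The idea of representing model inadequacy as a smooth Gaussian process over wavelength can be sketched as follows. The wavelength grid, residual shape and kernel hyperparameters are all invented for illustration and are not the OMI algorithm's values:

```python
import numpy as np

# Squared-exponential covariance over wavelength (hyperparameters assumed)
def sq_exp(x1, x2, sigma=0.01, ell=30.0):
    return sigma**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ell**2)

wl = np.linspace(330.0, 500.0, 35)        # nm, illustrative grid
resid = 0.008 * np.sin(wl / 40.0)         # smooth observed-minus-modelled
noise = 0.002                             # instrument noise level, assumed

# GP posterior mean of the smooth systematic component at the same grid:
# m = K (K + noise^2 I)^{-1} resid
K = sq_exp(wl, wl)
alpha = np.linalg.solve(K + noise**2 * np.eye(len(wl)), resid)
systematic = K @ alpha

# The variance attributed to this systematic component is what inflates
# the AOT uncertainty once model inadequacy is acknowledged.
print(float(np.std(systematic)))
```

The GP cleanly splits the residual into a smooth systematic part and instrument noise, which is the separation the abstract describes exploiting.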
Investment in flood protection measures under climate change uncertainty. An investment decision
Energy Technology Data Exchange (ETDEWEB)
Bruin, Karianne de
2012-11-01
Recent river flooding in Europe has triggered debates among scientists and policymakers on future projections of flood frequency and the need for adaptive investments, such as flood protection measures. Because there is uncertainty about the impact of climate change on flood risk, such investments require a careful analysis of expected benefits and costs. The objective of this paper is to show how climate change uncertainty affects the decision to invest in flood protection measures. We develop a model that simulates optimal decision-making in flood protection; it incorporates flexible timing of investment decisions and scientific uncertainty about the extent of climate change impacts. This model allows decision-makers to cope with the uncertain impacts of climate change on the frequency and damage of river flood events and minimises the risk of under- or over-investment. One of the innovative elements is that we explicitly distinguish between structural and non-structural flood protection measures. Our results show that the optimal investment decision today depends strongly on the cost structure of the adaptation measures and on the discount rate, especially the ratio of fixed to weighted annual costs of the measures. A higher level of annual flood damage and a later resolution of uncertainty increase the optimal investment. Furthermore, the optimal investment decision today is influenced by the decision-maker's option to adjust the decision at a future moment in time. (auth)
Account for uncertainties of control measurements in the assessment of design margin factors
International Nuclear Information System (INIS)
The paper discusses the feasibility of accounting for the uncertainties of control measurements in the estimation of design margin factors. This feasibility is also considered in the light of how far the processed measurement data were corrected by a priori calculated values of the measurable parameters. The possibility and expedience of such data correction are demonstrated by the authors with the help of Bayes' theorem, well known in mathematical statistics. (Authors)
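For the simplest case of normally distributed quantities, a Bayesian correction of a measured value by an a priori calculated one reduces to the conjugate normal-normal update. A sketch with hypothetical numbers, not taken from the paper:

```python
import numpy as np

def bayes_correct(mu_prior, u_prior, x_meas, u_meas):
    """Normal-normal Bayes update: combine an a priori calculated value
    (mu_prior +/- u_prior) with a control measurement (x_meas +/- u_meas).
    Returns the posterior mean and standard uncertainty."""
    w = u_meas**2 / (u_prior**2 + u_meas**2)          # weight on the prior
    mu_post = w * mu_prior + (1 - w) * x_meas
    u_post = np.sqrt(1.0 / (1.0 / u_prior**2 + 1.0 / u_meas**2))
    return mu_post, u_post

# Hypothetical: calculated parameter 100 +/- 4, control measurement 106 +/- 3
mu, u = bayes_correct(100.0, 4.0, 106.0, 3.0)
print(f"posterior: {mu:.2f} +/- {u:.2f}")
```

The posterior uncertainty is smaller than either input uncertainty, which is exactly the mechanism by which measured data "corrected" by calculation can tighten a margin factor estimate.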
Energy Technology Data Exchange (ETDEWEB)
Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)
2013-07-01
The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5{sup 0} resolution are applied for the western European domain where {approx}50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38 %. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than
A method to analyse measurement invariance under uncertainty in between-subjects design.
Martínez, José A; Ruiz Marin, Manuel; Vivo Molina, Maria del Carmen
2012-11-01
In this research we introduce a new test (H-test) for analysing scale invariance in between-group designs while considering uncertainty in individual responses, in order to study the adequacy of disparate rating and visual scales for measuring abstract concepts. The H-test is easy to compute and, as a nonparametric test, requires neither an a priori distribution of the data nor conditions on the variances of the distributions being tested. We apply the test to measure the perceived service quality reported by consumers of a sports service. Results show that, without considering uncertainty, the 1-7 scale is invariant, in line with related work on this topic, whereas both the 1-5 scale and the 1-7 scale are invariant when uncertainty is added to the analysis. Adding uncertainty therefore substantially changes the conclusions of the invariance analysis. Both types of visual scales are not invariant in the uncertainty scenario. Implications for the use of rating scales are discussed.
Evaluation of the measurement uncertainty in automated long-term sampling of PCDD/PCDFs.
Vicaretti, M; D'Emilia, G; Mosca, S; Guerriero, E; Rotatori, M
2013-12-01
Since the publication of the first version of European standard EN-1948 in 1996, long-term sampling equipment has been improved to a high standard for the sampling and analysis of polychlorodibenzo-p-dioxin (PCDD)/polychlorodibenzofuran (PCDF) emissions from industrial sources. Current automated PCDD/PCDF sampling systems make it possible to extend the sampling time from 6-8 h to 15-30 days, in order to obtain values that better represent the real long-term pollutant emission of the plant. EN-1948:2006 is still the European technical reference standard for the determination of PCDD/PCDF from stationary source emissions. In this paper, a methodology to estimate the measurement uncertainty of long-term automated sampling is presented. The methodology has been tested on a set of high-concentration sampling data resulting from a specific campaign; it is proposed with the intent that it be applied to further similar studies and generalized. A comparison between short-term sampling data from parallel manual and automated measurements was also considered, in order to verify the feasibility and usefulness of automated systems and to establish correlations between the results of the two methods, so that the manual method can be used to calibrate the automated long-term one. The uncertainty components of the manual method are analysed following the requirements of EN-1948-3:2006, allowing a preliminary evaluation of the corresponding uncertainty components of the automated system. Then, a comparison between experimental data from parallel sampling campaigns carried out over short- and long-term sampling periods is presented. Long-term sampling is more reliable for monitoring PCDD/PCDF emissions than occasional short-term sampling, and automated sampling systems can provide very useful emission data in both short and long sampling periods. Despite this, due to the different application of the long-term sampling systems, the automated results could not be
International Nuclear Information System (INIS)
This paper presents a new approach to reducing dynamically influenced measurement uncertainties in optical precision coordinate metrology (OPCM), which uses image sensors to measure geometrical features. Dynamic measurements, in the context of this paper, are based on relative motion between the imaging setup (CCD camera and optical system) and the measurement object or scene. Dynamic image acquisition causes motion blur, which degrades the uncertainty of the measurand. The approach presented is a new technique for restoring motion-degraded images that analyses important image features, extending the well-known Richardson-Lucy image restoration technique with a new convergence criterion based on the variation of the detectable sub-pixel edge position from one iteration to the next
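The classical Richardson-Lucy iteration that the paper extends can be sketched in one dimension. The box PSF and step-edge scene below are illustrative, and the paper's sub-pixel-edge convergence criterion is not reproduced here (a fixed iteration count is used instead):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Classical Richardson-Lucy iteration for a 1-D signal: a minimal
    sketch using a fixed iteration count in place of the paper's
    edge-position-based stopping rule."""
    est = np.full_like(blurred, blurred.mean())   # flat positive start
    psf_m = psf[::-1]                             # mirrored PSF
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.clip(conv, 1e-12, None)
        est *= np.convolve(ratio, psf_m, mode="same")
    return est

# Motion blur of a step edge (illustrative): box PSF of 5 pixels,
# as produced by uniform relative motion during exposure
psf = np.ones(5) / 5.0
truth = np.where(np.arange(64) < 32, 1.0, 3.0)
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

After the iterations the edge transition is markedly steeper than in the blurred input, which is what makes the sub-pixel edge position detectable again for the metrology task.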
Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A
2011-07-01
The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the absorption coefficient determined using the PU technique is explored by applying the Monte Carlo method. It is shown that, because of the uncertainty, it is particularly difficult to measure samples with low absorption, and that difficulties associated with localizing the acoustic centres of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as errors in the transfer function between pressure and particle velocity do.
Tracy, Allison J.; Erkut, Sumru; Porche, Michelle V.; Kim, Jo; Charmaraman, Linda; Grossman, Jennifer M.; Ceder, Ineke; Garcia, Heidie Vazquez
2010-01-01
In this article, we operationalize identification of mixed racial and ethnic ancestry among adolescents as a latent variable to (a) account for measurement uncertainty, and (b) compare alternative wording formats for racial and ethnic self-categorization in surveys. Two latent variable models were fit to multiple mixed-ancestry indicator data from…
Error analysis and measurement uncertainty for a fiber grating strain-temperature sensor.
Tang, Jaw-Luen; Wang, Jian-Neng
2010-01-01
A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10⁻⁶ ε and 3.59 × 10⁻⁵ ε + 0.01887 T, respectively. Using the estimation of expanded uncertainty at 95% confidence level with a coverage factor of k = 2.205, temperature and strain measurement uncertainties were evaluated as 2.60 °C and 32.05 με, respectively. For the first time, to our knowledge, we have demonstrated the feasibility of estimating the measurement uncertainty for simultaneous strain-temperature sensing with such a fiber grating sensor.
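Combining error terms of this kind into an expanded uncertainty follows the usual root-sum-square rule. The operating point below is hypothetical; only the abstract's published coefficients and coverage factor are reused, and treating the cross-sensitivity terms as standard-uncertainty contributions is an assumption made for illustration:

```python
import numpy as np

# Hypothetical operating point (not from the paper's experiments)
T, eps = 25.0, 500.0                  # deg C, microstrain

# Strain-channel contributions, patterned on the abstract's error model
u_components = np.array([
    6.0,                              # RMS strain error, microstrain
    3.59e-5 * eps,                    # strain self-term of the max error
    0.01887 * T,                      # temperature cross-sensitivity term
])

u_c = np.sqrt(np.sum(u_components**2))   # combined standard uncertainty
k = 2.205                                # coverage factor, ~95 % level
print(f"u_c = {u_c:.2f} microstrain, U = {k * u_c:.2f} microstrain")
```

Scaling the combined value by the coverage factor gives the expanded uncertainty, mirroring the procedure the abstract describes for its 2.60 °C and 32.05 με figures.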
Use of Total Possibilistic Uncertainty as a Measure of Students' Modelling Capacities
Voskoglou, Michael Gr.
2010-01-01
We represent the main stages of the process of mathematical modelling as fuzzy sets in the set of the linguistic labels of negligible, low intermediate, high and complete success by students in each of these stages and we use the total possibilistic uncertainty as a measure of students' modelling capacities. A classroom experiment is also…
A Monte Carlo approach for estimating measurement uncertainty using standard spreadsheet software.
Chew, Gina; Walczyk, Thomas
2012-03-01
Despite the importance of stating the measurement uncertainty in chemical analysis, the underlying concepts are still not widely applied by the broader scientific community. The Guide to the Expression of Uncertainty in Measurement permits the use of both the partial derivative approach and the Monte Carlo approach. There are two limitations to the partial derivative approach. Firstly, it involves the computation of first-order derivatives of each component of the output quantity. This requires some mathematical skill and can be tedious if the mathematical model is complex. Secondly, it is not able to predict the probability distribution of the output quantity accurately if the input quantities are not normally distributed. Knowledge of the probability distribution is essential to determine the coverage interval. The Monte Carlo approach performs random sampling from the probability distributions of the input quantities; hence, there is no need to compute first-order derivatives. In addition, it gives the probability density function of the output quantity as the end result, from which the coverage interval can be determined. Here we demonstrate how the Monte Carlo approach can be easily implemented to estimate measurement uncertainty using a standard spreadsheet program such as Microsoft Excel. Our aim is to provide the analytical community with a tool to estimate measurement uncertainty using software that is already widely available and that is so simple to apply that it can even be used by students with basic computer skills and minimal mathematical knowledge.
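The same spreadsheet-style Monte Carlo is equally easy in a scripting language: one simulated column per input quantity, then summary statistics on the output column. The measurement model and all numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

# Hypothetical calibration-type model, spreadsheet-style: one simulated
# column per input quantity. c = (A_sample / A_standard) * c_standard * D
A_sample   = rng.normal(0.842, 0.010, N)   # absorbance, type-A uncertainty
A_standard = rng.normal(0.910, 0.008, N)
c_standard = rng.normal(50.0, 0.25, N)     # mg/L, from a certificate
dilution   = rng.uniform(9.95, 10.05, N)   # rectangular, from a tolerance

c = A_sample / A_standard * c_standard * dilution

print(f"c = {c.mean():.2f} mg/L, u(c) = {c.std(ddof=1):.2f} mg/L")
lo, hi = np.percentile(c, [2.5, 97.5])
print(f"95 % coverage interval: [{lo:.2f}, {hi:.2f}] mg/L")
```

No derivatives are needed, and the coverage interval is read directly from the simulated output distribution, exactly as in the spreadsheet version the paper describes.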
Aid instability as a measure of uncertainty and the positive impact of aid on growth
Lensink, R; Morrissey, O
2000-01-01
This article contributes to the literature on aid and economic growth. We posit that uncertainty, measured as the instability of aid receipts, will influence the relationship between aid and investment, how recipient governments respond to aid, and will capture the fact that some countries are espec
Measuring Cross-Section and Estimating Uncertainties with the fissionTPC
Energy Technology Data Exchange (ETDEWEB)
Bowden, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manning, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sangiorgio, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seilhan, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-01-30
The purpose of this document is to outline the prescription for measuring fission cross-sections with the NIFFTE fissionTPC and estimating the associated uncertainties. As such it will serve as a work planning guide for NIFFTE collaboration members and facilitate clear communication of the procedures used to the broader community.
Biogenic carbon in combustible waste: Waste composition, variability and measurement uncertainty
DEFF Research Database (Denmark)
Larsen, Anna Warberg; Fuglsang, Karsten; Pedersen, Niels H.;
2013-01-01
, the measurement uncertainties related to the two approaches were determined. Two flue gas sampling campaigns at a full-scale waste incinerator were included: one during normal operation and one with controlled waste input. Estimation of carbon contents in the main waste types received was included. Both the 14C...... method and the balance method represented promising methods able to provide good quality data for the ratio between biogenic and fossil carbon in waste. The relative uncertainty in the individual experiments was 7–10% (95% confidence interval) for the 14C method and slightly lower for the balance method....
International Nuclear Information System (INIS)
This report summarizes the procedure used to calculate the uncertainties associated with environmental radioactivity measurements, focusing on those obtained by radiochemical separation in which tracers have been added. Uncertainties linked to activity concentration calculations, isotopic ratios, inventories, sequential leaching data, chronological dating using the C.R.S. model and duplicate analysis are described in detail. The objective of this article is to serve as a guide for people not familiar with this kind of calculation, showing clear practical examples. The entry of the formulas and all the data needed for these calculations into Lotus 1-2-3 WIN is outlined as well. (Author) 13 refs
Directory of Open Access Journals (Sweden)
K. J. Franz
2011-11-01
Full Text Available The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community still performs model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we review existing probabilistic forecast verification methods and apply them to evaluate and compare model ensembles produced by two different parameter uncertainty estimation methods: Generalized Likelihood Uncertainty Estimation (GLUE) and the Shuffled Complex Evolution Metropolis (SCEM) algorithm. Model ensembles are generated for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the Southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information beyond what is already available as part of traditional model validation methodology, and considers the entire ensemble, or uncertainty range, in the approach.
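One of the simplest verification tools in the "distribution" category, the rank (Talagrand) histogram, checks whether observations are statistically indistinguishable from ensemble members. A synthetic sketch with an artificially well-calibrated ensemble:

```python
import numpy as np

rng = np.random.default_rng(1)

def rank_histogram(obs, ens):
    """Rank of each observation within its ensemble (counting members
    below it); a flat histogram indicates a reliable ensemble."""
    ranks = np.sum(ens < obs[:, None], axis=1)
    return np.bincount(ranks, minlength=ens.shape[1] + 1)

# Synthetic check: a calibrated 20-member ensemble over 5000 cases,
# truth and members drawn from the same distribution
n_cases, n_mem = 5000, 20
truth = rng.normal(0.0, 1.0, n_cases)
ens = rng.normal(0.0, 1.0, (n_cases, n_mem))
hist = rank_histogram(truth, ens)
print(hist)   # roughly uniform across the 21 bins
```

An under-dispersive ensemble would instead pile observations into the outermost bins (a U shape), flagging parameter uncertainty that the ensemble fails to span.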
de Barros, Felipe P. J.; Ezzedine, Souheil; Rubin, Yoram
2012-02-01
The significance of conditioning predictions of environmental performance metrics (EPMs) on hydrogeological data in heterogeneous porous media is addressed. Conditioning EPMs on available data reduces uncertainty and increases the reliability of model predictions. We present a rational and concise approach to investigate the impact of conditioning EPMs on data as a function of the location of the environmentally sensitive target receptor, data types and spacing between measurements. We illustrate how the concept of comparative information yield curves introduced in de Barros et al. [de Barros FPJ, Rubin Y, Maxwell R. The concept of comparative information yield curves and its application to risk-based site characterization. Water Resour Res 2009;45:W06401. doi:10.1029/2008WR007324] could be used to assess site characterization needs as a function of flow and transport dimensionality and EPMs. For a given EPM, we show how alternative uncertainty reduction metrics yield distinct gains of information from a variety of sampling schemes. Our results show that uncertainty reduction is EPM dependent (e.g., travel times) and does not necessarily indicate uncertainty reduction in an alternative EPM (e.g., human health risk). The results show how the position of the environmental target, flow dimensionality and the choice of the uncertainty reduction metric can be used to assist in field sampling campaigns.
Xue, Zhong; Xu, Bing; Shi, Xinyuan; Yang, Chan; Cui, Xianglong; Luo, Gan; Qiao, Yanjiang
2017-01-01
This study presents a new strategy for measuring the overall uncertainty of near-infrared (NIR) quantitative analysis of cryptotanshinone in tanshinone extract powders. The overall uncertainty of the NIR analysis was fully investigated and discussed using validation data from precision, trueness and robustness studies. Quality-by-design (QbD) elements, such as risk assessment and design of experiments (DOE), were utilized to organize the validation data. An I×J×K (series I, number of repetitions J, concentration levels K) full factorial design was used to calculate uncertainty from the precision and trueness data, and a 2^(7−4) Plackett–Burman matrix with four influence factors identified by failure mode and effects analysis (FMEA) was adopted for the robustness study. The overall uncertainty profile is introduced as a graphical decision-making tool to evaluate the validity of the NIR method over the predefined concentration range. In comparison with T. Saffaj's method (Analyst, 2013, 138, 4677) for overall uncertainty assessment, the proposed approach gave almost the same results, demonstrating that it is reasonable and valid. Moreover, the proposed method can help identify critical factors that influence NIR prediction performance, which could be used for further optimization of the NIR analytical procedures in routine use.
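The I×J×K precision data described above lend themselves to a one-way variance decomposition per concentration level. The helper below is a hypothetical sketch (function name, data layout and all numbers are ours, not the authors') of how repeatability and intermediate-precision standard deviations might be extracted from such a design.

```python
import numpy as np

def intermediate_precision(data):
    """data: I series (rows) x J replicates (columns) at one level.
    Returns (s_r, s_IP): repeatability and intermediate-precision SDs
    from a one-way variance decomposition."""
    data = np.asarray(data, float)
    I, J = data.shape
    series_means = data.mean(axis=1)
    s_r2 = data.var(axis=1, ddof=1).mean()               # within-series
    s_between2 = max(series_means.var(ddof=1) - s_r2 / J, 0.0)
    return np.sqrt(s_r2), np.sqrt(s_r2 + s_between2)

# illustrative 3-series x 3-replicate block (synthetic values)
s_r, s_ip = intermediate_precision([[99.8, 100.1, 99.9],
                                    [100.3, 100.0, 100.2],
                                    [99.7, 100.0, 99.8]])
```

The intermediate-precision SD then feeds the uncertainty profile together with the trueness (bias) component.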
Optimal entropic uncertainty relation for successive measurements in quantum information theory
Indian Academy of Sciences (India)
M D Srinivas
2003-06-01
We derive an optimal bound on the sum of entropic uncertainties of two or more observables when they are sequentially measured on the same ensemble of systems. This optimal bound is shown to be greater than or equal to the bounds derived in the literature on the sum of entropic uncertainties of two observables which are measured on distinct but identically prepared ensembles of systems. In the case of a two-dimensional Hilbert space, the optimum bound for successive measurements of two spin components is seen to be strictly greater than the optimal bound for the case when they are measured on distinct ensembles, except when the spin components are mutually parallel or perpendicular.
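For the two-dimensional (qubit) case discussed above, the entropies of two spin measurements can be computed directly. The sketch below evaluates H(S_z) + H(S_θ) for a pure state and compares it with the Maassen–Uffink bound for measurements on distinct ensembles; the paper's stronger successive-measurement bound is not reproduced here.

```python
import numpy as np

def shannon(p):
    p = np.asarray(p, float)
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

def qubit_entropy_sum(theta, state):
    """H(S_z) + H(S_theta) for a qubit state (2-vector), where S_theta
    is spin along a direction at angle theta from z in the x-z plane."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    pz = np.abs(state) ** 2                       # S_z outcome probabilities
    up = c * state[0] + s * state[1]              # overlap with spin-theta "up"
    dn = -s * state[0] + c * state[1]
    pt = np.array([abs(up) ** 2, abs(dn) ** 2])   # S_theta probabilities
    return shannon(pz) + shannon(pt)

# distinct-ensemble (Maassen-Uffink) bound: -2*log2(max eigenvector overlap)
theta = np.pi / 2
bound = -2 * np.log2(max(np.cos(theta / 2), np.sin(theta / 2)))  # 1 bit
state = np.array([1.0, 0.0])                      # S_z eigenstate
print(qubit_entropy_sum(theta, state) >= bound - 1e-9)  # True
```

For perpendicular components and an S_z eigenstate the sum saturates the 1-bit distinct-ensemble bound, which is consistent with the paper's remark that the two bounds coincide for perpendicular spin components.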
LOWERING UNCERTAINTY IN CRUDE OIL MEASUREMENT BY SELECTING OPTIMIZED ENVELOPE COLOR OF A PIPELINE
Directory of Open Access Journals (Sweden)
Morteza Saadat
2011-01-01
Lowering uncertainty in crude oil volume measurement is widely considered one of the main goals in an oil export terminal. Crude oil temperature at the metering station has a large effect on the measured volume and may cause large uncertainty at the metering point. As crude oil flows through an aboveground pipeline, it picks up solar radiation and heats up, causing the oil temperature at the metering point to rise and the uncertainty to grow. The amount of temperature rise depends on the color of the exterior surface paint. On Kharg Island, there is about 3 km between the oil storage tanks and the metering point; the oil flows through the pipeline under gravity, as the storage tanks are located 60 m above the metering point. In this study, an analytical model is developed for predicting the oil temperature at the pipeline exit (the metering point) based on the climate and geographical conditions of Kharg Island. The temperature at the metering point is calculated, the effects of envelope color are investigated, and the resulting uncertainty in the measurement system due to temperature rise is studied.
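The analytical model described above amounts to an energy balance along the pipe, with the paint's solar absorptivity as the color-dependent parameter. The sketch below is a lumped steady-state version under stated assumptions; every parameter value is illustrative, not Kharg Island data, and the paper's actual model is more detailed.

```python
import numpy as np

def exit_temperature(T_in, T_amb, alpha, L=3000.0, D=1.0, G=800.0,
                     h=15.0, mdot=500.0, cp=2000.0):
    """Steady-state lumped model of oil heating in a sun-exposed pipe.
    alpha: solar absorptivity of the paint (white ~0.25, black ~0.95).
    Energy balance per unit length (all values illustrative):
      mdot*cp*dT/dx = alpha*G*D - h*(pi*D)*(T - T_amb)."""
    k = h * np.pi * D / (mdot * cp)
    T_eq = T_amb + alpha * G / (h * np.pi)     # radiative equilibrium temp
    return T_eq + (T_in - T_eq) * np.exp(-k * L)

# darker paint -> higher exit temperature -> larger volume-measurement bias
t_white = exit_temperature(40.0, 35.0, 0.25)
t_black = exit_temperature(40.0, 35.0, 0.95)
print(t_black > t_white)  # True
```

The exponential form follows from the linear first-order balance; a very long pipe relaxes to the equilibrium temperature set by absorptivity and the convection coefficient.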
Research on Correction Methods of the Historical Cost Measurement Attribute
Institute of Scientific and Technical Information of China (English)
Liu Qun; Liao Zhengfang
2013-01-01
Accounting is a process of recognition, measurement, recording and reporting. Over its long evolution, the techniques of accounting recognition, recording and reporting have become relatively mature, while accounting measurement continually faces challenges due to the uncertainty of the measurement environment; the challenges accounting will face in the future are, in essence, challenges to its measurement attributes. China's current accounting standards introduce multiple additional measurement attributes on the basis of the historical cost measurement attribute and add asset impairment accounting. The fundamental purpose is to correct the historical cost information of the measured objects, so as to improve the relevance of accounting information and enhance its social value as a public good.
Environmental Effects on Measurement Uncertainties of Time-of-Flight Cameras
DEFF Research Database (Denmark)
Gudmundsson, Sigurjon Arni; Aanæs, Henrik; Larsen, Rasmus
2007-01-01
Several factors are known to influence the accuracy, for example the angle of the objects to the emitted light and the scattering effects of near objects. In this paper a general overview of such known inaccuracy factors is given, followed by experiments illustrating additional uncertainty factors. Specifically, we give a better description of how surface color intensity influences the depth measurement, and illustrate how multiple reflections influence the resulting depth measurement.
Dead time effect on the Brewer measurements: correction and estimated uncertainties
Fountoulakis, Ilias; Redondas, Alberto; Bais, Alkiviadis F.; Rodriguez-Franco, Juan José; Fragkos, Konstantinos; Cede, Alexander
2016-01-01
Brewer spectrophotometers are widely used instruments which perform spectral measurements of the direct, the scattered and the global solar UV irradiance. By processing these measurements a variety of secondary products can be derived such as the total columns of ozone (TOC), sulfur dioxide and nitrogen dioxide and aerosol optical properties. Estimating and limiting the uncertainties of the final products is of critical importance. High-quality data have a lot of applications a...
International Nuclear Information System (INIS)
This paper is a summary of broadband measurement values of radiofrequency radiation around GSM base stations in the vicinity of residential areas in Belgrade and 12 other cities in Serbia. It will be useful for determining non-ionizing radiation exposure levels of the general public in the future. The purpose of this paper is also an appropriate representation of basic information on the evaluation of measurement uncertainty. (author)
Näykki, Teemu; Virtanen, Atte; Kaukonen, Lari; Magnusson, Bertil; Väisänen, Tero; Leito, Ivo
2015-10-01
Field sensor measurements are becoming more common for environmental monitoring. Solutions for enhancing reliability, i.e. knowledge of the measurement uncertainty of field measurements, are urgently needed. Real-time estimations of measurement uncertainty for field measurement have not previously been published, and in this paper, a novel approach to the automated turbidity measuring system with an application for "real-time" uncertainty estimation is outlined based on the Nordtest handbook's measurement uncertainty estimation principles. The term real-time is written in quotation marks, since the calculation of the uncertainty is carried out using a set of past measurement results. There are two main requirements for the estimation of real-time measurement uncertainty of online field measurement described in this paper: (1) setting up an automated measuring system that can be (preferably remotely) controlled which measures the samples (water to be investigated as well as synthetic control samples) the way the user has programmed it and stores the results in a database, (2) setting up automated data processing (software) where the measurement uncertainty is calculated from the data produced by the automated measuring system. When control samples with a known value or concentration are measured regularly, any instrumental drift can be detected. An additional benefit is that small drift can be taken into account (in real-time) as a bias value in the measurement uncertainty calculation, and if the drift is high, the measurement results of the control samples can be used for real-time recalibration of the measuring device. The procedure described in this paper is not restricted to turbidity measurements, but it will enable measurement uncertainty estimation for any kind of automated measuring system that performs sequential measurements of routine samples and control samples/reference materials in a similar way as described in this paper.
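The Nordtest handbook approach mentioned above combines a within-laboratory reproducibility component with a bias component estimated from control-sample measurements. A minimal sketch (parameter names are ours, and the input statistics are assumed to come from the kind of automated control-sample runs described):

```python
import math

def nordtest_uncertainty(s_Rw, bias_values, u_cref):
    """Nordtest-style combined uncertainty:
      u_c = sqrt(u_Rw^2 + u_bias^2),
      u_bias = sqrt(RMS_bias^2 + u_cref^2),
    where s_Rw is the within-lab reproducibility SD, bias_values are
    observed control-sample biases, and u_cref is the reference-value
    uncertainty. Returns (u_c, U) with expanded U = 2*u_c (k=2)."""
    rms_bias = math.sqrt(sum(b * b for b in bias_values) / len(bias_values))
    u_bias = math.sqrt(rms_bias ** 2 + u_cref ** 2)
    u_c = math.sqrt(s_Rw ** 2 + u_bias ** 2)
    return u_c, 2.0 * u_c
```

In the "real-time" scheme of the paper, s_Rw and the bias values are recomputed from a rolling window of past control-sample results, so the reported uncertainty tracks instrumental drift.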
International Nuclear Information System (INIS)
A characteristic that sets radioactivity measurements apart from most spectrometries is that the precision of a single determination can be estimated from Poisson statistics. This easily calculated counting uncertainty permits the detection of other sources of uncertainty by comparing observed with a priori precision. A good way to test the many underlying assumptions in radiochemical measurements is to strive for high accuracy. For example, a measurement by instrumental neutron activation analysis (INAA) of gold film thickness in our laboratory revealed the need for pulse pileup correction even at modest dead times. Recently, the International Organization for Standardization (ISO) and other international bodies have formalized the quantitative determination and statement of uncertainty so that the weaknesses of each measurement are exposed for improvement. In the INAA certification measurement of ion-implanted arsenic in silicon (Standard Reference Material 2134), we recently achieved an expanded (95 % confidence) relative uncertainty of 0.38 % for 90 ng of arsenic per sample. A complete quantitative error analysis was performed. This measurement meets the CCQM definition of a primary ratio method. (author)
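The comparison of observed with a priori (Poisson) precision can be sketched as follows; this is a generic illustration, not the authors' analysis code.

```python
import math

def counting_uncertainty(counts):
    """A priori relative standard uncertainty of a gross count:
    Poisson statistics give sigma = sqrt(N), so u_rel = sqrt(N)/N."""
    return math.sqrt(counts) / counts

def excess_dispersion(replicates):
    """Ratio of observed to Poisson-predicted variance for replicate
    counts. Values well above 1 flag uncertainty sources beyond
    counting statistics (e.g. pileup, geometry, timing)."""
    n = len(replicates)
    mean = sum(replicates) / n
    var = sum((x - mean) ** 2 for x in replicates) / (n - 1)
    return var / mean   # Poisson predicts var == mean
```

For example, 10^4 counts give a 1 % a priori relative uncertainty; replicate counts scattering much more than sqrt(N) indicate that other error sources dominate.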
Lukewich, Julia; Corbin, Renée; Elizabeth G VanDenKerkhof; Edge, Dana S.; Williamson, Tyler; Tranmer, Joan E.
2014-01-01
Rationale, aims and objectives: Given the increasing emphasis placed on managing patients with chronic diseases within primary care, there is a need to better understand which primary care organizational attributes affect the quality of care that patients with chronic diseases receive. This study aimed to identify, summarize and compare data collection tools that describe and measure organizational attributes used within the primary care setting worldwide. Methods: Systematic search and r...
Directory of Open Access Journals (Sweden)
Soumyabrata eDey
2014-06-01
Attention Deficit Hyperactivity Disorder (ADHD) is getting a lot of attention recently for two reasons: first, it is one of the most commonly found childhood disorders, and second, the root cause of the problem is still unknown. Functional magnetic resonance imaging (fMRI) data has become a popular tool for the analysis of ADHD, which is the focus of our current research. In this paper we propose a novel framework for the automatic classification of ADHD subjects using resting-state fMRI (rs-fMRI) data of the brain. We construct brain functional connectivity networks for all the subjects. The nodes of the network are constructed from clusters of highly active voxels, and edges between any pair of nodes represent the correlations between their average fMRI time series. The activity level of the voxels is measured based on the average power of their corresponding fMRI time series. For each node of the networks, a local descriptor comprising a set of attributes of the node is computed. Next, the multidimensional scaling (MDS) technique is used to project all the subjects from the unknown graph space to a low-dimensional space based on their inter-graph distance measures. Finally, a support vector machine (SVM) classifier is used on the low-dimensional projected space for automatic classification of the ADHD subjects. Exhaustive experimental validation of the proposed method is performed using the data set released for the ADHD-200 competition. Our method shows promise, achieving classification accuracies of 70.49% on the training set and 73.55% on the test set. Our results reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects.
Energy Technology Data Exchange (ETDEWEB)
Newsom, Rob [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2016-03-01
In March and April of 2015, the ARM Doppler lidar that was formerly operated at the Tropical Western Pacific site in Darwin, Australia (S/N 0710-08) was deployed to the Boulder Atmospheric Observatory (BAO) for the eXperimental Planetary boundary-layer Instrument Assessment (XPIA) field campaign. The goal of the XPIA field campaign was to investigate methods of using multiple Doppler lidars to obtain high-resolution three-dimensional measurements of winds and turbulence in the atmospheric boundary layer, and to characterize the uncertainties in these measurements. The ARM Doppler lidar was one of many Doppler lidar systems that participated in this study. During XPIA the 300-m tower at the BAO site was instrumented with well-calibrated sonic anemometers at six levels. These sonic anemometers provided highly accurate reference measurements against which the lidars could be compared. Thus, the deployment of the ARM Doppler lidar during XPIA offered a rare opportunity for the ARM program to characterize the uncertainties in their lidar wind measurements. Results of the lidar-tower comparison indicate that the lidar wind speed measurements are essentially unbiased (~1cm s-1), with a random error of approximately 50 cm s-1. Two methods of uncertainty estimation were tested. The first method was found to produce uncertainties that were too low. The second method produced estimates that were more accurate and better indicators of data quality. As of December 2015, the first method is being used by the ARM Doppler lidar wind value-added product (VAP). One outcome of this work will be to update this VAP to use the second method for uncertainty estimation.
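The lidar-tower comparison described above reduces to statistics on paired lidar/sonic differences, plus a check of whether the reported uncertainties are good indicators of data quality. The sketch below is a generic illustration; the function names and the 2-sigma coverage criterion are ours, not the ARM VAP's.

```python
import numpy as np

def bias_and_random_error(lidar, sonic):
    """Mean difference (bias) and SD of differences (random error)
    between collocated lidar and sonic-anemometer wind speeds."""
    d = np.asarray(lidar, float) - np.asarray(sonic, float)
    return d.mean(), d.std(ddof=1)

def fraction_within_2u(lidar, sonic, u):
    """Fraction of |differences| within twice the reported uncertainty u.
    Values near 0.95 suggest u is a realistic uncertainty estimate;
    much smaller values suggest u is too low."""
    d = np.abs(np.asarray(lidar, float) - np.asarray(sonic, float))
    return float(np.mean(d <= 2.0 * np.asarray(u, float)))
```

This kind of coverage check is one way an uncertainty-estimation method can be found "too low" against a well-calibrated reference, as reported for the first method tested in the campaign.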
A Monte-Carlo investigation of the uncertainty of acoustic decay measurements
DEFF Research Database (Denmark)
Cabo, David Pérez; Seoane, Manuel A. Sobreira; Jacobsen, Finn
2012-01-01
Measurement of acoustic decays can be problematic at low frequencies: short decays cannot be evaluated accurately. Several effects influencing the evaluation are reviewed in this paper. As a new contribution, the measurement uncertainty due to one-third octave band-pass filters is analysed. A Monte Carlo model has been set up: the model function is a model of the acoustic decays, where the modal density, the resonances of the system, and the amplitude and phase of the normal modes may be considered as random variables. Once the random input variables and the model function are defined, the uncertainty of acoustic decay measurements can be estimated. Different filters are analysed: linear-phase FIR and IIR filters, both in their direct and time-reversed versions. © European Acoustics Association.
van Dijk, J W E
2014-12-01
Two measurement models for passive dosemeters such as thermoluminescent, optically stimulated luminescence, radio-photoluminescence, photographic film or track-etch dosemeters are discussed. The first model considers the dose evaluation with the reading equipment as a single measurement (the one-stage model). The second model considers the build-up of a latent signal or latent image in the detector during exposure and the evaluation using a reader system as two separate measurements (the two-stage model). It is argued that the two-stage model better reflects the cause-and-effect relations and the course of events in the daily practice of a routine dosimetry service. The one-stage model will be non-linear in crucial input quantities, which can give rise to erroneous behaviour of an uncertainty evaluation based on the law of propagation of uncertainty. Input quantities with asymmetric probability distributions propagate through the one-stage model in a physically irrelevant way.
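Where a measurement model is non-linear, Monte Carlo propagation (in the spirit of GUM Supplement 1) sidesteps the pitfalls of the first-order law of propagation of uncertainty noted above. A generic sketch, not tied to any particular dosemeter model:

```python
import random

def mc_uncertainty(f, means, sds, n=20000, seed=0):
    """Monte Carlo uncertainty propagation: sample the inputs from
    normal distributions, push them through the model f, and
    summarize the output with its mean and standard deviation."""
    rng = random.Random(seed)
    ys = [f(*[rng.gauss(m, s) for m, s in zip(means, sds)])
          for _ in range(n)]
    mean = sum(ys) / n
    sd = (sum((y - mean) ** 2 for y in ys) / (n - 1)) ** 0.5
    return mean, sd

# For a non-linear model (e.g. reading divided by sensitivity) the
# output distribution is skewed, which the first-order law of
# propagation of uncertainty cannot represent.
m, s = mc_uncertainty(lambda reading, sens: reading / sens,
                      [100.0, 2.0], [1.0, 0.1])
```

Unlike the linear law, the Monte Carlo output also captures asymmetry of the propagated distribution, which is exactly the failure mode the abstract attributes to the one-stage model.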
McDonnell, J D; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W
2015-01-01
Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models; to estimate model errors and thereby improve predictive capability; to extrapolate beyond the regions reached by experiment; and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, the two-neutron drip line, and fission barriers.
Lacerda, Márcio J.; Tognetti, Eduardo S.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.
2016-04-01
This paper presents a general framework to cope with full-order H∞ linear parameter-varying (LPV) filter design subject to inexactly measured parameters. The main novelty is the ability to handle additive and multiplicative uncertainties in the measurements, for both continuous- and discrete-time LPV systems, in a unified approach. By conveniently modelling the scheduling parameters and the uncertainties affecting the measurements, the H∞ filter design problem can be expressed in terms of robust matrix inequalities that become linear when two scalar parameters are fixed. Therefore, the proposed conditions can be efficiently solved through linear matrix inequality relaxations based on polynomial solutions. Numerical examples are presented to illustrate the improved efficiency of the proposed approach when compared to other methods and, more importantly, its capability to deal with scenarios where the available strategies in the literature cannot be used.
Directory of Open Access Journals (Sweden)
R. Raj
2015-08-01
Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is used increasingly to validate process-based simulators and remote sensing-derived estimates of simulated GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps, which can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half-hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, The Netherlands. The NRH model includes the variables that influence GPP, in particular radiation and temperature, and provides a robust empirical relationship between radiation and GPP by including the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) of 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter and used Markov chain Monte Carlo (MCMC) simulation to update it. This allowed us to estimate the uncertainty in the separated GPP at half-hourly time steps, yielding the posterior distribution of GPP at each half hour and allowing the quantification of uncertainty. The time series of posterior distributions thus obtained also allowed us to estimate the uncertainty at daily time steps. We compared informative with non-informative prior distributions of the NRH parameters; the results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
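The non-rectangular hyperbola used for the GPP separation can be written as the smaller root of a quadratic in GPP. The sketch below is a plausible form of the model; the parameter names and sign convention are assumptions on our part, and the paper's exact formulation (including the temperature dependence of respiration) may differ.

```python
import numpy as np

def nrh_nee(Q, alpha, Amax, theta, Rd):
    """Non-rectangular hyperbola light response:
    GPP is the smaller root of
        theta*P**2 - (alpha*Q + Amax)*P + alpha*Q*Amax = 0,
    with Q the incident light (PAR), alpha the initial slope,
    Amax the asymptote, theta the curvature (0 < theta <= 1),
    and NEE = Rd - GPP (Rd: ecosystem respiration)."""
    b = alpha * Q + Amax
    gpp = (b - np.sqrt(b * b - 4.0 * theta * alpha * Q * Amax)) / (2.0 * theta)
    return Rd - gpp

# in darkness NEE equals respiration; at high light GPP saturates at Amax
par = np.linspace(0.0, 2000.0, 5)
nee = nrh_nee(par, alpha=0.05, Amax=20.0, theta=0.9, Rd=2.0)
```

In a Bayesian treatment such as the paper's, MCMC samples of (alpha, Amax, theta, Rd) drawn from the posterior would be pushed through this function to obtain a posterior distribution of GPP at each half hour.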
Energy Technology Data Exchange (ETDEWEB)
McDonnell, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schunck, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Higdon, D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sarich, J. [Argonne National Lab. (ANL), Argonne, IL (United States); Wild, S. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Nazarewicz, W. [Michigan State Univ., East Lansing, MI (United States); Oak Ridge National Lab., Oak Ridge, TN (United States); Univ. of Warsaw, Warsaw (Poland)
2015-03-24
Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
Uncertainty Factors for Stage-Specific and Cumulative Results of Indirect Measurements
Datta, B P
2009-01-01
Evaluation of a variable Yd from certain measured variable(s) Xi(s), by making use of their system-specific relationship (SSR), is generally referred to as indirect measurement. The SSR may stand for a simple data-translation process in one case, but for a set of equations, or even a cascade of different such processes, in another. Further, though the measurements are a priori ensured to be accurate, there is no definite method for examining whether the result obtained at the end of an SSR, specifically a cascade of SSRs, is really as representative as the measured Xi-values. Of course, it was recently shown that the uncertainty (ed) in the estimate (yd) of a specified Yd is given by a specified linear combination of the corresponding measurement uncertainties (uis). Here, further insight into this principle is provided by its application to cases represented by cascade SSRs. It is exemplified how the different stage-wise uncertainties (Ied, IIed, ... ed), that is to say the requirements for t...
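The quoted result, that the uncertainty in yd is a linear combination of the measurement uncertainties, is the familiar first-order propagation rule. A minimal sketch (generic; the paper's stage-wise coefficients are not reproduced), where applying the rule stage by stage composes the cascade:

```python
def propagate(sensitivities, uncertainties):
    """First-order (linear) uncertainty combination:
        u_y = sqrt( sum_i (c_i * u_i)**2 ),
    with c_i the sensitivity coefficients d(y)/d(x_i).
    For a cascade of SSRs, feed each stage's output uncertainty
    into the next stage as an input uncertainty."""
    return sum((c * u) ** 2
               for c, u in zip(sensitivities, uncertainties)) ** 0.5

# stage I: two inputs -> intermediate; stage II: intermediate -> result
u_stage1 = propagate([2.0, 1.0], [0.1, 0.2])
u_final = propagate([0.5], [u_stage1])
```

The stage-wise values (Ied, IIed, ...) in the abstract correspond to exactly this kind of per-stage evaluation.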
Liu, Yipeng
2010-01-01
Compressive sensing (CS) is a technique for estimating a sparse signal from random measurements and the measurement matrix. Traditional sparse signal recovery methods degrade seriously under measurement matrix uncertainty (MMU). Here the MMU is modeled as a bounded additive error. An anti-uncertainty constraint in the form of a mixed l2 and l1 norm is deduced from the sparse signal model with MMU. We then combine the sparse constraint with the anti-uncertainty constraint to obtain an anti-uncertainty sparse signal recovery operator. Numerical simulations demonstrate that the proposed operator achieves better reconstruction performance under MMU than traditional methods.
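The paper's mixed-norm operator is not spelled out in the abstract. As a stand-in, the sketch below recovers a sparse signal from noisy compressive measurements with plain l1-regularized least squares solved by iterative soft thresholding (ISTA); this is the baseline such robust operators improve upon, and all problem sizes are illustrative.

```python
import numpy as np

def ista(A, y, lam, steps=500):
    """Iterative soft thresholding for
        min_x 0.5*||A x - y||_2^2 + lam*||x||_1,
    with step size 1/L, L the Lipschitz constant of the gradient."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - (A.T @ (A @ x - y)) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100)) / np.sqrt(40)     # random measurement matrix
x_true = np.zeros(100)
x_true[[3, 27, 64]] = [2.0, -1.5, 1.0]           # 3-sparse signal
y = A @ x_true + 0.01 * rng.normal(size=40)      # small additive error
x_hat = ista(A, y, lam=0.05)
```

Under a bounded additive MMU, the paper's mixed l2/l1 constraint plays the role of making this type of recovery robust to errors in A itself, not just in y.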
DEFF Research Database (Denmark)
Tosello, Guido; Hansen, Hans Nørgaard; Gasparin, Stefania
2010-01-01
Process capability of micro injection moulding was investigated in this paper by calculating the Cp and Cpk statistics. The uncertainty of both optical and tactile measuring systems employed in the quality control of micro injection moulded products was assessed and compared with the specified tolerances. Limits in terms of manufacturing process capability, as well as of the suitability of such measuring systems for micro production inspection, were quantitatively determined.
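The Cp and Cpk statistics used above are simple functions of the sample mean and standard deviation relative to the tolerance limits. A minimal generic sketch (not the paper's code); the point of the study is that the measurement uncertainty must be small against (USL − LSL) for these indices to be meaningful.

```python
def process_capability(samples, lsl, usl):
    """Cp = (USL - LSL) / 6s  (potential capability, centring ignored)
    Cpk = min(USL - mean, mean - LSL) / 3s  (actual capability)."""
    n = len(samples)
    mean = sum(samples) / n
    sd = (sum((x - mean) ** 2 for x in samples) / (n - 1)) ** 0.5
    cp = (usl - lsl) / (6.0 * sd)
    cpk = min(usl - mean, mean - lsl) / (3.0 * sd)
    return cp, cpk
```

Cp equals Cpk only when the process is centred between the tolerance limits; a large gap between the two flags an off-centre process.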
International Nuclear Information System (INIS)
We suggest uncertainty compensation methods for the quantification of nanoscale indentation using atomic force microscopy (AFM). The main error factors in the force-distance curves originate from the difference between the theoretical and real shape of the AFM tip during nanoscale indentation measurements. To compensate for the uncertainties of tip shape and misalignment of the loading axis, we applied an enhanced tip geometry function and Y-scanner movement to the AFM measurements. Three different materials, Si wafer, glass, and Au film, were characterized with these compensation methods. With the compensation applied, deviations from literature values decreased from 167% to 39% at indentation depths below 100 nm. These compensation methods, applied to thin films, will enable advanced quantitative analysis of hardness measurements using nanoscale indenting AFM. - Highlights: • We suggest uncertainty compensation methods for quantitative hardness measurement. • The main errors during indentation are tip geometry and non-uniform loading. • 3D tip characterization is obtained by using atomic force microscope scan. • The compensation methods perform well in thin films below thickness of 100 nm
EMC Emission Measurement Uncertainty
Institute of Scientific and Technical Information of China (English)
Sun Wei
2013-01-01
This paper analyzes measurement uncertainty in EMC emission testing. Taking the measurement uncertainty of the mains terminal disturbance voltage as an example, it introduces the purpose of evaluating emission measurement uncertainty, the types of uncertainty sources, the method for compiling the uncertainty report, and the use of Ucispr and the measurement uncertainty in the compliance criterion.
Energy Technology Data Exchange (ETDEWEB)
Vinai, P
2007-10-15
For the development, design and licensing of a nuclear power plant (NPP), a sound safety analysis is necessary to study the diverse physical phenomena involved in the system behaviour under operational and transient conditions. Such studies are based on detailed computer simulations. With the progresses achieved in computer technology and the greater availability of experimental and plant data, the use of best estimate codes for safety evaluations has gained increasing acceptance. The application of best estimate safety analysis has raised new problems that need to be addressed: it has become more crucial to assess as to how reliable code predictions are, especially when they need to be compared against safety limits that must not be crossed. It becomes necessary to identify and quantify the various possible sources of uncertainty that affect the reliability of the results. Currently, such uncertainty evaluations are generally based on experts' opinion. In the present research, a novel methodology based on a non-parametric statistical approach has been developed for objective quantification of best-estimate code uncertainties related to the physical models used in the code. The basis is an evaluation of the accuracy of a given physical model achieved by comparing its predictions with experimental data from an appropriate set of separate-effect tests. The differences between measurements and predictions can be considered stochastically distributed, and thus a statistical approach can be employed. The first step was the development of a procedure for investigating the dependence of a given physical model's accuracy on the experimental conditions. Each separate-effect test effectively provides a random sample of discrepancies between measurements and predictions, corresponding to a location in the state space defined by a certain number of independent system variables. As a consequence, the samples of 'errors', achieved from analysis of the entire
Quantifying the Contribution of Post-Processing in Computed Tomography Measurement Uncertainty
DEFF Research Database (Denmark)
Stolfi, Alessandro; Thompson, Mary Kathryn; Carli, Lorenzo;
2016-01-01
This paper evaluates and quantifies the repeatability of post-processing settings, such as surface determination, data fitting, and the definition of the datum system, on the uncertainties of Computed Tomography (CT) measurements. The influence of post-processing contributions was determined by calculating the standard deviation of 10 repeated measurement evaluations on the same data set. The evaluations were performed on an industrial assembly. Each evaluation includes several dimensional and geometrical measurands that were expected to have different responses to the various post-processing settings. It was found that the definition of the datum system had the largest impact on the uncertainty, with a standard deviation of a few microns. The surface determination and data fitting had smaller contributions, with sub-micron repeatability.
King, B
2001-11-01
The new laboratory accreditation standard, ISO/IEC 17025, reflects current thinking on good measurement practice by requiring more explicit and more demanding attention to a number of activities. These include client interactions, method validation, traceability, and measurement uncertainty. Since the publication of the standard in 1999 there has been extensive debate about its interpretation. It is the author's view that if good quality practices are already in place and if the new requirements are introduced in a manner that is fit for purpose, the additional work required to comply with the new requirements can be expected to be modest. The paper argues that the rigour required in addressing the issues should be driven by customer requirements and the factors that need to be considered in this regard are discussed. The issues addressed include the benefits, interim arrangements, specifying the analytical requirement, establishing traceability, evaluating the uncertainty and reporting the information. PMID:11768456
Crespy, Charles; Villate, Denis; Lobios, Olivier
2013-01-01
For the Laser Megajoule (LMJ) facility, an accurate procedure for laser pulse energy measurement is a crucial requirement. In this study, the influence of the measurement procedure on LMJ calorimeter uncertainty is experimentally and numerically investigated. To this end, a 3D thermal model is developed and two experimental techniques are implemented. The metrological characteristics of both techniques are presented. As a first step, the model is validated by comparing numerical and experimental results. Then, the influence of a large number of parameters considered as likely uncertainty sources on the calorimeter response is investigated: wavelength, pulse duration, ambient temperature, laser beam diameter, etc. The post-processing procedure is also examined. The paper provides some of the parameters required to allow a robust and efficient calibration procedure to be produced.
Eadie, Gwendolyn; Harris, William
2016-01-01
We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie, Harris, & Widrow (2015) and Eadie & Harris (2016) and builds upon the preliminary reports by Eadie et al (2015a,c). The method uses a distribution function $f(\\mathcal{E},L)$ to model the galaxy and kinematic data from satellite objects such as globular clusters to trace the Galaxy's gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie & Harris (2016), and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and in...
Directory of Open Access Journals (Sweden)
A. Määttä
2013-09-01
We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top-of-the-atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). The focus is on the uncertainty in the selection among pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies between the modelled and observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
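Bayesian model averaging of the kind used here weights each candidate aerosol model by its posterior probability. A minimal sketch, assuming a uniform prior over models (the log-evidence values are made up for illustration):

```python
import numpy as np

def bma_weights(log_evidences):
    """Posterior model probabilities from (approximate) log marginal
    likelihoods of the candidate aerosol models, assuming a uniform
    model prior; shifted by the maximum for numerical stability."""
    le = np.asarray(log_evidences, dtype=float)
    w = np.exp(le - le.max())
    return w / w.sum()

# Made-up log evidences for three candidate aerosol models:
w = bma_weights([-10.0, -11.0, -15.0])
```

A model-averaged AOT estimate would then be the weighted sum of the per-model retrievals, with the spread across models contributing to the total uncertainty.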
Kissel, J C; Richter, K Y; Fenske, R A
1996-02-01
Estimates of soil adherence to skin are required for assessment of dermal exposures to contaminants in soils. Previously available estimates depend heavily on indirect measurements and/or artificial activities and reflect sampling of hands only. Results are presented here from direct measurement of soil loading on skin surfaces of volunteers before and after normal occupational and recreational activities that might reasonably be expected to lead to soil contact. Skin surfaces assayed included hands, forearms, lower legs, faces and/or feet. Observed hand loadings vary over five orders of magnitude (roughly from 10(-3) to 10(2) mg/cm2) and are dependent upon type of activity. Hand loadings within the current default range of 0.2 to 1.0 mg/cm2 were produced by activities providing opportunity for relatively vigorous soil contact (rugby, farming). Loadings less than 0.2 mg/cm2 were found on hands following activities presenting less opportunity for direct soil contact (soccer, professional grounds maintenance) and on other body parts under many conditions. The default range does not, however, represent a worst case. Children playing in mud on the shore of a lake generated geometric mean loadings well in excess of 1 mg/cm2 on hands, arms, legs, and feet. Post-activity average loadings on hands were typically higher than average loadings on other body parts resulting from the same activity. Hand data from limited activities cannot, however, be used to conservatively predict loadings that might occur on other body surfaces without regard to activity since non-hand loadings attributable to higher contact activities exceeded hand loadings resulting from lower contact activities. Differences between pre- and post-activity loadings also demonstrate that dermal contact with soil is episodic. Typical background (pre-activity) geometric mean loadings appear to be on the order of 10(-2) mg/cm2 or less. Because exposures are activity dependent, quantification of dermal exposure
Vesna Režić Dereani; Marijana Matek Sarić
2010-01-01
The aim of this research is to describe quality control procedures, procedures for validation, and measurement uncertainty (MU) determination as important elements of quality assurance in a food microbiology laboratory for qualitative and quantitative types of analysis. Accreditation is conducted according to the standard ISO 17025:2007, General requirements for the competence of testing and calibration laboratories, which guarantees compliance with standard operating procedures and the tec...
International Nuclear Information System (INIS)
In this work, the systematic errors in temperature measurements in the inlet and outlet headers of the HTPS coolant channels of the Embalse nuclear power plant are evaluated. These uncertainties are necessary for a later evaluation of the maps of channel power transferred to the coolant. The power maps calculated in this way are used to compare power distributions obtained with neutronic codes. Therefore, a methodology to correct the systematic temperature errors in the outlet feeders and inlet headers is developed in this work. (author)
Beyou, Sébastien; Corpetti, Thomas; Gorthi, Sai; Mémin, Etienne
2013-01-01
This paper proposes a novel multi-scale fluid flow data assimilation approach, which integrates and complements the advantages of a Bayesian sequential assimilation technique, the Weighted Ensemble Kalman filter (WEnKF). The data assimilation proposed in this work incorporates measurements brought by an efficient multiscale stochastic formulation of the well-known Lucas-Kanade (LK) estimator. This estimator has the great advantage of providing uncertainties associated t...
Ozawa, M
2003-01-01
The Heisenberg uncertainty principle states that the product of the noise in a position measurement and the momentum disturbance caused by that measurement should be no less than the limit set by Planck's constant, hbar/2, as demonstrated by Heisenberg's thought experiment using a gamma-ray microscope. Here I show that this common assumption is false: a universally valid trade-off relation between the noise and the disturbance has an additional correlation term, which is redundant when the intervention brought by the measurement is independent of the measured object, but which allows the noise-disturbance product much below Planck's constant when the intervention is dependent. A model of measuring interaction with dependent intervention shows that Heisenberg's lower bound for the noise-disturbance product is violated even by a nearly nondisturbing, precise position measuring instrument. An experimental implementation is also proposed to realize the above model in the context of optical quadrature measurement ...
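In Ozawa's notation, with noise ε(Q), disturbance η(P), and pre-measurement standard deviations σ(Q) and σ(P), the universally valid trade-off relation consistent with the abstract's description is usually written as:

```latex
\varepsilon(Q)\,\eta(P) \;+\; \varepsilon(Q)\,\sigma(P) \;+\; \sigma(Q)\,\eta(P) \;\ge\; \frac{\hbar}{2}
```

The two additional terms vanish when the intervention of the apparatus is independent of the measured object, recovering Heisenberg's bound, but when they do not vanish the product ε(Q)η(P) alone can fall below ħ/2.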
Magnusson, Bertil; Ossowicki, Haakan; Rienitz, Olaf; Theodorsson, Elvar
2012-05-01
Healthcare laboratories are increasingly joining into larger laboratory organizations encompassing several physical laboratories. This creates important new opportunities for re-defining the concept of a 'laboratory' to encompass all laboratories and measurement methods measuring the same measurand for a population of patients. In order to make measurement results comparable, bias should be minimized or eliminated and measurement uncertainty properly evaluated for all methods used for a particular patient population. The measurement as well as diagnostic uncertainty can be evaluated from internal and external quality control results using GUM principles. In this paper the uncertainty evaluations are described in detail using only two main components, within-laboratory reproducibility and the uncertainty of the bias component, according to a Nordtest guideline. The evaluation is exemplified for the determination of creatinine in serum for a conglomerate of laboratories, expressed both in absolute units (μmol/L) and relative terms (%). An expanded measurement uncertainty of 12 μmol/L associated with concentrations of creatinine below 120 μmol/L and of 10% associated with concentrations above 120 μmol/L was estimated. The diagnostic uncertainty encompasses both measurement uncertainty and biological variation, and can be estimated for a single value and for a difference. This diagnostic uncertainty for the difference between two samples from the same patient was determined to be 14 μmol/L associated with concentrations of creatinine below 100 μmol/L and 14% associated with concentrations above 100 μmol/L.
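The two-component Nordtest-style combination described above can be sketched as follows; the component values are hypothetical, chosen only to land near the order of magnitude quoted in the abstract:

```python
import math

def expanded_uncertainty(u_rw, u_bias, k=2.0):
    """Nordtest-style combination: within-laboratory reproducibility
    u_rw and the uncertainty of the bias component u_bias are combined
    in quadrature, then expanded with coverage factor k (~95 % coverage)."""
    return k * math.sqrt(u_rw ** 2 + u_bias ** 2)

# Hypothetical component values for creatinine below 120 umol/L:
U = expanded_uncertainty(u_rw=4.0, u_bias=4.5)  # umol/L
```

The same combination can be carried out on relative (%) components for the higher concentration range.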
Han, Eunyoung; Yang, Wonkyung; Lee, Sooyeun; Kim, Eunmi; In, Sangwhan; Choi, Hwakyung; Lee, Sangki; Chung, Heesun; Song, Joon Myong
2011-03-20
The quantitative analysis of 11-nor-D(9)-tetrahydrocannabinol-9-carboxylic acid (THCCOOH) in hair requires a sensitive method to detect a low-pg level. Before applying the method to real hair samples, the method was validated; in this study, we examined the uncertainty obtained from around the cut-off level of THCCOOH in hair. We calculated the measurement uncertainty (MU) of THCCOOH in hair as follows: specification of the measurand, identification of parameters using "cause and effect" diagrams, quantification of the uncertainty contributions using three factors, the uncertainty of weighing the hair sample, the uncertainty from calibrators and the calibration curve, and the uncertainty of the method precision. Finally, we calculated the degrees of freedom and the expanded uncertainty (EU). The concentration of THCCOOH in the hair sample with its EU was (0.60 ± 0.1) × 10(-4)ng/mg. The relative uncertainty percent for the measurand 0.60 × 10(-4)ng was 9.13%. In this study, we also selected different concentrations of THCCOOH in real hair samples and then calculated the EU, the relative standard uncertainty (RSU) of the concentration of THCCOOH in the test sample [u(r)(c0)], the relative uncertainty percent, and the effective degree of freedom (v(eff)). When the concentrations of THCCOOH approached the cut-off level, u(r)(c0) and the relative uncertainty percent increased but absolute EU and v(eff) decreased.
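The three quantified contributions named above (weighing of the hair sample, calibrators and calibration curve, method precision) combine in quadrature before expansion. A minimal sketch with hypothetical relative components:

```python
import math

def combined_relative_uncertainty(u_rels, k=2.0):
    """Root-sum-of-squares of independent relative standard
    uncertainties (here: sample weighing, calibrators/calibration
    curve, method precision), followed by expansion with factor k."""
    u_c = math.sqrt(sum(u ** 2 for u in u_rels))
    return u_c, k * u_c

# Hypothetical relative components near the THCCOOH cut-off level:
u_rel, U_rel = combined_relative_uncertainty([0.02, 0.07, 0.05])
```

In a full evaluation, k would be taken from the Student-t distribution at the effective degrees of freedom (Welch-Satterthwaite) rather than fixed at 2.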
Attributes identification of nuclear material by non-destructive radiation measurement methods
International Nuclear Information System (INIS)
Nuclear materials should be controlled under the regulation of the National Safeguard System. Non-destructive analysis methods, which are simple and quick, provide an effective process for determining nuclear materials, nuclear scraps and wastes. The methods play a very important role in the fields of nuclear material control and physical protection against the illegal removal and smuggling of nuclear material. The application of non-destructive analysis in attribute identification of nuclear material is briefly described in this paper. The attributes determined by radioactive detection techniques are useful tools for identifying the characteristics of special nuclear material (isotopic composition, enrichment, etc.). (author)
Assessment of adaptation measures to high-mountain risks in Switzerland under climate uncertainties
Muccione, Veruska; Lontzek, Thomas; Huggel, Christian; Ott, Philipp; Salzmann, Nadine
2015-04-01
The economic evaluation of different adaptation options is important to support policy-makers who need to set priorities in the decision-making process. However, the decision-making process faces considerable uncertainties regarding current and projected climate impacts. First, physical climate and related impact systems are highly complex and not fully understood. Second, the further we look into the future, the more important the emission pathways become, with effects on the frequency and severity of climate impacts. Decisions on adaptation measures taken today and in the future must be able to adequately consider the uncertainties originating from the different sources. Decisions are not taken in a vacuum but always in the context of specific social, economic, institutional and political conditions. Decision-finding processes strongly depend on the socio-political system and usually have evolved over some time. Finding and taking decisions in the respective socio-political and economic context multiplies the uncertainty challenge. Our presumption is that a sound assessment of the different adaptation options in Switzerland under uncertainty necessitates formulating and solving a dynamic, stochastic optimization problem. Economic optimization models in the field of climate change are not new. Typically, such models are applied to global-scale studies but barely to local-scale problems. In this analysis, we considered the case of the Guttannen-Grimsel Valley, situated in the Swiss Bernese Alps. The alpine community has been affected by high-magnitude, high-frequency debris flows that started in 2009 and were historically unprecedented. They were related to thaw of permafrost in the rock slopes of Ritzlihorn and repeated rock fall events that accumulated at the debris fan and formed a sediment source for debris flows transported downvalley. An important transit road, a trans-European gas pipeline and settlements were severely affected and partly
Accounting for uncertainty in volumes of seabed change measured with repeat multibeam sonar surveys
Schimel, Alexandre C. G.; Ierodiaconou, Daniel; Hulands, Lachlan; Kennedy, David M.
2015-12-01
Seafloors of unconsolidated sediment are highly dynamic features; eroding or accumulating under the action of tides, waves and currents. Assessing which areas of the seafloor experienced change and measuring the corresponding volumes involved provide insights into these important active sedimentation processes. Computing the difference between Digital Elevation Models (DEMs) obtained from repeat Multibeam Echosounder (MBES) surveys has become a common technique to identify these areas, but the uncertainty in these datasets considerably affects the estimation of the volumes displaced. The two main techniques used to take uncertainty into account in volume estimations are the limitation of calculations to areas experiencing a change in depth beyond a chosen threshold, and the computation of volumetric confidence intervals. However, these techniques are still in their infancy and, as a result, are often crude, seldom used or poorly understood. In this article, we explored a number of possible methodological advances to address this issue, including: (1) using the uncertainty information provided by the MBES data processing algorithm CUBE, (2) adapting fluvial geomorphology techniques for volume calculations using spatially variable thresholds and (3) volumetric histograms. The nearshore seabed off Warrnambool harbour - located on the highly energetic southwest Victorian coast, Australia - was used as a test site. Four consecutive MBES surveys were carried out over a four-month period. The difference between consecutive DEMs revealed an area near the beach experiencing large sediment transfers - mostly erosion - and an area of reef experiencing increasing deposition from the advance of a nearby sediment sheet. The volumes of sediment displaced in these two areas were calculated using the techniques described above, both traditionally and using the suggested improvements. We compared the results and discussed the applicability of the new methodological improvements
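The thresholded volume computation described above can be sketched as follows. This sketch uses a uniform level of detection; the spatially variable thresholds discussed in the article would replace the scalar `lod` with an array of the same shape as the DEMs:

```python
import numpy as np

def dod_volumes(dem_new, dem_old, cell_area, lod):
    """Deposition and erosion volumes from a DEM of Difference (DoD),
    counting only cells whose absolute depth change exceeds the
    level of detection `lod`."""
    dod = dem_new - dem_old
    significant = np.abs(dod) > lod
    deposition = dod[significant & (dod > 0)].sum() * cell_area
    erosion = -dod[significant & (dod < 0)].sum() * cell_area
    return deposition, erosion

# Hypothetical 3 x 3 grid of 1 m^2 cells, 0.05 m level of detection:
old = np.zeros((3, 3))
new = np.array([[0.20, 0.00, -0.30],
                [0.01, 0.10, 0.00],
                [-0.02, 0.00, 0.40]])
dep, ero = dod_volumes(new, old, cell_area=1.0, lod=0.05)
```

Cells with changes of 0.01 m and -0.02 m fall below the threshold and are excluded from both volumes.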
DEFF Research Database (Denmark)
Quagliotti, Danilo; Tosello, Guido; Islam, Aminul;
2015-01-01
Surface texture and step height measurements of electrochemically machined cavities have been compared across optical and tactile instruments. A procedure is introduced for correcting possible divergences among the instruments and, ultimately, for evaluating the measurement uncertainty according...
Lindley, Dennis V
2013-01-01
Praise for the First Edition ""...a reference for everyone who is interested in knowing and handling uncertainty.""-Journal of Applied Statistics The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.
Coetzee, Melinde
2014-01-01
This study reports the development and validation of the Graduate Skills and Attributes Scale which was initially administered to a random sample of 272 third-year-level and postgraduate-level, distance-learning higher education students. The data were analysed using exploratory factor analysis. In a second study, the scale was administered to a…
Invited Review Article: Error and uncertainty in Raman thermal conductivity measurements.
Beechem, Thomas; Yates, Luke; Graham, Samuel
2015-04-01
Error and uncertainty in Raman thermal conductivity measurements are investigated via finite element based numerical simulation of two geometries often employed—Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter—termed the Raman stress factor—is derived to identify when stress effects will induce large levels of error. Taken together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Coutand, M; Cyr, M; Clastres, P
2011-10-01
When mineral wastes are reused in construction materials, current practice is to evaluate their environmental impact using standard leaching tests. However, due to the uncertainty of the measurement, it is usually quite difficult to compare the pollutant potential with that of other materials or with threshold limits. The aim of this paper is to give a quantitative evaluation of the uncertainty of leachate concentrations of cement-based materials as a function of the number of tests performed. The relative standard deviations and relative confidence intervals are determined using experimental data in order to give a global evaluation of the uncertainty of leachate concentrations (determination of the total relative standard deviation). Various combinations were examined in order to identify the origin of the large dispersion of the results (determination of the relative standard deviations linked to the analytical measurement and to the leaching procedure), a generalisation was suggested, and the results were compared to the literature. A practical example is given concerning the introduction of a residue (meat and bone meal bottom ash, MBM-BA) into mortar; leaching tests were carried out on various samples with and without the MBM-BA residue. In conclusion, large dispersions were observed, mainly due to the heterogeneity of the materials. Heightened attention is therefore needed when analysing leaching results on cement-based materials, and further tests (e.g. ecotoxicology) should be performed to evaluate the environmental effect of these materials.
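The dependence of the relative confidence interval on the number of tests follows the usual Student-t half-width. A sketch with a hypothetical total relative standard deviation (the t-quantiles are the standard tabulated two-sided 95 % values):

```python
import math

# Two-sided 95 % Student-t quantiles for n - 1 degrees of freedom:
T95 = {3: 4.30, 5: 2.78, 10: 2.26}

def relative_ci_halfwidth(rsd_total, n):
    """95 % relative confidence half-width on the mean leachate
    concentration from n replicate leaching tests, given the total
    relative standard deviation (expressed as a fraction)."""
    return T95[n] * rsd_total / math.sqrt(n)

# Hypothetical total RSD of 30 % for a heterogeneous material:
halfwidths = {n: relative_ci_halfwidth(0.30, n) for n in (3, 5, 10)}
```

The half-width shrinks roughly as 1/sqrt(n), which is why a single leaching test on a heterogeneous material says little about compliance with a threshold.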
Improving parton distribution uncertainties in a W mass measurement at the LHC
Sullivan, Zack
2015-01-01
We reexamine the dominant contribution of parton distribution function (PDF) uncertainties to the W mass measurement, and determine their contribution to be ±39 (30) MeV when running the Large Hadron Collider at 7 (13) TeV. We find that spurious correlations in older PDF sets led to over-optimistic assumptions regarding normalization to Z observables. In order to understand the origin of the large uncertainties, we break down the contribution of the PDF errors into effects at the hard matrix element level, in showering, and in sensitivity to finite detector resolutions. Using CT10, CT10W, and charm-enhanced PDF sets in comparison to older PDF sets, we develop a robust analysis that examines correlations between transverse mass reconstructions of W and Z decays (scaled by cos $\theta_W$) to leptons. We find that central leptons (|$\eta_l$| < 1.3) from W and Z bosons carry the most weight in reducing the PDF uncertainty, and estimate that a PDF error of +10/-12 MeV is achievable in a W mass measurement at the LHC. Fur...
Bianco, F. B.; Modjaz, M.; Oh, S. M.; Fierroz, D.; Liu, Y. Q.; Kewley, L.; Graur, O.
2016-07-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity calibrators, based on the original IDL code of Kewley and Dopita (2002) with updates from Kewley and Ellison (2008), and expanded to include more recently developed calibrators. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios (referred to as indicators) in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo sampling, better characterizes the statistical oxygen abundance confidence region including the effect due to the propagation of observational uncertainties. These uncertainties are likely to dominate the error budget in the case of distant galaxies, hosts of cosmic explosions. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 15 metallicity calibrators simultaneously, as well as for E(B-V), and estimates their median values and their 68% confidence regions. We provide the option of outputting the full Monte Carlo distributions, and their Kernel Density estimates. We test our code on emission line measurements from a sample of nearby supernova host galaxies (z https://github.com/nyusngroup/pyMCZ.
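The Monte Carlo propagation that pyMCZ performs can be sketched in miniature. The "calibration" below is a toy logarithmic ratio, NOT one of the real strong-line calibrators, and the line names, fluxes and errors are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_abundance(flux, err, n=100_000):
    """Monte Carlo propagation in the spirit of pyMCZ: perturb each
    measured line flux with Gaussian noise, recompute the indicator
    for every draw, and summarize the resulting distribution with
    the median and the 16th/84th percentiles (68 % region)."""
    samples = {k: rng.normal(flux[k], err[k], n) for k in flux}
    # Toy indicator, purely illustrative (not a published calibrator):
    z = 8.7 - 0.3 * np.log10(samples["OIII"] / samples["NII"])
    p16, p50, p84 = np.percentile(z, [16, 50, 84])
    return p50, (p16, p84)

med, (lo, hi) = mc_abundance({"OIII": 3.0, "NII": 1.0},
                             {"OIII": 0.3, "NII": 0.1})
```

Because the indicator is nonlinear in the fluxes, the sampled distribution can be asymmetric, which is exactly why percentile-based confidence regions are reported instead of a single Gaussian error bar.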
Error Analysis and Measurement Uncertainty for a Fiber Grating Strain-Temperature Sensor
Jian-Neng Wang; Jaw-Luen Tang
2010-01-01
A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10−6 ε and 3.59 × 10−5 ε + 0.0...
An Evaluation of Test and Physical Uncertainty of Measuring Vibration in Wooden Junctions
DEFF Research Database (Denmark)
Dickow, Kristoffer Ahrens; Kirkegaard, Poul Henning; Andersen, Lars Vabbersgaard
2012-01-01
In the present paper a study of test and material uncertainty in modal analysis of certain wooden junctions is presented. The main structure considered here is a T-junction made from a particleboard plate connected to a spruce beam of rectangular cross section. The size of the plate is 1.2 m by 0.6 m. The T-junctions represent cut-outs of actual full-size floor assemblies. The aim of the experiments is to investigate the underlying uncertainties of both the test method as well as variation in material and craftsmanship. For this purpose, ten nominally identical junctions are tested and compared to each other in terms of modal parameters such as natural frequencies, mode shapes and damping. Considerations regarding the measurement procedure and test setup are discussed. The results indicate a large variation of the response at modes where the coupling of torsion in the beam to bending of the plate...
Robust framework for PET image reconstruction incorporating system and measurement uncertainties.
Directory of Open Access Journals (Sweden)
Huafeng Liu
In Positron Emission Tomography (PET), an optimal estimate of the radioactivity concentration is obtained from the measured emission data under certain criteria. So far, all the well-known statistical reconstruction algorithms require an exactly known system probability matrix a priori, and the quality of such a system model largely determines the quality of the reconstructed images. In this paper, we propose an algorithm for PET image reconstruction for the real-world case where the PET system model is subject to uncertainties. The method treats PET reconstruction as a regularization problem, and the image estimation is achieved by means of an uncertainty-weighted least squares framework. The performance of our work is evaluated with Shepp-Logan simulated and real phantom data, which demonstrates significant improvements in image quality over least squares reconstruction efforts.
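An uncertainty-weighted least squares estimate of this kind can be sketched on a toy system; the diagonal weighting and Tikhonov regularization here are illustrative stand-ins, not the paper's exact formulation:

```python
import numpy as np

def uwls_reconstruct(A, y, w, reg=1e-3):
    """Uncertainty-weighted, Tikhonov-regularized least squares:
    minimize ||W^(1/2) (A x - y)||^2 + reg * ||x||^2, where the
    weights w down-weight measurements (rows of the system model)
    with larger uncertainty."""
    W = np.diag(w)
    H = A.T @ W @ A + reg * np.eye(A.shape[1])
    return np.linalg.solve(H, A.T @ W @ y)

# Tiny 2-pixel image, 3 detector measurements:
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 1.0])
y = A @ x_true
# Third measurement assumed less reliable, so it gets half weight:
x_hat = uwls_reconstruct(A, y, w=np.array([1.0, 1.0, 0.5]))
```

With consistent data the weighted solution recovers the true image up to the small regularization bias; with noisy data, the weights keep uncertain rows of the system model from dominating the fit.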
The GUM revision: the Bayesian view toward the expression of measurement uncertainty
Lira, I.
2016-03-01
The ‘Guide to the Expression of Uncertainty in Measurement’ (GUM) has been in use for more than 20 years, serving its purposes worldwide at all levels of metrology, from scientific to industrial and commercial applications. However, the GUM presents some inconsistencies, both internally and with respect to its two later Supplements. For this reason, the Joint Committee for Guides in Metrology, which is responsible for these documents, has decided that a major revision of the GUM is needed. This will be done by following the principles of Bayesian statistics, a concise summary of which is presented in this article. Those principles should be useful in physics and engineering laboratory courses that teach the fundamentals of data analysis and measurement uncertainty evaluation.
Santelices, Maria Veronica; Ugarte, Juan Jose; Flotts, Paulina; Radovic, Darinka; Kyllonen, Patrick
2011-01-01
This paper presents the development and initial validation of new measures of critical thinking and noncognitive attributes that were designed to supplement existing standardized tests used in the admissions system for higher education in Chile. The importance of various facets of this process, including the establishment of technical rigor and…
Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei
2016-04-01
In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data is correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best practice knowledge can be limiting factors in a correct rain gauge network management. In these cases, the accuracy of rain gauges can drastically drop and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors in the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through 1) block kriging on a single rain gauge 2) ordinary kriging on a network of different rain gauges 3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all the cases increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher and lower quality rain gauges. For the kriging with
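Inflating the semivariogram nugget by the gauge error variance can be sketched as follows; the exponential model, the additive form of the inflation, and all parameter values are assumptions for illustration, not the study's fitted values:

```python
import numpy as np

def exponential_semivariogram(h, nugget, sill, corr_range):
    """Exponential semivariogram model gamma(h); the nugget carries
    the measurement-error variance of the rain gauge network."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / corr_range))

def error_adjusted_nugget(base_nugget, gauge_error_var):
    """Inflate the nugget by the estimated rain gauge error variance
    (additive form assumed here for illustration)."""
    return base_nugget + gauge_error_var

h = np.linspace(0.0, 30.0, 7)  # lag distance, km
gamma = exponential_semivariogram(h, error_adjusted_nugget(0.1, 0.25),
                                  sill=2.0, corr_range=15.0)
```

Using a larger inflation for the lower-quality networks and a smaller one for the KNMI gauges gives each network its own semivariogram, which is how the separate characterisation described above is achieved.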
Xu, Shenghua; Sun, Zhiwei
2010-05-18
The forward scattering light (FSL) received by the detector can cause uncertainties in turbidity measurement of the coagulation rate of colloidal dispersion, and this effect becomes more significant for large particles. In this study, the effect of FSL is investigated on the basis of calculations using the T-matrix method, an exact technique for the computation of nonspherical scattering. The theoretical formulation and relevant numerical implementation for predicting the contribution of FSL in the turbidity measurement is presented. To quantitatively estimate the degree of the influence of FSL, an influence ratio comparing the contribution of FSL to the pure transmitted light in the turbidity measurement is introduced. The influence ratios evaluated under various parametric conditions and the relevant analyses provide a guideline for properly choosing particle size, measuring wavelength to minimize the effect of FSL in turbidity measurement of coagulation rate.
Santos, T. Q.; Alvarenga, A. V.; Oliveira, D. P.; Mayworm, R. C.; Souza, R. M.; Costa-Félix, R. P. B.
2016-07-01
Speed of sound is an important quantity for characterizing reference materials for ultrasonic applications, for instance. The alignment between the transducer and the test body is a key activity for performing reliable and consistent measurements. The aim of this work is to evaluate the influence of the alignment system on the expanded uncertainty of such measurements. A stainless steel cylinder was previously calibrated on an out-of-water system typically used for calibration of non-destructive testing blocks. Afterwards, the cylinder was calibrated underwater with two distinct alignment systems: fixed and mobile. The values were statistically compared to the out-of-water measurement, considered the gold standard for this application. For both alignment systems the normalized error was less than 0.8, leading to the conclusion that the measurement systems (underwater and out-of-water) do not diverge significantly. The gold-standard uncertainty was 2.7 m·s⁻¹, whilst the fixed underwater system resulted in 13 m·s⁻¹ and the mobile alignment system achieved 6.6 m·s⁻¹. After this validation of the underwater system for speed-of-sound measurement, it will be applied to certify Encapsulated Tissue Mimicking Material as a reference material for biotechnology applications.
Theodorou, Dimitrios; Meligotsidou, Loukia; Karavoltsos, Sotirios; Burnetas, Apostolos; Dassenakis, Manos; Scoullos, Michael
2011-02-15
The propagation stage of uncertainty evaluation, known as the propagation of distributions, is in most cases approached by the GUM (Guide to the Expression of Uncertainty in Measurement) uncertainty framework, which is based on the law of propagation of uncertainty assigned to various input quantities and the characterization of the measurand (output quantity) by a Gaussian or a t-distribution. Recently, a Supplement to the ISO-GUM was prepared by the JCGM (Joint Committee for Guides in Metrology). This Guide gives guidance on propagating probability distributions assigned to various input quantities through a numerical simulation (Monte Carlo Method) and determining a probability distribution for the measurand. In the present work the two approaches were used to estimate the uncertainty of the direct determination of cadmium in water by graphite furnace atomic absorption spectrometry (GFAAS). The expanded uncertainty results (at the 95% confidence level) obtained with the GUM Uncertainty Framework and the Monte Carlo Method at the concentration level of 3.01 μg/L were ±0.20 μg/L and ±0.18 μg/L, respectively. Thus, the GUM Uncertainty Framework slightly overestimates the overall uncertainty, by about 10%. Even after taking into account additional sources of uncertainty that the GUM Uncertainty Framework considers negligible, the Monte Carlo Method again gives the same uncertainty result (±0.18 μg/L). The main source of this difference is the approximation used by the GUM Uncertainty Framework in estimating the standard uncertainty of the calibration curve produced by least squares regression. Although the GUM Uncertainty Framework proves adequate in this particular case, the Monte Carlo Method generally has features that avoid the assumptions and limitations of the GUM Uncertainty Framework.
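The two propagation routes compared in the abstract can be illustrated on a toy multiplicative model (the inputs below are invented, not the paper's cadmium/GFAAS budget): first-order GUM combination of relative uncertainties versus brute-force Monte Carlo sampling in the spirit of GUM Supplement 1.

```python
import math
import random

def gum_vs_mcm(n_mc=200_000, seed=1):
    """Toy comparison of the GUM framework (first-order propagation) with
    the Monte Carlo Method for a simple model y = (a / b) * c.
    Input values and uncertainties are illustrative assumptions."""
    a, u_a = 3.00, 0.05   # e.g. instrument reading
    b, u_b = 1.00, 0.02   # e.g. recovery factor
    c, u_c = 1.00, 0.03   # e.g. dilution factor
    y = a / b * c
    # GUM: combined standard uncertainty from relative uncertainties
    u_gum = y * math.sqrt((u_a / a) ** 2 + (u_b / b) ** 2 + (u_c / c) ** 2)
    # MCM: propagate full distributions, take the sample standard deviation
    rng = random.Random(seed)
    ys = [rng.gauss(a, u_a) / rng.gauss(b, u_b) * rng.gauss(c, u_c)
          for _ in range(n_mc)]
    m = sum(ys) / n_mc
    u_mcm = math.sqrt(sum((v - m) ** 2 for v in ys) / (n_mc - 1))
    return y, u_gum, u_mcm
```

For this nearly linear model the two estimates agree closely; the disagreement reported in the paper arises where the GUM's first-order approximation is stressed, e.g. in the least-squares calibration-curve term.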
Sampogna, Francesca; Tabolli, Stefano; Giannantoni, Patrizia; Paradisi, Andrea; Abeni, Damiano
2016-01-01
Skin conditions often have a severe impact on the physical and psychosocial domains of patients' quality of life, but the relationship between these domains has been studied little. This study estimated the fraction of psychosocial burden that may be attributable to symptoms, using the Skindex-17 quality of life questionnaire (symptoms and psychosocial scales) in 2,487 outpatients. The excess proportion of psychosocial burden for each skin condition was computed. Overall, 79.8% of the psychosocial burden of patients with severe symptoms may be attributable to the symptoms. For patients with mild symptoms this figure is 49.7%. A great heterogeneity was observed, from -0.9% for patients with scars, up to more than 90% for conditions such as lichen planus and psoriasis. While these results will have to be confirmed in longitudinal studies, they seem to indicate that, by targeting specific symptoms, a substantial portion of the psychosocial burden of skin diseases could be spared. PMID:25766753
Energy Technology Data Exchange (ETDEWEB)
1978-07-10
Uncertainties with regard to many facets of repository site characterization have not yet been quantified. This report summarizes the state of knowledge of uncertainties in the measurement of porosity, hydraulic conductivity, and hydraulic gradient; uncertainties associated with various geophysical field techniques; and uncertainties associated with the effects of exploration and exploitation activities in bedded salt basins. The potential for seepage through a depository in bedded salt or shale is reviewed and, based upon the available data, generic values for the hydraulic conductivity and porosity of bedded salt and shale are proposed.
Retrievals and uncertainty analysis of aerosol single scattering albedo from MFRSR measurements
International Nuclear Information System (INIS)
Aerosol single scattering albedo (SSA) can be retrieved from the ratio of diffuse horizontal and direct normal fluxes measured by a multifilter rotating shadowband radiometer (MFRSR). In this study, the measurement channels at 415 nm and 870 nm are selected for aerosol optical depth (AOD) and Angstrom coefficient retrievals, and the measurements at 415 nm are used for aerosol SSA retrievals with the constraint of the retrieved Angstrom coefficient. We extensively assessed various issues impacting the accuracy of SSA retrieval, from measurements to input parameters and assumptions. For cloud-free days with mean aerosol loading of 0.13–0.60, our sensitivity study indicated that: (1) 1% calibration uncertainty can result in 0.8–3.7% changes in retrieved SSA; (2) neglecting the cosine response correction and/or the forward scattering correction results in underestimations of 1.1–3.3% and/or 0.73% in retrieved SSA; (3) an overestimation of 0.1 in asymmetry factor can result in an underestimation of 2.54–3.4% in retrieved SSA; (4) for small aerosol loading (e.g., 0.13), the uncertainty associated with the choice of Rayleigh optical depth value can result in a non-negligible change in retrieved SSA (e.g., 0.015); (5) an uncertainty of 0.05 in surface albedo can result in changes of 1.49–5.4% in retrieved SSA. We applied the retrieval algorithm to the MFRSR measurements at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site. The retrieved results of AOD, Angstrom coefficient, and SSA are basically consistent with other independent measurements from co-located instruments at the site. - Highlights: • Aerosol SSA is derived from the MFRSR-measured diffuse to direct normal irradiance ratio. • We extensively assessed various issues impacting the accuracy of SSA retrieval. • The issues stem mainly from measurements and model input parameters and assumptions. • We applied the retrieval algorithm to the MFRSR measurements at ARM SGP
DEFF Research Database (Denmark)
Kolarik, Jakub; Olesen, Bjarne W.
2015-01-01
European Standard EN 15251 in its current version does not provide any guidance on how to handle uncertainty of long-term measurements of indoor environmental parameters used for classification of buildings. The objective of the study was to analyse the uncertainty of field measurements of operative temperature and evaluate its effect on categorization of thermal environment according to EN 15251. A data-set of field measurements of operative temperature in four office buildings situated in Denmark, Italy and Spain was used. Data for each building included approx. one year of continuous measurements of operative temperature at two measuring points (south/south-west and north/north-east orientation). Results of the present study suggest that measurement uncertainty needs to be considered during assessment of thermal environment in existing buildings. When expanded standard uncertainty was taken…
DEFF Research Database (Denmark)
Hiller, Jochen; Reindl, Leonard M
2012-01-01
The knowledge of measurement uncertainty is of great importance in conformance testing in production. The tolerance limit for production must be reduced by the amount of measurement uncertainty to ensure that the parts are in fact within the tolerance. Over the last 5 years, industrial X-ray computed tomography (CT) has become an important technology for dimensional quality control. In this paper a computer simulation platform is presented which is able to investigate error sources in dimensional CT measurements. The typical workflow in industrial CT metrology is described and methods for estimating measurement uncertainties are briefly discussed. As we will show, the developed virtual CT (VCT) simulator can be adapted to various scanner systems, providing realistic CT data. Using the Monte Carlo method (MCM), measurement uncertainties for a given measuring task can be estimated, taking…
Directory of Open Access Journals (Sweden)
P. A. Solignac
2009-11-01
The use of scintillometers to determine sensible heat fluxes is now common in studies of land-atmosphere interactions. The main interest in these instruments is due to their ability to quantify energy distributions at the landscape scale, as they can calculate sensible heat flux values over long distances, in contrast to Eddy Covariance systems. However, scintillometer data do not provide a direct measure of sensible heat flux, but require additional data, such as the Bowen ratio (β), to provide flux values. The Bowen ratio can either be measured using Eddy Covariance systems or derived from the energy balance closure. In this work, specific requirements for estimating energy fluxes using a scintillometer were analyzed, as well as the accuracy of two flux calculation methods. We first focused on the classical method (used in standard software) and analysed the impact of the Bowen ratio on flux value and uncertainty. For instance, an averaged Bowen ratio (β) of less than 1 proved to be a significant source of measurement uncertainty. An alternative method, called the "β-closure method", for which the Bowen ratio measurement is not necessary, was also tested. In this case, it was observed that even for low β values, flux uncertainties were reduced and scintillometer data were well correlated with the Eddy Covariance results. Both methods should tend to the same results, but the second one slightly underestimates H (<5%) as β decreases.
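The sensitivity of the classical method to the Bowen ratio follows directly from the textbook energy-balance partition H = β/(1+β)·(Rn − G). A small illustrative sketch (all numbers invented), with first-order propagation of the β uncertainty:

```python
def sensible_heat(rn, g, beta):
    """Partition available energy with the Bowen ratio:
    Rn - G = H + LE and beta = H / LE  =>  H = beta / (1 + beta) * (Rn - G).
    Fluxes in W/m^2; values used below are illustrative only."""
    return beta / (1.0 + beta) * (rn - g)

def h_uncertainty(rn, g, beta, u_beta):
    # First-order propagation: dH/dbeta = (Rn - G) / (1 + beta)^2
    return abs((rn - g) / (1.0 + beta) ** 2) * u_beta
```

The relative uncertainty u_H/H = u_β/(β(1+β)) grows as β shrinks, consistent with the finding above that an averaged β below 1 is a significant source of measurement uncertainty.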
Directory of Open Access Journals (Sweden)
P. A. Solignac
2009-06-01
The use of scintillometers to determine sensible heat fluxes is now common in studies of land-atmosphere interactions. The main interest in these instruments is due to their ability to quantify energy distributions at the landscape scale, as they can calculate sensible heat flux values over long distances, in contrast to Eddy Correlation systems. However, scintillometer data do not provide a direct measure of sensible heat flux, but require additional data, such as the Bowen ratio (β), to provide flux values. The Bowen ratio can either be measured using Eddy Correlation systems or derived from the energy balance closure. In this work, specific requirements for estimating energy fluxes using a scintillometer were analyzed, as well as the accuracy of two flux calculation methods. We first focused on the classical method (used in standard software). We analysed the impact of the Bowen ratio according to both time averaging and ratio values; for instance, an averaged Bowen ratio (β) of less than 1 proved to be a significant source of measurement uncertainty. An alternative method, called the "β-closure method", for which the Bowen ratio measurement is not necessary, was also tested. In this case, it was observed that even for low β values, flux uncertainties were reduced and scintillometer data were well correlated with the Eddy Correlation results.
Zhang, Shiqiang; Guo, Changsong
2007-01-01
Glucose and L-glutamic acid were used to prepare the standard substance in a 1:1 ratio, and artificial seawater and the standard substance were used to prepare a series of standard solutions. The distribution pattern of uncertainty in measurement of seawater COD was obtained from the measured results of the series of standard solutions, using the potassium iodide-alkaline potassium permanganate determination method. The distribution pattern is as follows: uncertainty in measurement is...
Energy Technology Data Exchange (ETDEWEB)
Soimakallio, S.
2012-08-15
Ambitious climate change mitigation requires the implementation of effective and equitable climate policy and GHG emission reduction measures. The objective of this study was to explore the significance of the uncertainties related to GHG emission reduction measures and policies by providing viewpoints on biofuels production, grid electricity consumption and differentiation of emission reduction commitments between countries and country groups. Life cycle assessment (LCA) and macro-level scenario analysis through top-down and bottom-up modelling and cost-effectiveness analysis (CEA) were used as methods. The uncertainties were propagated in a statistical way through parameter variation, scenario analysis and stochastic modelling. This study showed that, in determining GHG emissions at product or process level, there are significant uncertainties due to parameters such as nitrous oxide emissions from soil, soil carbon changes and emissions from electricity production; and due to methodological choices related to the spatial and temporal system boundary setting and selection of allocation methods. Furthermore, the uncertainties due to modelling may be of central importance. For example, when accounting for biomass-based carbon emissions to and sequestration from the atmosphere, consideration of the temporal dimension is critical. The outcomes in differentiation of GHG emission reduction commitments between countries and country groups are critically influenced by the quality of data and criteria applied. In both LCA and effort sharing, the major issues are equitable attribution of emissions and emission allowances on the one hand and capturing consequences of measures and policies on the other. As LCA and system level top-down and bottom-up modelling results are increasingly used to justify various decisions by different stakeholders such as policy-makers and consumers, harmonization of practices, transparency and the handling of uncertainties related to
Estimation of Uncertainty in Tracer Gas Measurement of Air Change Rates
Directory of Open Access Journals (Sweden)
Atsushi Iizuka
2010-12-01
Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is made more complex by the fact that many buildings are not a single fully mixed zone. This means many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of air change rates in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of
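For a single fully mixed zone, the tracer-gas decay relation C(t) = C0·e^(−Nt) gives the air change rate N from the slope of ln C versus t. A minimal sketch of that baseline case only (the multi-zone analysis in the study requires more tracers and measurements than this):

```python
import math

def ach_from_decay(times_h, conc, c_bg=0.0):
    """Estimate the air change rate N (1/h) of one fully mixed zone from
    tracer-gas decay, C(t) = C0 * exp(-N t): least-squares slope of
    ln(C - background) versus time. Illustrative single-zone sketch."""
    y = [math.log(c - c_bg) for c in conc]
    n = len(times_h)
    mx = sum(times_h) / n
    my = sum(y) / n
    slope = (sum((x - mx) * (v - my) for x, v in zip(times_h, y))
             / sum((x - mx) ** 2 for x in times_h))
    return -slope  # air changes per hour
```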
Solar Irradiances Measured using SPN1 Radiometers: Uncertainties and Clues for Development
Energy Technology Data Exchange (ETDEWEB)
Badosa, Jordi; Wood, John; Blanc, Philippe; Long, Charles N.; Vuilleumier, Laurent; Demengel, Dominique; Haeffelin, Martial
2014-12-08
The fast development of solar radiation and energy applications, such as photovoltaic and solar thermodynamic systems, has increased the need for solar radiation measurement and monitoring, not only of the global component but also of the diffuse and direct components. End users look for the best compromise between getting close to state-of-the-art measurements and keeping capital, maintenance and operating costs to a minimum. Among the existing commercial options, the SPN1 is a relatively low-cost solar radiometer that estimates global and diffuse solar irradiances from seven thermopile sensors under a shading mask and without moving parts. This work presents a comprehensive study of SPN1 accuracy and sources of uncertainty, drawing on laboratory experiments, numerical modeling and comparison studies between measurements from this sensor and state-of-the-art instruments at six diverse sites. Several clues are provided for improving the SPN1 accuracy and agreement with state-of-the-art measurements.
A First Look at the Impact of NNNLO Theory Uncertainties on Top Mass Measurements at the ILC
Simon, Frank
2016-01-01
A scan of the top production threshold at a future electron-positron collider provides the possibility for a precise measurement of the top quark mass in theoretically well-defined mass schemes. With statistical uncertainties of 20 MeV or below, systematics will likely dominate the total uncertainty of the measurement. This contribution presents a first look at the impact of the renormalization scale uncertainties in recent NNNLO calculations of the top pair production cross section in the threshold region on the measurement of the top quark mass at the International Linear Collider.
International Nuclear Information System (INIS)
The uncertainties in parton distribution functions (PDFs) are the dominant source of systematic uncertainty in precision measurements of electroweak parameters at hadron colliders (e.g. sin²θ_eff(M_Z), sin²θ_W = 1 − M_W²/M_Z², and the mass of the W boson). We show that measurements of the forward-backward charge asymmetry (A_FB(M, y)) of Drell-Yan dilepton events produced at hadron colliders provide a new powerful tool to reduce the PDF uncertainties in these measurements. (orig.)
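The asymmetry itself is elementary to estimate from event counts; a toy sketch of A_FB and its binomial statistical uncertainty (the counts below are invented):

```python
import math

def afb(n_forward, n_backward):
    """Forward-backward asymmetry and its binomial statistical uncertainty:
    A_FB = (F - B) / (F + B), sigma_A = sqrt((1 - A^2) / N),
    since F ~ Binomial(N, (1 + A) / 2). Illustrative only."""
    n = n_forward + n_backward
    a = (n_forward - n_backward) / n
    return a, math.sqrt((1.0 - a * a) / n)
```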
International target values 2000 for measurement uncertainties in safeguarding nuclear materials
International Nuclear Information System (INIS)
The IAEA has prepared a revised and updated version of International Target Values (ITVs) for uncertainty components in measurements of nuclear material. The ITVs represent uncertainties to be considered in judging the reliability of analytical techniques applied to industrial nuclear and fissile material subject to safeguards verification. The tabulated values represent estimates of the 'state of the practice' which ought to be achievable under routine conditions by adequately equipped, experienced laboratories. The ITVs 2000 are intended to be used by plant operators and safeguards organizations as a reference of the quality of measurements achievable in nuclear material accountancy, and for planning purposes. The IAEA prepared a draft of a technical report presenting the proposed ITVs 2000, and in April 2000 the chairmen or officers of the panels or organizations listed below were invited to co-author the report and to submit the draft to a discussion by their panels and organizations: Euratom Safeguards Inspectorate, ESARDA Working Group on Destructive Analysis, ESARDA Working Group on Non Destructive Analysis, Institute of Nuclear Material Management, Japanese Expert Group on ITV-2000, ISO Working Group on Analyses in Spent Fuel Reprocessing, ISO Working Group on Analyses in Uranium Fuel Fabrication, ISO Working Group on Analyses in MOX Fuel Fabrication, Agencia Brasileno-Argentina de Contabilidad y Control de Materiales Nucleares (ABACC). Comments from the above groups were received and incorporated into the final version of the document, completed in April 2001. The ITVs 2000 represent target standard uncertainties, expressing the precision achievable under stipulated conditions. These conditions typically fall in one of the two following categories: 'repeatability conditions' normally encountered during the measurements done within one inspection period; or 'reproducibility conditions' involving additional sources of measurement variability such as
Measuring Young's Modulus the Easy Way, and Tracing the Effects of Measurement Uncertainties
Nunn, John
2015-01-01
The speed of sound in a solid is determined by the density and elasticity of the material. Young's modulus can therefore be calculated once the density and the speed of sound in the solid are measured. The density can be measured relatively easily, and the speed of sound through a rod can be measured very inexpensively by setting up a longitudinal…
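The calculation described above reduces to E = ρv², with the rod's sound speed obtainable, for example, from the fundamental longitudinal resonance of a free-free rod (v = 2Lf0). A sketch with illustrative steel-like numbers, including first-order tracing of the measurement uncertainties:

```python
def youngs_modulus(density, speed):
    """E = rho * v^2 for longitudinal waves in a thin rod -- the relation
    behind the measurement described above. SI units throughout."""
    return density * speed ** 2

def speed_from_resonance(length_m, f0_hz):
    # Fundamental longitudinal resonance of a free-free rod: v = 2 * L * f0
    return 2.0 * length_m * f0_hz

def e_relative_uncertainty(u_rho_rel, u_v_rel):
    # E = rho * v^2  =>  u_E/E = sqrt((u_rho/rho)^2 + (2 * u_v/v)^2)
    return (u_rho_rel ** 2 + (2.0 * u_v_rel) ** 2) ** 0.5
```

Note how the factor of 2 makes E twice as sensitive to a relative error in the speed measurement as to one in the density, which is why timing accuracy dominates the error budget.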
DEFF Research Database (Denmark)
Rikhardsson, Pall; Sigurjonsson, Throstur Olaf; Arnardottir, Audur Arna
The use of performance measures and performance measurement frameworks has increased significantly in recent years. The type and variety of performance measures in use has been researched in various countries and linked to different variables such as the external environment, performance measurement frameworks, and management characteristics. This paper reports the results of a study, carried out at year-end 2013, of the use of performance measures by Icelandic companies and the links to perceived environmental uncertainty, management satisfaction with the performance measurement system and the perceived performance of the company. The sample was the 300 largest companies in Iceland and the response rate was 27%. Compared to other studies, the majority of the respondents use a surprisingly high number of different measures, both financial and non-financial. This made testing of the three…
DEFF Research Database (Denmark)
Jacobsen, Finn
2011-01-01
The effect known as 'weak Anderson localisation', 'coherent backscattering' or 'enhanced backscattering' is a physical phenomenon that occurs in random systems, e.g. disordered media and linear wave systems, including reverberation rooms: the mean square response is increased at the drive point. In a reverberation room this means that one can expect an increase of the reverberant sound field at the position of the source that generates the sound field. This affects the sound power output of the source and is therefore of practical concern. However, because of the stronger direct sound field… …implications for the uncertainty of sound power measurements.
Determination of Al in cake mix: Method validation and estimation of measurement uncertainty
Andrade, G.; Rocha, O.; Junqueira, R.
2016-07-01
An analytical method for the determination of Al in cake mix was developed. Acceptable values were obtained for the following parameters: linearity, detection limit (LOD, 5.00 mg·kg⁻¹), quantification limit (LOQ, 12.5 mg·kg⁻¹), recovery (between 91 and 102%), relative standard deviation under repeatability and within-reproducibility conditions (<20.0%), and measurement uncertainty (<10.0%). The results of the validation process showed that the proposed method is fit for purpose.
Wallace, Jack
2010-05-01
While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
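Approach (ii) above, estimating a laboratory's uncertainty from its differences to the consensus means over many proficiency rounds, can be sketched as follows. This is an illustrative RMS-of-relative-differences estimator; the article's exact estimator may differ in detail.

```python
import math

def uncertainty_from_proficiency(lab_results, consensus_means):
    """Estimate a lab's relative bias and relative RMS difference from
    proficiency-test history: lab result vs. participant-mean value per
    round. A sketch of approach (ii), not the article's exact procedure."""
    rel_d = [(x - m) / m for x, m in zip(lab_results, consensus_means)]
    n = len(rel_d)
    bias = sum(rel_d) / n
    rms = math.sqrt(sum(d * d for d in rel_d) / n)
    return bias, rms
```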
International Target Values 2010 for Measurement Uncertainties in Safeguarding Nuclear Materials
International Nuclear Information System (INIS)
This issue of the International Target Values (ITVs) represents the sixth revision, following the first release of such tables issued in 1979 by the ESARDA/WGDA. The ITVs are uncertainties to be considered in judging the reliability of analytical techniques applied to industrial nuclear and fissile material, which are subject to safeguards verification. The tabulated values represent estimates of the ‘state of the practice’ which should be achievable under routine measurement conditions. The most recent standard conventions in representing uncertainty have been considered, while maintaining a format that allows comparison with the previous releases of the ITVs. The present report explains why target values are needed, how the concept evolved and how they relate to the operator’s and inspector’s measurement systems. The ITVs-2010 are intended to be used by plant operators and safeguards organizations, as a reference of the quality of measurements achievable in nuclear material accountancy, and for planning purposes. The report suggests that the use of ITVs can be beneficial for statistical inferences regarding the significance of operator-inspector differences whenever valid performance values are not available.
Li, Xiaofan; Zhao, Yubin; Zhang, Sha; Fan, Xiaopeng
2016-01-01
Particle filters (PFs) are widely used for nonlinear signal processing in wireless sensor networks (WSNs). However, measurement uncertainty makes the WSN observations unreliable with respect to the actual case and also degrades the estimation accuracy of the PFs. Beyond algorithm design, few works focus on improving the likelihood calculation method, since it can be pre-assumed by a given distribution model. In this paper, we propose a novel PF method, based on a new likelihood fusion method for WSNs, which can further improve the estimation performance. We first use a dynamic Gaussian model to describe the nonparametric features of the measurement uncertainty. Then, we propose a likelihood adaptation method that employs the prior information and a belief factor to reduce the measurement noise. The optimal belief factor is attained by deriving the minimum Kullback-Leibler divergence. The likelihood adaptation method can be integrated into any PF, and we use our method to develop three versions of adaptive PFs for a target tracking system using a wireless sensor network. The simulation and experimental results demonstrate that our likelihood adaptation method greatly improves the estimation performance of PFs in a high-noise environment. In addition, the adaptive PFs are highly adaptable to the environment without imposing computational complexity. PMID:27249002
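The idea of tempering the likelihood with a belief factor can be sketched in a minimal bootstrap particle filter for a 1-D random-walk state. This is not the authors' algorithm: the Gaussian measurement model, the random-walk dynamics and the fixed `belief` exponent are simplifying assumptions (the paper derives the optimal belief factor from a minimum Kullback-Leibler criterion).

```python
import math
import random

def particle_filter_step(particles, weights, obs, u_meas, belief=1.0,
                         q_std=0.5, rng=random):
    """One bootstrap particle-filter step for a 1-D random-walk state with
    a Gaussian measurement model. `belief` tempers the likelihood
    (likelihood ** belief); belief < 1 flattens an over-confident
    likelihood when the measurement noise is underestimated."""
    # Propagate: random-walk dynamics
    particles = [p + rng.gauss(0.0, q_std) for p in particles]
    # Weight: tempered Gaussian likelihood
    new_w = [w * math.exp(belief * (-0.5 * ((obs - p) / u_meas) ** 2))
             for p, w in zip(particles, weights)]
    s = sum(new_w)
    new_w = [w / s for w in new_w]
    # Systematic resampling
    n = len(particles)
    u0 = rng.random()
    cumw, cum = [], 0.0
    for w in new_w:
        cum += w
        cumw.append(cum)
    resampled, j = [], 0
    for i in range(n):
        pos = (i + u0) / n
        while j < n - 1 and cumw[j] < pos:
            j += 1
        resampled.append(particles[j])
    return resampled, [1.0 / n] * n
```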
Dobecki, Marek
2012-01-01
This paper reviews the requirements for measurement methods of chemical agents in the air at workstations. European standards, which have the status of Polish standards, comprise requirements and information on sampling strategy, measuring techniques, types of samplers, sampling pumps and methods of occupational exposure evaluation for a given technological process. Measurement methods, including air sampling and the analytical procedure in a laboratory, should be appropriately validated before intended use. In the validation process, selected methods are tested and an uncertainty budget is established. The validation procedure that should be implemented in the laboratory, together with suitable statistical tools and the major components of uncertainty to be taken into consideration, were presented in this paper. Methods of quality control, including sampling and laboratory analyses, were discussed. The relative expanded uncertainty of each measurement, expressed as a percentage, should not exceed the limit values set depending on the type of occupational exposure (short-term or long-term) and the magnitude of exposure to chemical agents in the work environment.
Kin Tekce, Buket; Tekce, Hikmet; Aktas, Gulali; Uyeturk, Ugur
2016-01-01
Uncertainty of measurement is the numeric expression of the errors associated with all measurements taken in clinical laboratories. Serum creatinine concentration is the most common diagnostic marker for acute kidney injury. The goal of this study was to determine the effect of the uncertainty of measurement of serum creatinine concentrations on the diagnosis of acute kidney injury. We calculated the uncertainty of measurement of serum creatinine according to the Nordtest Guide. Retrospectively, we identified 289 patients who were evaluated for acute kidney injury. Of the total patient pool, 233 were diagnosed with acute kidney injury using the AKIN classification scheme and were then compared using statistical analysis. We determined nine probabilities for the uncertainty of measurement of serum creatinine concentrations. There was a statistically significant difference in the number of patients diagnosed with acute kidney injury when uncertainty of measurement was taken into consideration (first probability compared to the fifth, p = 0.023; first probability compared to the ninth, p = 0.012). We found that the uncertainty of measurement for serum creatinine concentrations was an important factor for correctly diagnosing acute kidney injury. In addition, based on the AKIN classification scheme, minimizing the total allowable error levels for serum creatinine concentrations is necessary for the accurate diagnosis of acute kidney injury by clinicians.
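The Nordtest approach cited above combines within-laboratory reproducibility with a bias component to obtain an expanded uncertainty. A simplified sketch with hypothetical numbers (not the study's data, and omitting some refinements of the full guide):

```python
import math

def expanded_uncertainty(u_rw, bias, u_ref, n_bias, k=2.0):
    """Nordtest-style combined uncertainty (illustrative only).

    u_rw   : within-lab reproducibility (relative, %)
    bias   : mean relative bias against a reference material (%)
    u_ref  : uncertainty of the reference value (%)
    n_bias : number of bias experiments
    Returns the expanded relative uncertainty U = k * u_c (%).
    """
    # Bias component: observed bias, its repeatability, and the
    # reference uncertainty, combined in quadrature.
    u_bias = math.sqrt(bias**2 + (u_rw / math.sqrt(n_bias))**2 + u_ref**2)
    u_c = math.sqrt(u_rw**2 + u_bias**2)
    return k * u_c
```

For example, `expanded_uncertainty(3.0, 2.0, 1.5, 6)` yields an expanded relative uncertainty of roughly 8%, which could then be applied to a creatinine result when judging AKIN stage thresholds.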
Institute of Scientific and Technical Information of China (English)
David L. Ortega; H. Holly Wang; Nicole J. Olynk Widmar
2014-01-01
This study provides an economic assessment of various food safety measures in China. A choice experiment approach is used to elicit Chinese consumer preferences for various food safety attributes using data from a 2008 urban consumer survey. An alternative welfare calculation is used to model aggregate market impacts of select food safety measures. Our results show that the largest welfare gains are found in the current government-run certification program. The implementation of a third-party certification system, a traceability network and a product label would generate significant value and would help reduce current system inefficiencies in China. This study builds on previous research and provides an alternative approach for calculating consumer valuation of safety and quality attributes that can be used to estimate aggregate economic and welfare impacts.
Intercomparison of Climate Data Sets as a Measure of Observational Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Covey, C; Achuta Rao, K M; Fiorino, M; Gleckler, P J; Taylor, K E; Wehner, M F
2002-02-22
Uncertainties in climate observations are revealed when alternate observationally based data sets are compared. General circulation model-based "reanalyses" of meteorological observations will yield different results from different models, even if identical sets of raw unanalyzed data form their starting points. We have examined 25 longitude-latitude fields (including selected levels for three-dimensional quantities) encompassing atmospheric climate variables for which the PCMDI observational database contains two or more high-quality sources. For the most part we compare ECMWF with NCEP reanalysis. In some cases, we compare in situ and/or satellite-derived data with reanalysis. To obtain an overview of the differences for all 25 fields, we use a graphical technique developed for climate model diagnosis: a "portrait diagram" displaying root-mean-square differences between the alternate data sources. With a few exceptions (arising from the requirement that RMS differences be normalized to accommodate different units of variables) the portrait diagrams indicate areas of agreement and disagreement that can be confirmed by examining traditional graphics such as zonal mean plots. In accord with conventional wisdom, the greatest agreement between alternate data sets, and hence the smallest implied observational uncertainty, occurs for upper tropospheric zonal wind. We also find fairly good agreement between reanalysis and more direct measures of precipitation, suggesting that modern observational systems are resolving some long-standing problems with its measurement.
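The normalization mentioned above, which lets variables with different units share one portrait-diagram color scale, can be sketched as an RMS difference divided by a variability scale. This is an illustrative normalization, not necessarily the paper's exact choice:

```python
import numpy as np

def normalized_rms_difference(a, b):
    """RMS difference between two fields, normalized by the standard
    deviation of their mean so that variables with different units can
    share one color scale. Assumes the fields are not constant.
    (A sketch; the paper's exact normalization may differ.)"""
    a, b = np.asarray(a, float), np.asarray(b, float)
    rms = np.sqrt(np.mean((a - b) ** 2))
    scale = np.std((a + b) / 2)
    return rms / scale
```

Identical data sets give 0; a constant offset of one standard deviation gives a value near 1, so cells of the portrait diagram are directly comparable across variables.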
International Nuclear Information System (INIS)
This paper presents an evaluation of the uncertainty associated with analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study focused on analyses of the PM10, PM2.5 and gas-phase fractions. The main analytical uncertainty was estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs) and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs) based on the analytical determination, reference material analysis and extraction step. The main contributions reached 15-30% and came from the extraction of real ambient samples, with those for nitro-PAHs being the highest (20-30%). Ranges and means of PAC mass concentrations measured in the gas phase and PM10/PM2.5 particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than those of their parent PAHs and comparable to the values sparsely reported in the literature. (Author)
Directory of Open Access Journals (Sweden)
Vesna Režić Dereani
2010-09-01
The aim of this research is to describe quality control procedures, validation procedures and measurement uncertainty (MU) determination as important elements of quality assurance in a food microbiology laboratory for qualitative and quantitative analyses. Accreditation is conducted according to ISO 17025:2007, General requirements for the competence of testing and calibration laboratories, which guarantees compliance with standard operating procedures and the technical competence of the staff involved in the tests; it has recently been widely introduced in food microbiology laboratories in Croatia. Beyond the introduction of a quality manual and many general documents, some of the most demanding procedures in routine microbiology laboratories are the establishment of measurement uncertainty procedures and the design of validation experiments. Those procedures are not yet standardized even at the international level, and they require practical microbiological knowledge together with statistical competence. Differences between validation experiment designs for quantitative and qualitative food microbiology analyses are discussed in this research, and practical solutions are briefly described. MU for quantitative determinations is a more demanding issue than qualitative MU calculation. MU calculations are based on external proficiency testing data and internal validation data. In this paper, practical schematic descriptions for both procedures are shown.
Force Measurement Services at KEBS: An Overview of Equipment, Procedures and Uncertainty
Bangi, J. O.; Maranga, S. M.; Nganga, S. P.; Mutuli, S. M.
This paper describes the facilities, instrumentation and procedures currently used in the force laboratory at the Kenya Bureau of Standards (KEBS) for force measurement services. The laboratory uses the Force Calibration Machine (FCM) to calibrate force-measuring instruments. The FCM derives its traceability via comparisons using reference transfer force transducers calibrated by the Force Standard Machines (FSM) of a National Metrology Institute (NMI). The force laboratory is accredited to ISO/IEC 17025 by the German accreditation body (DAkkS). The accredited measurement scope of the laboratory extends to 1 MN for calibrating force transducers in both compression and tension modes. ISO 376 procedures are used when calibrating force transducers. The KEBS reference transfer standards have capacities of 10, 50, 300 and 1000 kN to cover the full range of the FCM. The uncertainty in the forces measured by the FCM was reviewed and determined in accordance with the new EURAMET calibration guide. The relative expanded uncertainty of force realized by the FCM was evaluated over the range 10 kN-1 MN and found to be 5.0 × 10-4 with a coverage factor k = 2. The overall normalized error (En) of the comparison results was also found to be less than 1. The accredited Calibration and Measurement Capability (CMC) of the KEBS force laboratory was based on the results of those intercomparisons. The FCM enables KEBS to provide traceability for the calibration of class 1 force instruments as per ISO 376.
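The normalized error En used above to judge interlaboratory agreement is a standard formula; a small sketch with hypothetical comparison values:

```python
import math

def en_number(x_lab, U_lab, x_ref, U_ref):
    """Normalized error for an interlaboratory comparison.

    x_lab, x_ref : laboratory and reference results
    U_lab, U_ref : expanded uncertainties (k = 2) of each result
    |En| <= 1 indicates agreement within the stated uncertainties.
    """
    return (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

# Hypothetical comparison: a 100 kN point measured by the laboratory
# and by the reference transfer standard.
en = en_number(100.2, 0.5, 100.0, 0.4)
```

Here `en` is about 0.31, i.e. well inside the |En| <= 1 acceptance criterion reported in the abstract.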
Influence of Spherical Radiation Pattern Measurement Uncertainty on Handset Performance Measures
DEFF Research Database (Denmark)
Nielsen, Jesper Ødum; Pedersen, Gert Frølund
2005-01-01
An important characteristic of a mobile handset is its ability to receive and transmit power. One way to characterize the performance of a handset in this respect is to use measurements of the spherical radiation pattern from which the total radiated power (TRP), total isotropic sensitivity (TIS), and mean effective gain (MEG) can be computed. Often this kind of measurement is made with a phantom head next to the handset in order to simulate the influence of a real user. The measured radiation patterns are only expected to be repeatable if the same setup is used, i.e., the same phantom and the same mounting of the handset on the phantom. In this work the influence of mounting errors on the TRP, TIS, and MEG is investigated. Knowledge about the error due to incorrect mounting is necessary in determining requirements for both the mounting accuracy as well as for other parts of the measurement…
Changes in Handset Performance Measures due to Spherical Radiation Pattern Measurement Uncertainty
DEFF Research Database (Denmark)
Nielsen, Jesper Ødum; Pedersen, Gert Frølund
An important characteristic of a mobile handset is its ability to receive and transmit power. One way to characterize the performance of a handset in this respect is to use measurements of the spherical radiation pattern from which the total radiated power (TRP), total isotropic sensitivity (TIS), and mean effective gain (MEG) can be computed. Often this kind of measurement is made with a phantom head next to the handset in order to simulate the influence of a real user. The measured radiation patterns are only expected to be repeatable if the same setup is used, i.e., the same phantom and the same mounting of the handset on the phantom. In this work the influence of mounting errors on the TRP, TIS, and MEG is investigated. Knowledge about the error due to incorrect mounting is necessary in determining requirements for both the mounting accuracy as well as for other parts of the measurement…
Directory of Open Access Journals (Sweden)
Ahuja Tarushee
2011-04-01
Arsenic is a toxic element that causes several health problems in humans, especially when inhaled through air. Accurate and precise measurement of arsenic in suspended particulate matter (SPM) is therefore of prime importance, as it gives information about the level of toxicity in the environment so that preventive measures can be taken in the affected areas. Quality assurance is equally important in the measurement of arsenic in SPM samples before making any decision. The quality and reliability of data for such volatile elements depend on the measurement uncertainty of each step involved, from sampling to analysis. Analytical results that quantify uncertainty give a measure of the confidence level of the laboratory concerned. The main objective of this study was therefore to determine the arsenic content of SPM samples with an uncertainty budget and to identify the various potential sources of uncertainty that affect the results. With these facts in mind, we selected seven diverse sites in Delhi (the national capital of India) for quantification of the arsenic content of SPM samples with an uncertainty budget, from sampling by high-volume sampler (HVS) to analysis by Atomic Absorption Spectrometer-Hydride Generator (AAS-HG). Many steps are involved from sampling to the final result, and we have considered the various potential sources of uncertainty. The calculation of uncertainty is based on the ISO/IEC 17025:2005 document and the EURACHEM guideline. It was found that the final results depend mainly on the uncertainties due to repeatability, the final volume prepared for analysis, the weighing balance and sampling by HVS. After analysis of the data from the seven sites, it was concluded that during the period from 31st Jan. 2008 to 7th Feb. 2008 the arsenic concentration varied from 1.44 ± 0.25 to 5.58 ± 0.55 ng/m3 at the 95% confidence level (k = 2).
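An uncertainty budget of the kind described above is typically combined by root-sum-of-squares of the component relative standard uncertainties, then expanded with a coverage factor k = 2 for ~95% confidence. The component values below are hypothetical, chosen only to name the contributors the abstract lists:

```python
import math

# Hypothetical relative standard uncertainties (%) for the main
# contributors named in the abstract: repeatability, final volume,
# weighing balance, and HVS sampling.
components = {"repeatability": 2.5, "volume": 1.0,
              "balance": 0.5, "sampling_hvs": 3.0}

u_c = math.sqrt(sum(u**2 for u in components.values()))  # combined, %
U = 2 * u_c  # expanded uncertainty, coverage factor k = 2 (~95 %)

concentration = 5.58  # ng/m^3, example value from the abstract
print(f"{concentration:.2f} ± {concentration * U / 100:.2f} ng/m3 (k=2)")
```

With these illustrative inputs the expanded uncertainty is about 8%, i.e. a result reported as a value ± absolute uncertainty at k = 2, matching the reporting style of the abstract.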
Studies of theoretical uncertainties on the measurement of the mass of the $W$ boson at the LHC
The ATLAS collaboration
2014-01-01
The measurement of the mass of the $W$ boson at the LHC requires control of both theoretical and experimental uncertainties. The extraction of the $W$-boson mass from the transverse momentum spectrum of electrons and muons produced in the leptonic decay of the $W$ boson, is likely to be limited by theoretical uncertainties. Uncertainties arising from the incomplete knowledge of the parton density functions and from the modelling of the low transverse momentum of the $W$ boson are estimated, accounting for the resolution effects of the ATLAS detector. Emphasis is given to the study of the physical origin of the uncertainties, so as to provide useful information for further reduction of the uncertainties.
Kang, K W; Pereda, M D; Canafoglia, M E; Bilmes, P; Llorente, C; Bonetto, R
2012-02-01
Pitting corrosion is a serious and dangerous damage mechanism, both in carbon steel boiler tubes for power plants, which are vital to most industries, and in stainless steels for orthopedic implants, demand for which has sharply increased with rising life expectancy and traffic accident rates. Reliable methods to characterize this kind of damage are becoming increasingly necessary for evaluating the advance of damage and establishing the best procedures for component inspection in order to determine remaining lives and mitigate failures. A study of the uncertainties in the topographies of corrosion pits from 3D SEM images, obtained at low magnifications (where errors are greater) and different stage tilt angles, was carried out using previously developed in-house software. Additionally, measurements of pit depths were carried out on biomaterial surfaces subjected to two different surface treatments on stainless steels. The different depth distributions observed were in agreement with electrochemical measurements.
The small sample uncertainty aspect in relation to bullwhip effect measurement
DEFF Research Database (Denmark)
Nielsen, Erland Hejn
2009-01-01
The bullwhip effect as a concept has been known for almost half a century, starting with the Forrester effect. The bullwhip effect is observed in many supply chains, and it is generally accepted as a potential malice. Despite this fact, the bullwhip effect still seems to be first and foremost a conceptual phenomenon. This paper intends primarily to investigate why this might be so, and thereby to examine the various aspects, possibilities and obstacles that must be taken into account when considering the potential practical use and measurement of the bullwhip effect in order to actually get the supply chain under control. The paper puts special emphasis on the unavoidable small-sample uncertainty aspects relating to the measurement or estimation of the bullwhip effect.
$K$-corrections: an Examination of their Contribution to the Uncertainty of Luminosity Measurements
Lake, Sean E
2016-01-01
In this paper we provide formulae that can be used to determine the uncertainty contributed to a measurement by a $K$-correction and, thus, valuable information about which flux measurement will provide the most accurate $K$-corrected luminosity. All of this is done at the level of a Gaussian approximation of the statistics involved, that is, where the galaxies in question can be characterized by a mean spectral energy distribution (SED) and a covariance function (spectral 2-point function). This paper also includes approximations of the SED mean and covariance for galaxies, and the three common subclasses thereof, based on applying the templates from Assef et al. (2010) to the objects in zCOSMOS bright 10k (Lilly et al. 2009) and photometry of the same field from Capak et al. (2007), Sanders et al. (2007), and the AllWISE source catalog.
Gultepe, I.; Isaac, G. A.; Joe, P.; Kucera, P. A.; Theriault, J. M.; Fisico, T.
2014-01-01
The objective of this work is to better understand and summarize the mountain meteorological observations collected during the Science of Nowcasting Winter Weather for the Vancouver 2010 Olympics and Paralympics (SNOW-V10) project, which was supported by the Fog Remote Sensing and Modeling (FRAM) project. The Roundhouse (RND) meteorological station was located 1,856 m above sea level, where it was subject to extreme winter weather conditions. Below this site there were three additional observation sites at 1,640, 1,320, and 774 m. These four stations provided some or all of the following measurements at 1 min resolution: precipitation rate (PR) and amount, cloud/fog microphysics, 3D wind speed (horizontal wind speed, U_h; vertical air velocity, w_a), visibility (Vis), infrared (IR) and shortwave (SW) radiative fluxes, temperature (T) and relative humidity with respect to water (RHw), and aerosol observations. In this work, comparisons are made to assess the uncertainties and variability in the measurements of Vis, RHw, T, PR, and wind for various winter weather conditions. Ground-based cloud imaging probe (GCIP) measurements of snow particles, together with profiling microwave radiometer (PMWR) data, have also been used to assess icing conditions. Overall, the conclusions suggest that uncertainties in the measurements of Vis, PR, T, and RH can be as large as 50, >60, 50, and >20 %, respectively, and these numbers may increase depending on U_h, T, Vis, and PR magnitude. Variability of observations along the Whistler Mountain slope (~500 m) suggests that to verify models, model spatial resolution should be better than 100 m and time scales better than 1 min. It is also concluded that differences between observed and model-based parameters are strongly related to a model's capability of accurately predicting liquid water content (LWC), PR, and RHw over complex topography.
Yuen, W.; Ma, Q.; Du, K.; Koloutsou-Vakakis, S.; Rood, M. J.
2015-12-01
Measurements of particulate matter (PM) emissions generated from fugitive sources are of interest in air pollution studies, since such emissions vary widely both spatially and temporally. This research focuses on determining the uncertainties in quantifying fugitive PM emission factors (EFs) generated from mobile vehicles using a vertical scanning micro-pulse lidar (MPL). The goal of this research is to identify the greatest sources of uncertainty of the applied lidar technique in determining fugitive PM EFs, and to recommend methods to reduce the uncertainties in this measurement. The MPL detects the PM plume generated by mobile fugitive sources and carried downwind to the MPL's vertical scanning plane. Range-resolved MPL signals are measured, corrected, and converted to light extinction coefficients through inversion of the lidar equation and calculation of the lidar ratio. In this research, both the near-end and far-end lidar equation inversion methods are considered. Range-resolved PM mass concentrations are then determined from the extinction coefficient measurements using the measured mass extinction efficiency (MEE) value, which is an intensive PM property. MEE is determined by collocated PM mass concentration and light extinction measurements, provided respectively by a DustTrak and an open-path laser transmissometer. These PM mass concentrations are then integrated with wind information, the duration of the plume event, and the vehicle distance travelled to obtain fugitive PM EFs. To obtain the uncertainty of the PM EFs, uncertainties in the MPL signals, lidar ratio, MEE, and wind variation are considered. The error propagation method is applied to each of the above intermediate steps to aggregate the uncertainty sources. Results include determination of the uncertainty in each intermediate step and a comparison of uncertainties between the near-end and far-end lidar equation inversion methods.
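For an emission factor computed as a product or quotient of independent terms, the error propagation step described above reduces to adding relative uncertainties in quadrature. The component values below are hypothetical, named after the contributors the abstract lists:

```python
import math

def combined_relative_uncertainty(rel_uncertainties):
    """Combine independent relative standard uncertainties in
    quadrature, as in first-order error propagation for a quantity
    formed by products and quotients of its inputs."""
    return math.sqrt(sum(u**2 for u in rel_uncertainties))

# Hypothetical relative uncertainties: MPL signal 5 %, lidar ratio 10 %,
# MEE 8 %, wind speed 12 %.
u_ef = combined_relative_uncertainty([0.05, 0.10, 0.08, 0.12])
```

With these illustrative inputs the EF uncertainty is about 18%, dominated by the wind and lidar-ratio terms; the same mechanics apply whichever inversion method supplies the extinction coefficients.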
Directory of Open Access Journals (Sweden)
J. K. Spiegel
2012-09-01
Droplet size spectra measurements are crucial to obtain a quantitative microphysical description of clouds and fog. However, cloud droplet size measurements are subject to various uncertainties. This work focuses on the error analysis of two key measurement uncertainties arising during cloud droplet size measurements with a conventional droplet size spectrometer (FM-100). First, we addressed the precision with which droplets can be sized with the FM-100 on the basis of the Mie theory. We deduced error assumptions and proposed a new method for correcting measured size distributions for these errors by redistributing the measured droplet size distribution using a stochastic approach. Second, based on a literature study, we summarized corrections for particle losses during sampling with the FM-100. We applied both corrections to cloud droplet size spectra measured at the high alpine site Jungfraujoch for a temperature range from 0 °C to 11 °C. We showed that Mie scattering led to spikes in the droplet size distributions obtained with the default sizing procedure, while the new stochastic approach reproduced the ambient size distribution adequately. A detailed analysis of the FM-100 sampling efficiency revealed that particle losses were typically below 10% for droplet diameters up to 10 μm. For larger droplets, particle losses can increase up to 90% for the largest droplets of 50 μm at ambient wind speeds below 4.4 m s^{−1}, and even to >90% for larger angles between the instrument orientation and the wind vector (sampling angle) at higher wind speeds. Comparisons of the FM-100 to other reference instruments revealed that the total liquid water content (LWC) measured by the FM-100 was more sensitive to particle losses than to re-sizing based on Mie scattering, while the total number concentration was only marginally influenced by particle losses. Consequently, for further LWC measurements with the FM-100 we strongly recommend considering (1) the…
Directory of Open Access Journals (Sweden)
J. K. Spiegel
2012-05-01
Droplet size spectra measurements are crucial to obtain a quantitative microphysical description of clouds and fog. However, cloud droplet size measurements are subject to various uncertainties. This work focuses on the evaluation of two key measurement uncertainties arising during cloud droplet size measurements with a conventional droplet size spectrometer (FM-100). First, we addressed the precision with which droplets can be sized with the FM-100 on the basis of Mie theory. We deduced error assumptions and proposed how to correct measured size distributions for these errors by redistributing the measured droplet size distribution using a stochastic approach. Second, based on a literature study, we derived corrections for particle losses during sampling with the FM-100. We applied both corrections to cloud droplet size spectra measured at the high alpine site Jungfraujoch for a temperature range from 0 °C to 11 °C. We show that Mie scattering led to spikes in the droplet size distributions obtained with the default sizing procedure, while the stochastic approach reproduced the ambient size distribution adequately. A detailed analysis of the FM-100 sampling efficiency revealed that particle losses were typically below 10% for droplet diameters up to 10 μm. For larger droplets, particle losses can increase up to 90% for the largest droplets of 50 μm at ambient wind speeds below 4.4 m s^{−1}, and even to >90% for larger angles between the instrument orientation and the wind vector (sampling angle) at higher wind speeds. Comparisons of the FM-100 to other reference instruments revealed that the total liquid water content (LWC) measured by the FM-100 was more sensitive to particle losses than to re-sizing based on Mie scattering, while the total number concentration was only marginally influenced by particle losses. As a consequence, for further LWC measurements with the FM-100 we strongly recommend considering (1) the error arising due to Mie…
Measurement of patient-derived utility values for periodontal health using a multi-attribute scale.
Bellamy, C A; Brickley, M R; McAndrew, R
1996-09-01
Periodontal health states are difficult to quantify and no formal scale quantifying patients' utilities for periodontal health states exists. Multi-attribute utility (MAU) techniques were used to develop such a scale. The MAU scale may be used to quantify patients' assessments of their current periodontal health and of possible treatment outcomes. Such data, combined with probability values in formal decision analysis techniques, would improve the rationality of treatment planning for periodontal disease. 20 patients attending for routine undergraduate care were interviewed. Data from these interviews were sorted into groups of common interest (domains). Intra-domain health statements were compiled from the interview content. 21 patients ranked the intra-domain statements on a scale of 0-100. This same group of patients also performed an inter-domain weighting. Mean results showed that patients were twice as concerned with how they felt and with the prognosis of possible outcomes as with how they looked and what facts they knew about their oral health. However, the real value of utilities research lies in the application of individual results to treatment planning, as there is a wide range of opinion regarding outcome health states. PMID:8891929
Thermal inactivation of human norovirus surrogates in spinach and measurement of its uncertainty.
Bozkurt, Hayriye; D'souza, Doris H; Davidson, P Michael
2014-02-01
Leafy greens, including spinach, have potential for human norovirus transmission through improper handling and/or contact with contaminated water. Inactivation of norovirus prior to consumption is essential to protect public health. Because of the inability to propagate human noroviruses in vitro, murine norovirus (MNV-1) and feline calicivirus (FCV-F9) have been used as surrogates to model human norovirus behavior under laboratory conditions. The objectives of this study were to determine the thermal inactivation kinetics of MNV-1 and FCV-F9 in spinach, to compare first-order and Weibull models, and to measure the uncertainty associated with the process. D-values were determined for the viruses at 50, 56, 60, 65, and 72 °C in 2-ml vials. The D-values calculated from the first-order model (50 to 72 °C) ranged from 0.16 to 14.57 min for MNV-1 and 0.15 to 17.39 min for FCV-F9. Using the Weibull model, the tD for MNV-1 and FCV-F9 to destroy 1 log (D ≈ 1) at the same temperatures ranged from 0.22 to 15.26 and 0.27 to 20.71 min, respectively. The z-values determined for MNV-1 were 11.66 ± 0.42 °C using the Weibull model and 10.98 ± 0.58 °C using the first-order model; for FCV-F9 they were 10.85 ± 0.67 °C and 9.89 ± 0.79 °C, respectively. There was no difference in D- or z-value between the two models (P > 0.05). Relative uncertainties for the dilution factor, personal counting, and test volume were 0.005, 0.0004, and ca. 0.84%, respectively. The major contribution to total uncertainty was from the model selected. Total uncertainties for FCV-F9 for the Weibull and first-order models were 3.53 to 7.56% and 11.99 to 21.01%, respectively, and for MNV-1, 3.10 to 7.01% and 13.14 to 16.94%, respectively. Novel and precise information on the thermal inactivation of human norovirus surrogates in spinach was generated, enabling more reliable thermal process calculations to control noroviruses. The results of this study may be useful to the frozen food industry in designing blanching processes for…
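The first-order D- and z-values discussed above follow from log-linear regressions: D is the negative reciprocal of the slope of log10(survivors) versus time, and z is the negative reciprocal of the slope of log10(D) versus temperature. A minimal sketch with synthetic, perfectly log-linear data (not the study's measurements):

```python
import math

def _slope(xs, ys):
    """Least-squares slope of ys vs xs."""
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    return (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
            / sum((x - x_bar) ** 2 for x in xs))

def d_value(times, log10_survivors):
    """D-value (min): time for a 1-log reduction under first-order
    kinetics, i.e. -1/slope of log10(N) vs time."""
    return -1.0 / _slope(times, log10_survivors)

def z_value(temps, d_values):
    """z-value (°C): temperature change giving a tenfold change in D,
    i.e. -1/slope of log10(D) vs temperature."""
    return -1.0 / _slope(temps, [math.log10(d) for d in d_values])
```

For example, survivor counts dropping one log per minute give D = 1 min, and D-values of 100, 10, and 1 min at 50, 60, and 70 °C give z = 10 °C.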
Calculation and Analysis of Measurement Uncertainty for Instrument Range Settings
Institute of Scientific and Technical Information of China (English)
张志清; 张阳春
2014-01-01
Measurement uncertainty has a wide range of applications. When the same method and instrument are used to measure the same object, different instrument range settings give different measurement errors and resolutions, and therefore the evaluated measurement uncertainty also differs. This paper takes a digital multimeter as an example to calculate and analyze the measurement uncertainty for different range settings.
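The range dependence described above can be made concrete with a common style of digital multimeter accuracy specification, "±(% of reading + counts)", converted to a standard uncertainty by assuming a rectangular distribution. The specification numbers below are illustrative, not from any specific meter:

```python
import math

def dmm_standard_uncertainty(reading, pct_of_reading, counts, resolution):
    """Standard uncertainty of a DMM reading from an accuracy
    specification of the form '±(% of reading + counts)', assuming a
    rectangular distribution (half-width divided by sqrt(3)).

    All numbers used with this function here are hypothetical.
    """
    half_width = reading * pct_of_reading / 100.0 + counts * resolution
    return half_width / math.sqrt(3)

# The same 1.5 V source measured on two ranges of a hypothetical DMM:
u_2v = dmm_standard_uncertainty(1.5, 0.05, 3, 0.0001)   # 2 V range
u_20v = dmm_standard_uncertainty(1.5, 0.05, 3, 0.001)   # 20 V range
```

Because the 20 V range has ten times coarser resolution, `u_20v` exceeds `u_2v`, reproducing the paper's point that the chosen range setting changes the evaluated uncertainty.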
Saviano, Alessandro Morais; Francisco, Fabiane Lacerda; Lourenço, Felipe Rebello
2014-09-01
The aim of this work was to develop and validate a new microbiological assay to determine potency of linezolid in injectable solution. 2(4) factorial and central composite designs were used to optimize the microbiological assay conditions. In addition, we estimated the measurement uncertainty based on residual error of analysis of variance of inhibition zone diameters. Optimized conditions employed 4 mL of antibiotic 1 medium inoculated with 1% of Staphylococcus aureus suspension, and linezolid in concentrations from 25 to 100 µg mL(-1). The method was specific, linear (Y=10.03X+5.00 and Y=9.20X+6.53, r(2)=0.9950 and 0.9987, for standard and sample curves, respectively), accurate (mean recovery=102.7%), precise (repeatability=2.0% and intermediate precision=1.9%) and robust. Microbiological assay׳s overall uncertainty (3.1%) was comparable to those obtained for other microbiological assays (1.7-7.1%) and for determination of linezolid by spectrophotometry (2.1%) and reverse-phase ultra-performance liquid chromatography (RP-UPLC) (2.5%). Therefore, it is an acceptable alternative method for the routine quality control of linezolid in injectable solution.
Aggregate Measures of Watershed Health from Reconstructed Water Quality Data with Uncertainty
Hoque, Yamen M; Tripathi, Shivam; Hantush, Mohamed M; Govindaraju, Rao S
2016-03-01
Risk-based measures such as reliability, resilience, and vulnerability (R-R-V) have the potential to serve as watershed health assessment tools. Recent research has demonstrated the applicability of such indices for water quality (WQ) constituents such as total suspended solids and nutrients on an individual basis. However, the calculations can become tedious when time-series data for several WQ constituents have to be evaluated individually. Also, comparisons between locations with different sets of constituent data can prove difficult. In this study, data reconstruction using a relevance vector machine algorithm was combined with dimensionality reduction via variational Bayesian noisy principal component analysis to reconstruct and condense sparse multidimensional WQ data sets into a single time series. The methodology allows incorporation of uncertainty in both the reconstruction and dimensionality-reduction steps. The R-R-V values were calculated using the aggregate time series at multiple locations within two Indiana watersheds. Results showed that uncertainty present in the reconstructed WQ data set propagates to the aggregate time series and subsequently to the aggregate R-R-V values as well. This data-driven approach to calculating aggregate R-R-V values was found to be useful for providing a composite picture of watershed health. Aggregate R-R-V values also enabled comparison between locations with different types of WQ data. PMID:27065419
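The R-R-V indices above have standard (Hashimoto-style) definitions over a time series relative to a "satisfactory" threshold; the variant sketched here is one common formulation and not necessarily the exact one used in the study:

```python
def rrv(series, threshold):
    """Reliability, resilience, vulnerability of a water-quality series
    relative to a threshold (values <= threshold are satisfactory).

    Reliability  : fraction of time steps in a satisfactory state.
    Resilience   : probability of recovering at the next step, given failure.
    Vulnerability: mean exceedance magnitude during failure.
    """
    fail = [x > threshold for x in series]
    n = len(series)
    n_fail = sum(fail)
    reliability = 1 - n_fail / n
    recoveries = sum(1 for i in range(n - 1) if fail[i] and not fail[i + 1])
    resilience = recoveries / n_fail if n_fail else 1.0
    exceedances = [x - threshold for x, f in zip(series, fail) if f]
    vulnerability = sum(exceedances) / len(exceedances) if exceedances else 0.0
    return reliability, resilience, vulnerability
```

Applied to the study's aggregate (dimension-reduced) time series, each probabilistic realization of the reconstruction would yield its own R-R-V triple, which is how the reconstruction uncertainty propagates into the health indices.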
Röttgers, Rüdiger; Heymann, Kerstin; Krasemann, Hajo
2014-12-01
Measurements of total suspended matter (TSM) concentration and the discrimination of the particulate inorganic (PIM) and organic matter fractions by the loss-on-ignition method are susceptible to significant and contradictory bias errors from: (a) retention of sea salt in the filter (despite washing with deionized water), and (b) loss of filter material during the washing and combustion procedures. Several methodological procedures have been described to avoid or correct the errors associated with these biases, but no analysis of the final uncertainty of the overall mass concentration determination has yet been performed. Typically, the exact values of these errors are unknown and can only be estimated. Measurements were performed in coastal and estuarine waters of the German Bight that allowed the individual error for each sample to be determined with respect to a systematic mass offset. This was achieved by using different volumes of the sample and analyzing the mass-over-volume relationship by linear regression. The results showed that the variation in the mass offset is much larger than expected (mean mass offset: 0.85 ± 0.84 mg, range: -2.4 to 7.5 mg) and that it often leads to rather large relative errors even when TSM concentrations were high. Similarly large variations were found in the mass offset for PIM measurements. Correction with a mean offset determined with procedural control filters reduced the maximum error in these coastal and estuarine waters. It should be possible to use the approach in oceanic or fresh water environments as well. The possibility of individual quality control will allow mass-specific optical properties to be determined with better resolved uncertainties and, hence, lower statistical variability, greatly improving our capability to model the inherent optical properties of natural particles and their natural variability, e.g. dependence on particle size and the complex refractive index.
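The mass-over-volume regression described above separates the true concentration from the sample-specific offset: filtering several volumes of the same sample and regressing retained mass on volume gives the TSM concentration as the slope and the mass offset (salt retention minus filter loss) as the intercept. A minimal sketch with made-up values (not the paper's measurements):

```python
import numpy as np

# Filter increasing volumes of one well-mixed sample; the dry masses share a
# common per-filter offset. Illustrative values: true concentration 5 mg/L,
# true offset 0.8 mg per filter.
volumes = np.array([0.25, 0.5, 1.0, 2.0])      # L filtered
masses = np.array([2.05, 3.30, 5.80, 10.80])   # mg retained on each filter

# Linear regression: mass = concentration * volume + offset
slope, offset = np.polyfit(volumes, masses, 1)  # slope in mg/L, offset in mg
```

A single-filter measurement at 0.5 L would instead report 3.30/0.5 = 6.6 mg/L, a 32% error from the uncorrected 0.8 mg offset, which is the kind of bias the individual regression removes.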
Medina, Juan Camilo
This dissertation offers computational and theoretical advances for optimization under uncertainty problems that utilize a probabilistic framework for addressing such uncertainties and adopt a probabilistic performance measure as the objective function. Emphasis is placed on applications that involve potentially complex numerical and probability models. A generalized approach is adopted, treating the system model as a "black box" and relying on stochastic simulation for evaluating the probabilistic performance. This approach can, however, impose an elevated computational cost, and two of the advances offered in this dissertation aim at decreasing the computational burden associated with stochastic simulation when integrated with optimization applications. The first develops an adaptive implementation of importance sampling (a popular variance reduction technique) by sharing information across the iterations of the numerical optimization algorithm. The system model evaluations from the current iteration are utilized to formulate importance sampling densities for subsequent iterations with only a small additional computational effort. The characteristics of these densities, as well as the specific model parameters they span, are explicitly optimized. The second advance focuses on adaptive tuning of a kriging metamodel to replace the computationally intensive system model. A novel implementation is considered, establishing a metamodel with respect to both the uncertain model parameters and the design variables, offering significant computational savings. Additionally, the adaptive selection of certain characteristics of the metamodel, such as support points or the order of basis functions, is considered by utilizing readily available information from the previous iteration of the optimization algorithm. The third advance extends to a different application and considers the assessment of the appropriateness of different candidate robust designs. A novel
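The variance-reduction idea behind the first advance can be shown on a toy problem: to estimate a rare-event probability, sample from a density shifted toward the important region and reweight each sample by the likelihood ratio. A minimal sketch, estimating a standard-normal tail probability (the target quantity, proposal shift, and sample size are illustrative, not the dissertation's adaptive scheme):

```python
import math
import random

random.seed(0)

def normal_pdf(x, mu=0.0):
    """Density of N(mu, 1)."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def tail_prob_is(threshold, shift, n=100_000):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0, 1),
    drawing samples from the shifted proposal N(shift, 1)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(shift, 1.0)
        if x > threshold:
            # reweight by the target/proposal likelihood ratio
            total += normal_pdf(x) / normal_pdf(x, mu=shift)
    return total / n

# Centering the proposal on the rare region makes nearly every sample count;
# crude Monte Carlo would see only ~135 hits per 100,000 draws.
p_hat = tail_prob_is(3.0, shift=3.0)   # true value is about 1.35e-3
```

The adaptive element described above amounts to choosing such proposal densities automatically, reusing model evaluations from earlier optimization iterations instead of fixing the shift by hand.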
International target values 2000 for measurement uncertainties in safeguarding nuclear materials
International Nuclear Information System (INIS)
The IAEA has published a revised and updated version of International Target Values (ITVs) for uncertainty components in measurements of nuclear material. This represents the fifth revision of the original release of such tables issued in 1979 by the ESARDA/WGDA. The ITVs represent uncertainties to be considered in judging the reliability of analytical techniques applied to industrial nuclear and fissile material subject to safeguards verification. The tabulated values represent estimates of the 'state of the practice' which ought to be achievable under routine conditions by adequately equipped, experienced laboratories. The most recent standard conventions in representing uncertainty and reliability data have been taken into account, while maintaining a format which allows comparison to previous releases of ITVs. The ITVs 2000 are intended to be used by plant operators and safeguards organizations as a reference of the quality of measurements achievable in nuclear material accountancy, and for planning purposes. They may also be used for statistical inferences regarding the significance of operator-inspector differences whenever insufficient measurement data is available. The IAEA prepared a draft of a technical report presenting the proposed ITVs 2000, and in April 2000 the chairmen or officers of the panels or organizations listed below were invited to co-author the report and to submit the draft to a discussion by their panels and organizations. Comments from the following groups were received and incorporated into the final version of the document, completed in April 2001. The final report replaces the 1993 version of the Target Values, STR 294: Euratom Safeguards Inspectorate, ESARDA Working Group on Destructive Analysis, ESARDA Working Group on Non Destructive Analysis, Institute of Nuclear Material Management, Japanese Expert Group on ITV-2000, ISO Working Group on Analyses in Spent Fuel Reprocessing, ISO Working Group on Analyses in Uranium Fuel
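One stated use of the ITVs is statistical inference on operator-inspector differences when measurement history is lacking: the tabulated random and systematic uncertainty components are combined into an expected standard deviation of the paired difference, and the observed difference is tested against it. A minimal sketch of that arithmetic (the component values, declared amount, and the 3-sigma criterion are illustrative; the actual components per material and technique come from the ITV-2000 tables):

```python
import math

def difference_sigma(declared, u_rand_op, u_sys_op, u_rand_in, u_sys_in):
    """Expected std. dev. of an operator-inspector difference, combining
    relative random and systematic uncertainty components in quadrature."""
    rel = math.sqrt(u_rand_op**2 + u_sys_op**2 + u_rand_in**2 + u_sys_in**2)
    return rel * declared

# Illustrative numbers: 250 g declared, inspector verifies 248 g.
declared, verified = 250.0, 248.0
sigma_d = difference_sigma(declared, 0.003, 0.002, 0.005, 0.003)

# Flag the difference only if it exceeds 3 sigma (criterion illustrative).
significant = abs(declared - verified) > 3.0 * sigma_d
```

Here the 2 g difference is well within 3σ (~5.1 g), so it would not be flagged; with real ITV components the same comparison sets defensible alarm limits in the absence of laboratory-specific data.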
Aad, Georges; Abdallah, Jalal; Abdelalim, Ahmed Ali; Abdesselam, Abdelouahab; Abdinov, Ovsat; Abi, Babak; Abolins, Maris; AbouZeid, Ossama; Abramowicz, Halina; Abreu, Henso; Acerbi, Emilio; Acharya, Bobby Samir; Adamczyk, Leszek; Adams, David; Addy, Tetteh; Adelman, Jahred; Aderholz, Michael; Adomeit, Stefanie; Adragna, Paolo; Adye, Tim; Aefsky, Scott; Aguilar-Saavedra, Juan Antonio; Aharrouche, Mohamed; Ahlen, Steven; Ahles, Florian; Ahmad, Ashfaq; Ahsan, Mahsana; Aielli, Giulio; Akdogan, Taylan; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov, Andrei; Akiyama, Kunihiro; Alam, Mohammad; Alam, Muhammad Aftab; Albert, Justin; Albrand, Solveig; Aleksa, Martin; Aleksandrov, Igor; Alessandria, Franco; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexopoulos, Theodoros; Alhroob, Muhammad; Aliev, Malik; Alimonti, Gianluca; Alison, John; Aliyev, Magsud; Allbrooke, Benedict; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alvarez Gonzalez, Barbara; Alviggi, Mariagrazia; Amako, Katsuya; Amaral, Pedro; Amelung, Christoph; Ammosov, Vladimir; Amorim, Antonio; Amorós, Gabriel; Amram, Nir; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Andrieux, Marie-Laure; Anduaga, Xabier; Angerami, Aaron; Anghinolfi, Francis; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoun, Sahar; Aperio Bella, Ludovica; Apolle, Rudi; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Arce, Ayana; Arfaoui, Samir; Arguin, Jean-Francois; Arik, Engin; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnault, Christian; Artamonov, Andrei; Artoni, Giacomo; Arutinov, David; Asai, Shoji; Asfandiyarov, Ruslan; Ask, Stefan; Å sman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astbury, Alan; Astvatsatourov, Anatoli; Aubert, Bernard; 
Auge, Etienne; Augsten, Kamil; Aurousseau, Mathieu; Avolio, Giuseppe; Avramidou, Rachel Maria; Axen, David; Ay, Cano; Azuelos, Georges; Azuma, Yuya; Baak, Max; Baccaglioni, Giuseppe; Bacci, Cesare; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Badescu, Elisabeta; Bagnaia, Paolo; Bahinipati, Seema; Bai, Yu; Bailey, David; Bain, Travis; Baines, John; Baker, Oliver Keith; Baker, Mark; Baker, Sarah; Banas, Elzbieta; Banerjee, Piyali; Banerjee, Swagato; Banfi, Danilo; Bangert, Andrea Michelle; Bansal, Vikas; Bansil, Hardeep Singh; Barak, Liron; Baranov, Sergei; Barashkou, Andrei; Barbaro Galtieri, Angela; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Bardin, Dmitri; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Barrillon, Pierre; Bartoldus, Rainer; Barton, Adam Edward; Bartsch, Valeria; Bates, Richard; Batkova, Lucia; Batley, Richard; Battaglia, Andreas; Battistin, Michele; Bauer, Florian; Bawa, Harinder Singh; Beale, Steven; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Bechtle, Philip; Beck, Hans Peter; Becker, Sebastian; Beckingham, Matthew; Becks, Karl-Heinz; Beddall, Andrew; Beddall, Ayda; Bedikian, Sourpouhi; Bednyakov, Vadim; Bee, Christopher; Begel, Michael; Behar Harpaz, Silvia; Behera, Prafulla; Beimforde, Michael; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; Bellina, Francesco; Bellomo, Massimiliano; Belloni, Alberto; Beloborodova, Olga; Belotskiy, Konstantin; Beltramello, Olga; Ben Ami, Sagi; Benary, Odette; Benchekroun, Driss; Benchouk, Chafik; Bendel, Markus; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez Garcia, Jorge-Armando; Benjamin, Douglas; Benoit, Mathieu; Bensinger, James; Benslama, Kamal; Bentvelsen, Stan; Berge, David; Bergeaas 
Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernat, Pauline; Bernhard, Ralf; Bernius, Catrin; Berry, Tracey; Bertella, Claudia; Bertin, Antonio; Bertinelli, Francesco; Bertolucci, Federico; Besana, Maria Ilaria; Besson, Nathalie; Bethke, Siegfried; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianco, Michele; Biebel, Otmar; Bieniek, Stephen Paul; Bierwagen, Katharina; Biesiada, Jed; Biglietti, Michela; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biscarat, Catherine; Bitenc, Urban; Black, Kevin; Blair, Robert; Blanchard, Jean-Baptiste; Blanchot, Georges; Blazek, Tomas; Blocker, Craig; Blocki, Jacek; Blondel, Alain; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Boddy, Christopher Richard; Boehler, Michael; Boek, Jennifer; Boelaert, Nele; Bogaerts, Joannes Andreas; Bogdanchikov, Alexander; Bogouch, Andrei; Bohm, Christian; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Bolnet, Nayanka Myriam; Bona, Marcella; Bondarenko, Valery; Bondioli, Mario; Boonekamp, Maarten; Booth, Chris; Bordoni, Stefania; Borer, Claudia; Borisov, Anatoly; Borissov, Guennadi; Borjanovic, Iris; Borri, Marcello; Borroni, Sara; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Botterill, David; Bouchami, Jihene; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Bousson, Nicolas; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozhko, Nikolay; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Braem, André; Branchini, Paolo; Brandenburg, George; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brelier, Bertrand; Bremer, Johan; Brenner, Richard; Bressler, Shikma; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brodbeck, Timothy; Brodet, Eyal; Broggi, Francesco; Bromberg, Carl; Bronner, Johanna; Brooijmans, 
Gustaaf; Brooks, William; Brown, Gareth; Brown, Heather; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Buanes, Trygve; Buat, Quentin; Bucci, Francesca; Buchanan, James; Buchanan, Norman; Buchholz, Peter; Buckingham, Ryan; Buckley, Andrew; Buda, Stelian Ioan; Budagov, Ioulian; Budick, Burton; Büscher, Volker; Bugge, Lars; Bulekov, Oleg; Bunse, Moritz; Buran, Torleiv; Burckhart, Helfried; Burdin, Sergey; Burgard, Carsten Daniel; Burgess, Thomas; Burke, Stephen; Busato, Emmanuel; Bussey, Peter; Buszello, Claus-Peter; Butin, François; Butler, Bart; Butler, John; Buttar, Craig; Butterworth, Jonathan; Buttinger, William; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Caloi, Rita; Calvet, David; Calvet, Samuel; Camacho Toro, Reina; Camarri, Paolo; Cambiaghi, Mario; Cameron, David; Caminada, Lea Michaela; Campana, Simone; Campanelli, Mario; Canale, Vincenzo; Canelli, Florencia; Canepa, Anadi; Cantero, Josu; Capasso, Luciano; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capriotti, Daniele; Capua, Marcella; Caputo, Regina; Caramarcu, Costin; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Bryan; Caron, Sascha; Carrillo Montoya, German D; Carter, Antony; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Cascella, Michele; Caso, Carlo; Castaneda Hernandez, Alfredo Martin; Castaneda-Miranda, Elizabeth; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Cataldi, Gabriella; Cataneo, Fernando; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cauz, Diego; Cavalleri, Pietro; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerqueira, Augusto Santiago; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cetin, Serkant Ali; Cevenini, Francesco; Chafaq, Aziz; 
Chakraborty, Dhiman; Chan, Kevin; Chapleau, Bertrand; Chapman, John Derek; Chapman, John Wehrley; Chareyre, Eve; Charlton, Dave; Chavda, Vikash; Chavez Barajas, Carlos Alberto; Cheatham, Susan; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Shenjian; Chen, Tingyang; Chen, Xin; Cheng, Shaochen; Cheplakov, Alexander; Chepurnov, Vladimir; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Cheung, Sing-Leung; Chevalier, Laurent; Chiefari, Giovanni; Chikovani, Leila; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chisholm, Andrew; Chizhov, Mihail; Choudalakis, Georgios; Chouridou, Sofia; Christidi, Illectra-Athanasia; Christov, Asen; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Ciapetti, Guido; Ciba, Krzysztof; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciobotaru, Matei Dan; Ciocca, Claudia; Ciocio, Alessandra; Cirilli, Manuela; Citterio, Mauro; Ciubancan, Mihai; Clark, Allan G; Clark, Philip James; Cleland, Bill; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Clifft, Roger; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coe, Paul; Cogan, Joshua Godfrey; Coggeshall, James; Cogneras, Eric; Colas, Jacques; Colijn, Auke-Pieter; Collard, Caroline; Collins, Neil; Collins-Tooth, Christopher; Collot, Johann; Colon, German; Conde Muiño, Patricia; Coniavitis, Elias; Conidi, Maria Chiara; Consonni, Michele; Consorti, Valerio; Constantinescu, Serban; Conta, Claudio; Conventi, Francesco; Cook, James; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Costin, Tudor; Côté, David; Coura Torres, Rodrigo; Courneyea, Lorraine; Cowan, Glen; Cowden, Christopher; Cox, Brian; Cranmer, Kyle; Crescioli, Francesco; Cristinziani, Markus; Crosetti, Giovanni; 
Crupi, Roberto; Crépé-Renaudin, Sabine; Cuciuc, Constantin-Mihai; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Curatolo, Maria; Curtis, Chris; Cuthbert, Cameron; Cwetanski, Peter; Czirr, Hendrik; Czodrowski, Patrick; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Silva, Paulo Vitor; Da Via, Cinzia; Dabrowski, Wladyslaw; Dai, Tiesheng; Dallapiccola, Carlo; Dam, Mogens; Dameri, Mauro; Damiani, Daniel; Danielsson, Hans Olof; Dannheim, Dominik; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Davey, Will; Davidek, Tomas; Davidson, Nadia; Davidson, Ruth; Davies, Eleanor; Davies, Merlin; Davison, Adam; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Dawson, John; Daya, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Castro Faria Salgado, Pedro; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De La Taille, Christophe; De la Torre, Hector; De Lotto, Barbara; de Mora, Lee; De Nooij, Lucie; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dean, Simon; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dedovich, Dmitri; Degenhardt, James; Dehchar, Mohamed; Del Papa, Carlo; Del Peso, Jose; Del Prete, Tarcisio; Delemontex, Thomas; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delruelle, Nicolas; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demirkoz, Bilge; Deng, Jianrong; Denisov, Sergey; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Devetak, Erik; Deviveiros, Pier-Olivier; Dewhurst, Alastair; DeWilde, Burton; Dhaliwal, Saminder; Dhullipudi, Ramasudhakar; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Luise, Silvestro; Di Mattia, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Diaz, 
Marco Aurelio; Diblen, Faruk; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dindar Yagci, Kamile; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; do Vale, Maria Aline Barros; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobbs, Matt; Dobinson, Robert; Dobos, Daniel; Dobson, Ellie; Dobson, Marc; Dodd, Jeremy; Doglioni, Caterina; Doherty, Tom; Doi, Yoshikuni; Dolejsi, Jiri; Dolenc, Irena; Dolezal, Zdenek; Dolgoshein, Boris; Dohmae, Takeshi; Donadelli, Marisilvia; Donega, Mauro; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dos Anjos, Andre; Dosil, Mireia; Dotti, Andrea; Dova, Maria-Teresa; Dowell, John; Doxiadis, Alexander; Doyle, Tony; Drasal, Zbynek; Drees, Jürgen; Dressnandt, Nandor; Drevermann, Hans; Driouichi, Chafik; Dris, Manolis; Dubbert, Jörg; Dube, Sourabh; Duchovni, Ehud; Duckeck, Guenter; Dudarev, Alexey; Dudziak, Fanny; Dührssen, Michael; Duerdoth, Ian; Duflot, Laurent; Dufour, Marc-Andre; Dunford, Monica; Duran Yildiz, Hatice; Duxfield, Robert; Dwuznik, Michal; Dydak, Friedrich; Düren, Michael; Ebenstein, William; Ebke, Johannes; Eckweiler, Sebastian; Edmonds, Keith; Edwards, Clive; Edwards, Nicholas Charles; Ehrenfeld, Wolfgang; Ehrich, Thies; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Eisenhandler, Eric; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Katherine; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Engelmann, Roderich; Engl, Albert; Epp, Brigitte; Eppig, Andrew; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Escobar, Carlos; Espinal Curull, Xavier; Esposito, Bellisario; Etienne, Francois; Etienvre, Anne-Isabelle; Etzion, Erez; Evangelakou, Despoina; Evans, Hal; Fabbri, Laura; Fabre, Caroline; Fakhrutdinov, Rinat; Falciano, Speranza; Fang, Yaquan; Fanti, Marcello; 
Farbin, Amir; Farilla, Addolorata; Farley, Jason; Farooque, Trisha; Farrington, Sinead; Farthouat, Philippe; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Favareto, Andrea; Fayard, Louis; Fazio, Salvatore; Febbraro, Renato; Federic, Pavol; Fedin, Oleg; Fedorko, Woiciech; Fehling-Kaschek, Mirjam; Feligioni, Lorenzo; Fellmann, Denis; Feng, Cunfeng; Feng, Eric; Fenyuk, Alexander; Ferencei, Jozef; Ferland, Jonathan; Fernando, Waruna; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrer, Maria Lorenza; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filippas, Anastasios; Filthaut, Frank; Fincke-Keeler, Margret; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Gordon; Fischer, Peter; Fisher, Matthew; Flechl, Martin; Fleck, Ivor; Fleckner, Johanna; Fleischmann, Philipp; Fleischmann, Sebastian; Flick, Tobias; Floderus, Anders; Flores Castillo, Luis; Flowerdew, Michael; Fokitis, Manolis; Fonseca Martin, Teresa; Forbush, David Alan; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Foster, Joe; Fournier, Daniel; Foussat, Arnaud; Fowler, Andrew; Fowler, Ken; Fox, Harald; Francavilla, Paolo; Franchino, Silvia; Francis, David; Frank, Tal; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; French, Sky; Friedrich, Felix; Froeschl, Robert; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Gallas, Elizabeth; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Gan, KK; Gao, Yongsheng; Gapienko, Vladimir; Gaponenko, Andrei; Garberson, Ford; Garcia-Sciveres, Maurice; García, Carmen; García Navarro, José Enrique; Gardner, Robert; Garelli, Nicoletta; Garitaonandia, Hegoi; Garonne, Vincent; Garvey, 
John; Gatti, Claudio; Gaudio, Gabriella; Gaur, Bakul; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gayde, Jean-Christophe; Gazis, Evangelos; Ge, Peng; Gee, Norman; Geerts, Daniël Alphonsus Adrianus; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Gemmell, Alistair; Genest, Marie-Hélène; Gentile, Simonetta; George, Matthias; George, Simon; Gerlach, Peter; Gershon, Avi; Geweniger, Christoph; Ghazlane, Hamid; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giakoumopoulou, Victoria; Giangiobbe, Vincent; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Adam; Gibson, Stephen; Gilbert, Laura; Gilewsky, Valentin; Gillberg, Dag; Gillman, Tony; Gingrich, Douglas; Ginzburg, Jonatan; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Giugni, Danilo; Giunta, Michele; Giusti, Paolo; Gjelsten, Bø rge Kile; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glazov, Alexandre; Glitza, Karl-Walter; Glonti, George; Goddard, Jack Robert; Godfrey, Jennifer; Godlewski, Jan; Goebel, Martin; Göpfert, Thomas; Goeringer, Christian; Gössling, Claus; Göttfert, Tobias; Goldfarb, Steven; Golling, Tobias; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; Gonidec, Allain; Gonzalez, Saul; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goodson, Jeremiah Jet; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorfine, Grant; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Gorokhov, Serguei; Goryachev, Vladimir; Gosdzik, Bjoern; Gosselink, Martijn; Gostkin, Mikhail Ivanovitch; Gough Eschrich, Ivo; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Gozpinar, Serdar; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Grancagnolo, Francesco; Grancagnolo, Sergio; Grassi, 
Valerio; Gratchev, Vadim; Grau, Nathan; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Grebenyuk, Oleg; Greenshaw, Timothy; Greenwood, Zeno Dixon; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Griffiths, Justin; Grigalashvili, Nugzar; Grillo, Alexander; Grinstein, Sebastian; Grishkevich, Yaroslav; Grivaz, Jean-Francois; Groh, Manfred; Gross, Eilam; Grosse-Knetter, Joern; Groth-Jensen, Jacob; Grybel, Kai; Guarino, Victor; Guest, Daniel; Guicheney, Christophe; Guida, Angelo; Guindon, Stefan; Guler, Hulya; Gunther, Jaroslav; Guo, Bin; Guo, Jun; Gupta, Ambreesh; Gusakov, Yury; Gushchin, Vladimir; Gutierrez, Phillip; Guttman, Nir; Gutzwiller, Olivier; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hackenburg, Robert; Hadavand, Haleh Khani; Hadley, David; Haefner, Petra; Hahn, Ferdinand; Haider, Stefan; Hajduk, Zbigniew; Hakobyan, Hrachya; Hall, David; Haller, Johannes; Hamacher, Klaus; Hamal, Petr; Hamer, Matthias; Hamilton, Andrew; Hamilton, Samuel; Han, Hongguang; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Handel, Carsten; Hanke, Paul; Hansen, John Renner; Hansen, Jø rgen Beck; Hansen, Jorn Dines; Hansen, Peter Henrik; Hansson, Per; Hara, Kazuhiko; Hare, Gabriel; Harenberg, Torsten; Harkusha, Siarhei; Harper, Devin; Harrington, Robert; Harris, Orin; Harrison, Karl; Hartert, Jochen; Hartjes, Fred; Haruyama, Tomiyoshi; Harvey, Alex; Hasegawa, Satoshi; Hasegawa, Yoji; Hassani, Samira; Hatch, Mark; Hauff, Dieter; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawes, Brian; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Anthony David; Hawkins, Donovan; Hayakawa, Takashi; Hayashi, Takayasu; Hayden, Daniel; Hayward, Helen; Haywood, Stephen; Hazen, Eric; He, Mao; Head, Simon; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heinemann, Beate; Heisterkamp, Simon; Helary, Louis; Heller, Claudio; Heller, Matthieu; Hellman, Sten; Hellmich, Dennis; Helsens, Clement; Henderson, 
Robert; Henke, Michael; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Henry-Couannier, Frédéric; Hensel, Carsten; Henß, Tobias; Hernandez, Carlos Medina; Hernández Jiménez, Yesenia; Herrberg, Ruth; Hershenhorn, Alon David; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Higón-Rodriguez, Emilio; Hill, Daniel; Hill, John; Hill, Norman; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirsch, Florian; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hohlfeld, Marc; Holder, Martin; Holmgren, Sven-Olof; Holy, Tomas; Holzbauer, Jenny; Homma, Yasuhiro; Hong, Tae Min; Hooft van Huysduynen, Loek; Horazdovsky, Tomas; Horn, Claus; Horner, Stephan; Hostachy, Jean-Yves; Hou, Suen; Houlden, Michael; Hoummada, Abdeslam; Howarth, James; Howell, David; Hristova, Ivana; Hrivnac, Julius; Hruska, Ivan; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Huang, Guang Shun; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huettmann, Antje; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Hughes-Jones, Richard; Huhtinen, Mika; Hurst, Peter; Hurwitz, Martina; Husemann, Ulrich; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibbotson, Michael; Ibragimov, Iskander; Ichimiya, Ryo; Iconomidou-Fayard, Lydia; Idarraga, John; Iengo, Paolo; Igonkina, Olga; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Yuri; Iliadis, Dimitrios; Ilic, Nikolina; Imori, Masatoshi; Ince, Tayfun; Inigo-Golfin, Joaquin; Ioannou, Pavlos; Iodice, Mauro; Ippolito, Valerio; Irles Quiles, Adrian; Isaksson, Charlie; Ishikawa, Akimasa; Ishino, Masaya; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Paul; Jaekel, Martin; Jain, Vivek; 
Jakobs, Karl; Jakobsen, Sune; Jakubek, Jan; Jana, Dilip; Jankowski, Ernest; Jansen, Eric; Jansen, Hendrik; Jantsch, Andreas; Janus, Michel; Jarlskog, Göran; Jeanty, Laura; Jelen, Kazimierz; Jen-La Plante, Imai; Jenni, Peter; Jeremie, Andrea; Jež, Pavel; Jézéquel, Stéphane; Jha, Manoj Kumar; Ji, Haoshuang; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Ge; Jin, Shan; Jinnouchi, Osamu; Joergensen, Morten Dam; Joffe, David; Johansen, Lars; Johansen, Marianne; Johansson, Erik; Johansson, Per; Johnert, Sebastian; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tegid; Jones, Tim; Jonsson, Ove; Joram, Christian; Jorge, Pedro; Joseph, John; Jovicevic, Jelena; Jovin, Tatjana; Ju, Xiangyang; Jung, Christian; Jungst, Ralph Markus; Juranek, Vojtech; Jussel, Patrick; Juste Rozas, Aurelio; Kabachenko, Vasily; Kabana, Sonja; Kaci, Mohammed; Kaczmarska, Anna; Kadlecik, Peter; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kaiser, Steffen; Kajomovitz, Enrique; Kalinin, Sergey; Kalinovskaya, Lidia; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kaneti, Steven; Kanno, Takayuki; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kaplon, Jan; Kar, Deepak; Karagoz, Muge; Karnevskiy, Mikhail; Karr, Kristo; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasieczka, Gregor; Kasmi, Azzedine; Kass, Richard; Kastanas, Alex; Kataoka, Mayuko; Kataoka, Yousuke; Katsoufis, Elias; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kayl, Manuel; Kazanin, Vassili; Kazarinov, Makhail; Keeler, Richard; Kehoe, Robert; Keil, Markus; Kekelidze, George; Kennedy, John; Kenney, Christopher John; Kenyon, Mike; Kepka, Oldrich; Kerschen, Nicolas; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Keung, Justin; Khakzad, Mohsen; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Kholodenko, Anatoli; Khomich, Andrei; Khoo, Teng Jian; Khoriauli, 
Gia; Khoroshilov, Andrey; Khovanskiy, Nikolai; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hyeon Jin; Kim, Min Suk; Kim, Shinhong; Kimura, Naoki; Kind, Oliver; King, Barry; King, Matthew; King, Robert Steven Beaufoy; Kirk, Julie; Kirsch, Lawrence; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kittelmann, Thomas; Kiver, Andrey; Kladiva, Eduard; Klaiber-Lodewigs, Jonas; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klemetti, Miika; Klier, Amit; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klinger, Joel Alexander; Klinkby, Esben; Klioutchnikova, Tatiana; Klok, Peter; Klous, Sander; Kluge, Eike-Erik; Kluge, Thomas; Kluit, Peter; Kluth, Stefan; Knecht, Neil; Kneringer, Emmerich; Knobloch, Juergen; Knoops, Edith; Knue, Andrea; Ko, Byeong Rok; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Köneke, Karsten; König, Adriaan; Koenig, Sebastian; Köpke, Lutz; Koetsveld, Folkert; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kogan, Lucy Anne; Kohn, Fabian; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kokott, Thomas; Kolachev, Guennady; Kolanoski, Hermann; Kolesnikov, Vladimir; Koletsou, Iro; Koll, James; Kollefrath, Michael; Kolya, Scott; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Kono, Takanori; Kononov, Anatoly; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kootz, Andreas; Koperny, Stefan; Korcyl, Krzysztof; Kordas, Kostantinos; Koreshev, Victor; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostyukhin, Vadim; Kotamäki, Miikka Juhani; Kotov, Sergey; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, James; Kraus, Jana; Kreisel, Arik; Krejci, Frantisek; Kretzschmar, Jan; Krieger, Nina; Krieger, Peter; 
Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Kruker, Tobias; Krumnack, Nils; Krumshteyn, Zinovii; Kruth, Andre; Kubota, Takashi; Kuday, Sinan; Kuehn, Susanne; Kugel, Andreas; Kuhl, Thorsten; Kuhn, Dietmar; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kummer, Christian; Kuna, Marine; Kundu, Nikhil; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurata, Masakazu; Kurochkin, Yurii; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; Kwee, Regina; La Rosa, Alessandro; La Rotonda, Laura; Labarga, Luis; Labbe, Julien; Lablak, Said; Lacasta, Carlos; Lacava, Francesco; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Laisne, Emmanuel; Lamanna, Massimo; Lampen, Caleb; Lampl, Walter; Lancon, Eric; Landgraf, Ulrich; Landon, Murrough; Lane, Jenna; Lange, Clemens; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larionov, Anatoly; Larner, Aimee; Lasseur, Christian; Lassnig, Mario; Laurelli, Paolo; Lavorini, Vincenzo; Lavrijsen, Wim; Laycock, Paul; Lazarev, Alexandre; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Maner, Christophe; Le Menedeu, Eve; Lebel, Céline; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Michel; Legendre, Marie; Leger, Annie; LeGeyt, Benjamin; Legger, Federica; Leggett, Charles; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Leltchouk, Mikhail; Lemmer, Boris; Lendermann, Victor; Leney, Katharine; Lenz, Tatiana; Lenzen, Georg; Lenzi, Bruno; Leonhardt, Kathrin; Leontsinis, Stefanos; Leroy, Claude; Lessard, Jean-Raphael; Lesser, Jonas; Lester, Christopher; Leung Fook Cheong, Annabelle; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; 
Levitski, Mikhail; Lewis, Adrian; Lewis, George; Leyko, Agnieszka; Leyton, Michael; Li, Bo; Li, Haifeng; Li, Shu; Li, Xuefei; Liang, Zhijun; Liao, Hongbo; Liberti, Barbara; Lichard, Peter; Lichtnecker, Markus; Lie, Ki; Liebig, Wolfgang; Lifshitz, Ronen; Lilley, Joseph; Limbach, Christian; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linde, Frank; Linnemann, James; Lipeles, Elliot; Lipinsky, Lukas; Lipniacka, Anna; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Chuanlei; Liu, Dong; Liu, Hao; Liu, Jianbei; Liu, Minghui; Liu, Yanwen; Livan, Michele; Livermore, Sarah; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loddenkoetter, Thomas; Loebinger, Fred; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Loken, James; Lombardo, Vincenzo Paolo; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Loscutoff, Peter; Lo Sterzo, Francesco; Losty, Michael; Lou, Xinchou; Lounis, Abdenour; Loureiro, Karina; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Ludwig, Andreas; Ludwig, Dörthe; Ludwig, Inga; Ludwig, Jens; Luehring, Frederick; Luijckx, Guy; Lumb, Debra; Luminari, Lamberto; Lund, Esben; Lund-Jensen, Bengt; Lundberg, Björn; Lundberg, Johan; Lundquist, Johan; Lungwitz, Matthias; Lutz, Gerhard; Lynn, David; Lys, Jeremy; Lytken, Else; Ma, Hong; Ma, Lian Liang; Macana Goia, Jorge Andres; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Mackeprang, Rasmus; Madaras, Ronald; Mader, Wolfgang; Maenner, Reinhard; Maeno, Tadashi; Mättig, Peter; Mättig, Stefan; Magnoni, Luca; Magradze, Erekle; Mahalalel, Yair; Mahboubi, Kambiz; Mahout, Gilles; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Mal, Prolay; Malaescu, Bogdan; Malecki, Pawel; Malecki, Piotr; Maleev, Victor; 
Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mameghani, Raphael; Mamuzic, Judita; Manabe, Atsushi; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Mangeard, Pierre-Simon; Manhaes de Andrade Filho, Luciano; Manjavidze, Ioseb; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Manz, Andreas; Mapelli, Alessandro; Mapelli, Livio; March, Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marin, Alexandru; Marino, Christopher; Marroquim, Fernando; Marshall, Robin; Marshall, Zach; Martens, Kalen; Marti-Garcia, Salvador; Martin, Andrew; Martin, Brian; Martin, Brian; Martin, Franck Francois; Martin, Jean-Pierre; Martin, Philippe; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martin-Haugh, Stewart; Martinez, Mario; Martinez Outschoorn, Verena; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massaro, Graziano; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mathes, Markus; Matricon, Pierre; Matsumoto, Hiroshi; Matsunaga, Hiroyuki; Matsushita, Takashi; Mattravers, Carly; Maugain, Jean-Marie; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; May, Edward; Mayne, Anna; Mazini, Rachid; Mazur, Michael; Mazzanti, Marcello; Mazzoni, Enrico; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McCubbin, Norman; McFarlane, Kenneth; Mcfayden, Josh; McGlone, Helen; Mchedlidze, Gvantsa; McLaren, Robert Andrew; Mclaughlan, Tom; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Mechtel, Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehdiyev, Rashid; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Mendoza Navas, Luis; 
Meng, Zhaoxia; Mengarelli, Alberto; Menke, Sven; Menot, Claude; Meoni, Evelin; Mercurio, Kevin Michael; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Merritt, Hayes; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Meyer, W Thomas; Miao, Jiayuan; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Miller, Robert; Mills, Bill; Mills, Corrinne; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano Moya, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Miralles Verge, Lluis; Misiejuk, Andrzej; Mitrevski, Jovan; Mitrofanov, Gennady; Mitsou, Vasiliki A; Mitsui, Shingo; Miyagawa, Paul; Miyazaki, Kazuki; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mockett, Paul; Moed, Shulamit; Moeller, Victoria; Mönig, Klaus; Möser, Nicolas; Mohapatra, Soumya; Mohr, Wolfgang; Mohrdieck-Möck, Susanne; Moisseev, Artemy; Moles-Valls, Regina; Molina-Perez, Jorge; Monk, James; Monnier, Emmanuel; Montesano, Simone; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Moorhead, Gareth; Mora Herrera, Clemencia; Moraes, Arthur; Morange, Nicolas; Morel, Julien; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morgenstern, Marcus; Morii, Masahiro; Morin, Jerome; Morley, Anthony Keith; Mornacchi, Giuseppe; Morozov, Sergey; Morris, John; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Mudrinic, Mihajlo; Mueller, Felix; Mueller, James; Mueller, Klemens; Müller, Thomas; Mueller, Timo; Muenstermann, Daniel; Muir, Alex; Munwes, Yonathan; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Nadal, Jordi; Nagai, Koichi; Nagano, Kunihiro; Nagarkar, Advait; 
Nagasaka, Yasushi; Nagel, Martin; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Nanava, Gizo; Napier, Austin; Narayan, Rohin; Nash, Michael; Nation, Nigel; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Neal, Homer; Nebot, Eduardo; Nechaeva, Polina; Neep, Thomas James; Negri, Andrea; Negri, Guido; Nektarijevic, Snezana; Nelson, Andrew; Nelson, Silke; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen Thi Hong, Van; Nickerson, Richard; Nicolaidou, Rosy; Nicolas, Ludovic; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Niinikoski, Tapio; Nikiforou, Nikiforos; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolaev, Kirill; Nikolic-Audit, Irena; Nikolics, Katalin; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nishiyama, Tomonori; Nisius, Richard; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nordberg, Markus; Nordkvist, Bjoern; Norton, Peter; Novakova, Jana; Nozaki, Mitsuaki; Nozka, Libor; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; O'Brien, Brendan Joseph; O'Neale, Steve; O'Neil, Dugan; O'Shea, Val; Oakes, Louise Beth; Oakham, Gerald; Oberlack, Horst; Ocariz, Jose; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Ohsugi, Takashi; Okada, Shogo; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Olcese, Marco; Olchevski, Alexander; Olivares Pino, Sebastian Andres; Oliveira, Miguel Alfonso; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Omachi, Chihiro; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlov, Iliya; Oropeza 
Barrera, Cristina; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Otero y Garzon, Gustavo; Ottersbach, John; Ouchrif, Mohamed; Ouellette, Eric; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Ovcharova, Ana; Owen, Mark; Owen, Simon; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagan Griso, Simone; Paganis, Efstathios; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Paleari, Chiara; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panes, Boris; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Panuskova, Monika; Paolone, Vittorio; Papadelis, Aras; Papadopoulou, Theodora; Paramonov, Alexander; Park, Woochun; Parker, Andy; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passaggio, Stefano; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pecsy, Martin; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Peng, Haiping; Pengo, Ruggero; Penning, Bjoern; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Perez Cavalcanti, Tiago; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Perrodo, Pascal; Persembe, Seda; Perus, Antoine; Peshekhonov, Vladimir; Peters, Krisztian; Petersen, Brian; Petersen, Jorgen; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Peter William; Piacquadio, Giacinto; Piccaro, Elisa; Piccinini, Maurizio; Piec, Sebastian Marcin; Piegaia, Ricardo; Pignotti, David; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Ping, Jialun; Pinto, Belmiro; Pirotte, Olivier; Pizio, Caterina; Placakyte, Ringaile; 
Plamondon, Mathieu; Pleier, Marc-Andre; Pleskach, Anatoly; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poggioli, Luc; Poghosyan, Tatevik; Pohl, Martin; Polci, Francesco; Polesello, Giacomo; Policicchio, Antonio; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomarede, Daniel Marc; Pomeroy, Daniel; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Portell Bueso, Xavier; Posch, Christoph; Pospelov, Guennady; Pospisil, Stanislav; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Pozdnyakov, Valery; Prabhu, Robindra; Pralavorio, Pascal; Pranko, Aliaksandr; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Pretzl, Klaus Peter; Pribyl, Lukas; Price, Darren; Price, Joe; Price, Lawrence; Price, Michael John; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przybycien, Mariusz; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Pueschel, Elisa; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qian, Jianming; Qian, Zuxuan; Qin, Zhonghua; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Radescu, Voica; Radics, Balint; Radloff, Peter; Rador, Tonguc; Ragusa, Francesco; Rahal, Ghita; Rahimi, Amir; Rahm, David; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Randle-Conde, Aidan Sean; Randrianarivony, Koloina; Ratoff, Peter; Rauscher, Felix; Rave, Tobias Christian; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reichold, Armin; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Rembser, Christoph; Ren, Zhongliang; Renaud, Adrien; Renkel, Peter; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; Rezvani, Reyhaneh; Richards, Alexander; Richter, Robert; Richter-Was, Elzbieta; Ridel, Melissa; 
Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rivoltella, Giancesare; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robinson, Mary; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodriguez, Diego; Roe, Adam; Roe, Shaun; Røhne, Ole; Rojo, Victoria; Rolli, Simona; Romaniouk, Anatoli; Romano, Marino; Romanov, Victor; Romeo, Gaston; Romero Adam, Elena; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Anthony; Rose, Matthew; Rosenbaum, Gabriel; Rosenberg, Eli; Rosendahl, Peter Lundgaard; Rosenthal, Oliver; Rosselet, Laurent; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Rubinskiy, Igor; Ruckert, Benjamin; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Christian; Rudolph, Gerald; Rühr, Frederik; Ruggieri, Federico; Ruiz-Martinez, Aranzazu; Rumiantsev, Viktor; Rumyantsev, Leonid; Runge, Kay; Rurikova, Zuzana; Rusakovich, Nikolai; Rust, Dave; Rutherfoord, John; Ruwiedel, Christoph; Ruzicka, Pavel; Ryabov, Yury; Ryadovikov, Vasily; Ryan, Patrick; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Rzaeva, Sevda; Saavedra, Aldo; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Björn Hallvard; Sanchez, Arturo; Sanchez Martinez, Victoria; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Tanya; Sandoval, Carlos; Sandstroem, Rikard; Sandvoss, Stephan; Sankey, Dave; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; Santonico, 
Rinaldo; Santos, Helena; Saraiva, João; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Takashi; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Sauvan, Emmanuel; Sauvan, Jean-Baptiste; Savard, Pierre; Savinov, Vladimir; Savu, Dan Octavian; Sawyer, Lee; Saxon, David; Says, Louis-Pierre; Sbarra, Carla; Sbrizzi, Antonio; Scallon, Olivia; Scannicchio, Diana; Scarcella, Mark; Schaarschmidt, Jana; Schacht, Peter; Schäfer, Uli; Schaepe, Steffen; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R~Dean; Schamov, Andrey; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schlereth, James; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitt, Sebastian; Schmitz, Martin; Schöning, André; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schroeder, Christian; Schroer, Nicolai; Schuh, Silvia; Schuler, Georges; Schultens, Martin Johannes; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Jan; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Schwindt, Thomas; Schwoerer, Maud; Scott, Bill; Searcy, Jacob; Sedov, George; Sedykh, Evgeny; Segura, Ester; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Selbach, Karoline Elfriede; Seliverstov, Dmitry; Sellden, Bjoern; Sellers, Graham; Seman, Michal; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Seuster, Rolf; Severini, Horst; Sevior, Martin; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaver, Leif; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shibata, Akira; Shichi, Hideharu; Shimizu, Shima; Shimojima, 
Makoto; Shin, Taeksu; Shiyakova, Maria; Shmeleva, Alevtina; Shochet, Mel; Short, Daniel; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silbert, Ohad; Silva, José; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simmons, Brinick; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sircar, Anirvan; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skinnari, Louise Anastasia; Skottowe, Hugh Philip; Skovpen, Kirill; Skubic, Patrick; Skvorodnev, Nikolai; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Sloper, John erik; Smakhtin, Vladimir; Smart, Ben; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; Snuverink, Jochem; Snyder, Scott; Soares, Mara; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Soni, Nitesh; Sopko, Vit; Sopko, Bruno; Sosebee, Mark; Soualah, Rachik; Soukharev, Andrey; Spagnolo, Stefania; Spanò, Francesco; Spighi, Roberto; Spigo, Giancarlo; Spila, Federico; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St Denis, Richard Dante; Stahlman, Jonathan; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staude, Arnold; Stavina, Pavel; Stavropoulos, Georgios; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stern, Sebastian; Stevenson, Kyle; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; 
Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strang, Michael; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Strong, John; Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Stumer, Iuliu; Stupak, John; Sturm, Philipp; Styles, Nicholas Adam; Soh, Dart-yin; Su, Dong; Subramania, Halasya Siva; Succurro, Antonella; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suita, Koichi; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Yu; Suzuki, Yuta; Svatos, Michal; Sviridov, Yuri; Swedish, Stephen; Sykora, Ivan; Sykora, Tomas; Szeless, Balazs; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tanaka, Yoshito; Tanasijczuk, Andres Jorge; Tani, Kazutoshi; Tannoury, Nancy; Tappern, Geoffrey; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Tayalati, Yahya; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teinturier, Marthe; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; Terashi, Koji; Terron, Juan; Testa, Marianna; Teuscher, Richard; Thadome, Jocelyn; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Thun, Rudolf; Tian, Feng; Tibbetts, Mark James; Tic, Tomáš; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; 
Tipton, Paul; Tique Aires Viegas, Florbela De Jes; Tisserant, Sylvain; Tobias, Jürgen; Toczek, Barbara; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokunaga, Kaoru; Tokushuku, Katsuo; Tollefson, Kirsten; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Guoliang; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torchiani, Ingo; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Trischuk, William; Trivedi, Arjun; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trzebinski, Maciej; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tudorache, Alexandra; Tudorache, Valentina; Tuggle, Joseph; Turala, Michal; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Turra, Ruggero; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Tzanakos, George; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Underwood, David; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valenta, Jan; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; van der Graaf, Harry; van der Kraaij, Erik; Van Der Leeuw, Robin; van der Poel, Egge; van der Ster, Daniel; van Eldik, Niels; van Gemmeren, Peter; van Kesteren, Zdenko; van Vulpen, Ivo; Vanadia, Marco; Vandelli, Wainer; Vandoni, Giovanna; Vaniachine, Alexandre; Vankov, Peter; Vannucci, 
Francois; Varela Rodriguez, Fernando; Vari, Riccardo; Varnes, Erich; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vassilakopoulos, Vassilios; Vazeille, Francois; Vegni, Guido; Veillet, Jean-Jacques; Vellidis, Constantine; Veloso, Filipe; Veness, Raymond; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Virchaux, Marc; Virzi, Joseph; Vitells, Ofer; Viti, Michele; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vladoiu, Dan; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; Volpini, Giovanni; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorobiev, Alexander; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Wolfgang; Wagner, Peter; Wahlen, Helmut; Wakabayashi, Jun; Walbersloh, Jorg; Walch, Shannon; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Wang, Chiho; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Joshua C; Wang, Rui; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Watkins, Peter; Watson, Alan; Watson, Ian; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Marc; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weigell, Philipp; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wen, Mei; Wenaus, Torre; Wendland, Dennis; Wendler, Shanti; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; 
Werner, Matthias; Werner, Per; Werth, Michael; Wessels, Martin; Weydert, Carole; Whalen, Kathleen; Wheeler-Ellis, Sarah Jane; Whitaker, Scott; White, Andrew; White, Martin; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilhelm, Ivan; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Eric; Williams, Hugh; Willis, William; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wong, Wei-Cheng; Wooden, Gemma; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wraight, Kenneth; Wright, Catherine; Wright, Michael; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wunstorf, Renate; Wynne, Benjamin; Xella, Stefania; Xiao, Meng; Xie, Song; Xie, Yigang; Xu, Chao; Xu, Da; Xu, Guofa; Yabsley, Bruce; Yacoob, Sahal; Yamada, Miho; Yamaguchi, Hiroshi; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamanaka, Takashi; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Yi; Yang, Yi; Yang, Zhaoyu; Yanush, Serguei; Yao, Yushu; Yasu, Yoshiji; Ybeles Smit, Gabriel Valentijn; Ye, Jingbo; Ye, Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Young, Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Zabinski, Bartlomiej; Zaets, Vassilli; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zanello, Lucia; Zarzhitsky, Pavel; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zeman, Martin; Zemla, Andrzej; Zendler, Carolin; Zenin, Oleg; Ženiš, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhan, Zhichao; Zhang, 
Dongliang; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zheng, Shuchen; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zieminska, Daria; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Zitoun, Robert; Živković, Lidija; Zmouchko, Viatcheslav; Zobernig, Georg; Zoccoli, Antonio; Zolnierowski, Yves; Zsenei, Andras; zur Nedden, Martin; Zutshi, Vishnu; Zwalinski, Lukasz
2013-01-01
The uncertainty on the calorimeter energy response to jets of particles is derived for the ATLAS experiment at the Large Hadron Collider (LHC). First, the calorimeter response to single isolated charged hadrons is measured and compared to the Monte Carlo simulation using proton-proton collisions at centre-of-mass energies of $\sqrt{s}$ = 900 GeV and 7 TeV collected during 2009 and 2010. Then, using the decay of $K_s$ and $\Lambda$ particles, the calorimeter response to specific types of particles (positively and negatively charged pions, protons, and anti-protons) is measured and compared to the Monte Carlo predictions. Finally, the jet energy scale uncertainty is determined by propagating the response uncertainty for single charged and neutral particles to jets. The response uncertainty is 2-5% for central isolated hadrons and 1-3% for the final calorimeter jet energy scale.
Aad, G.; Abbott, B.; Abdallah, J.; Abdelalim, A. A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Acerbi, E.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Addy, T. N.; Adelman, J.; Aderholz, M.; Adomeit, S.; Adragna, P.; Adye, T.; Aefsky, S.; Aguilar-Saavedra, J. A.; Aharrouche, M.; Ahlen, S. P.; Ahles, F.; Ahmad, A.; Ahsan, M.; Aielli, G.; Akdogan, T.; Åkesson, T. P. A.; Akimoto, G.; Akimov, A. V.; Akiyama, A.; Alam, M. S.; Alam, M. A.; Albert, J.; Albrand, S.; Aleksa, M.; Aleksandrov, I. N.; Alessandria, F.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Aliev, M.; Alimonti, G.; Alison, J.; Aliyev, M.; Allbrooke, B. M. M.; Allport, P. P.; Allwood-Spiers, S. E.; Almond, J.; Aloisio, A.; Alon, R.; Alonso, A.; Alvarez Gonzalez, B.; Alviggi, M. G.; Amako, K.; Amaral, P.; Amelung, C.; Ammosov, V. V.; Amorim, A.; Amorós, G.; Amram, N.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Andrieux, M.-L.; Anduaga, X. S.; Angerami, A.; Anghinolfi, F.; Anisenkov, A.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoun, S.; Aperio Bella, L.; Apolle, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Arce, A. T. H.; Arfaoui, S.; Arguin, J.-F.; Arik, E.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnault, C.; Artamonov, A.; Artoni, G.; Arutinov, D.; Asai, S.; Asfandiyarov, R.; Ask, S.; Åsman, B.; Asquith, L.; Assamagan, K.; Astbury, A.; Astvatsatourov, A.; Aubert, B.; Auge, E.; Augsten, K.; Aurousseau, M.; Avolio, G.; Avramidou, R.; Axen, D.; Ay, C.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Baccaglioni, G.; Bacci, C.; Bach, A. M.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Badescu, E.; Bagnaia, P.; Bahinipati, S.; Bai, Y.; Bailey, D. C.; Bain, T.; Baines, J. T.; Baker, O. K.; Baker, M. D.; Baker, S.; Banas, E.; Banerjee, P.; Banerjee, Sw.; Banfi, D.; Bangert, A.; Bansal, V.; Bansil, H. 
S.; Barak, L.; Baranov, S. P.; Barashkou, A.; Barbaro Galtieri, A.; Barber, T.; Barberio, E. L.; Barberis, D.; Barbero, M.; Bardin, D. Y.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B. M.; Barnett, R. M.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Barrillon, P.; Bartoldus, R.; Barton, A. E.; Bartsch, V.; Bates, R. L.; Batkova, L.; Batley, J. R.; Battaglia, A.; Battistin, M.; Bauer, F.; Bawa, H. S.; Beale, S.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Becker, S.; Beckingham, M.; Becks, K. H.; Beddall, A. J.; Beddall, A.; Bedikian, S.; Bednyakov, V. A.; Bee, C. P.; Begel, M.; Behar Harpaz, S.; Behera, P. K.; Beimforde, M.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellina, F.; Bellomo, M.; Belloni, A.; Beloborodova, O.; Belotskiy, K.; Beltramello, O.; Ben Ami, S.; Benary, O.; Benchekroun, D.; Benchouk, C.; Bendel, M.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez Garcia, J. A.; Benjamin, D. P.; Benoit, M.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Berglund, E.; Beringer, J.; Bernat, P.; Bernhard, R.; Bernius, C.; Berry, T.; Bertella, C.; Bertin, A.; Bertinelli, F.; Bertolucci, F.; Besana, M. I.; Besson, N.; Bethke, S.; Bhimji, W.; Bianchi, R. M.; Bianco, M.; Biebel, O.; Bieniek, S. P.; Bierwagen, K.; Biesiada, J.; Biglietti, M.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biscarat, C.; Bitenc, U.; Black, K. M.; Blair, R. E.; Blanchard, J.-B.; Blanchot, G.; Blazek, T.; Blocker, C.; Blocki, J.; Blondel, A.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bobrovnikov, V. B.; Bocchetta, S. S.; Bocci, A.; Boddy, C. R.; Boehler, M.; Boek, J.; Boelaert, N.; Bogaerts, J. A.; Bogdanchikov, A.; Bogouch, A.; Bohm, C.; Boisvert, V.; Bold, T.; Boldea, V.; Bolnet, N. M.; Bona, M.; Bondarenko, V. G.; Bondioli, M.; Boonekamp, M.; Booth, C. 
N.; Bordoni, S.; Borer, C.; Borisov, A.; Borissov, G.; Borjanovic, I.; Borri, M.; Borroni, S.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Botterill, D.; Bouchami, J.; Boudreau, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozhko, N. I.; Bozovic-Jelisavcic, I.; Bracinik, J.; Braem, A.; Branchini, P.; Brandenburg, G. W.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brelier, B.; Bremer, J.; Brenner, R.; Bressler, S.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brodbeck, T. J.; Brodet, E.; Broggi, F.; Bromberg, C.; Bronner, J.; Brooijmans, G.; Brooks, W. K.; Brown, G.; Brown, H.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.
2013-03-01
The uncertainty on the calorimeter energy response to jets of particles is derived for the ATLAS experiment at the Large Hadron Collider (LHC). First, the calorimeter response to single isolated charged hadrons is measured and compared to the Monte Carlo simulation using proton-proton collisions at centre-of-mass energies of √s = 900 GeV and 7 TeV collected during 2009 and 2010. Then, using the decay of K_S and Λ particles, the calorimeter response to specific types of particles (positively and negatively charged pions, protons, and anti-protons) is measured and compared to the Monte Carlo predictions. Finally, the jet energy scale uncertainty is determined by propagating the response uncertainty for single charged and neutral particles to jets. The response uncertainty is 2-5% for central isolated hadrons and 1-3% for the final calorimeter jet energy scale.
de Hipt, Felix Op; Diekkrüger, Bernd; Steup, Gero; Rode, Michael
2016-04-01
Water-driven soil erosion, transport and deposition take place on different spatial and temporal scales. Therefore, related measurements are complex and require process understanding and a multi-method approach combining different measurement methods with soil erosion modeling. Turbidity as a surrogate measurement for suspended sediment concentration (SSC) in rivers is frequently used to overcome the disadvantages of conventional sediment measurement techniques regarding temporal resolution and continuity. The use of turbidity measurements requires a close correlation between turbidity and SSC. Depending on the number of samples collected, the measured range and the variations in the measurements, SSC-turbidity curves are subject to uncertainty. This uncertainty has to be determined in order to assess the reliability of measurements used to quantify catchment sediment yields and to calibrate soil erosion models. This study presents the calibration results from a sub-humid catchment in south-western Burkina Faso and investigates the related uncertainties. Daily in situ measurements of SSC manually collected at one turbidity station and the corresponding turbidity readings are used to obtain the site-specific calibration curve. The discharge is calculated based on an empirical water level-discharge relationship. The derived regression equations are used to define prediction intervals for SSC and discharge. The uncertainty of the suspended sediment load time series is influenced by the corresponding uncertainties of SSC and discharge. This study shows that the determination of uncertainty is relevant when turbidity-based measurements of suspended sediment loads are used to quantify catchment erosion and to calibrate erosion models.
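The calibration workflow described above (regression of SSC on turbidity, then prediction intervals around the rating curve) can be sketched as follows; the synthetic data, the linear model form, and the t ≈ 2 coverage factor are illustrative assumptions, not the study's actual values:

```python
import math
import random

def fit_ols(x, y):
    """Ordinary least-squares fit y = a + b*x; returns intercept, slope,
    residual standard error, and the quantities needed for intervals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(sse / (n - 2))          # residual standard error
    return a, b, se, mx, sxx

def predict_with_interval(x0, a, b, se, mx, sxx, n, t=2.0):
    """Point prediction and approximate 95% prediction interval at turbidity
    x0; t ~ 2 approximates the Student-t quantile for moderate n."""
    y0 = a + b * x0
    half = t * se * math.sqrt(1 + 1 / n + (x0 - mx) ** 2 / sxx)
    return y0, y0 - half, y0 + half

# Synthetic calibration data: SSC (mg/L) responding linearly to turbidity (NTU)
random.seed(1)
turb = [10 * i for i in range(1, 21)]
ssc = [5.0 + 2.0 * t_ + random.gauss(0, 8) for t_ in turb]

a, b, se, mx, sxx = fit_ols(turb, ssc)
y0, lo, hi = predict_with_interval(100.0, a, b, se, mx, sxx, len(turb))
print(f"SSC at 100 NTU: {y0:.1f} mg/L, 95% PI [{lo:.1f}, {hi:.1f}]")
```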
Energy Technology Data Exchange (ETDEWEB)
Jung-Hyung, Kim; Kwang-Hwa, Chung; Yong-Hyeon, Shin [Korea Research Inst. of Standards and Science, Center for Vacuum Technology, Daejeon (Korea, Republic of)
2005-04-01
We have analysed the uncertainty of a measured electron density using a wave cutoff probe and compared it with that obtained using a double Langmuir probe and plasma oscillation probe. The wave cutoff probe gives an electron density from a measured plasma frequency, using a network analyser and radiating and detecting antennae. It can also measure the spatial distribution of the electron density. The cutoff method is free of many difficulties often encountered with Langmuir probes, such as thin film deposition and plasma potential fluctuation, and the uncertainty of the cutoff probe is not affected by the complex plasma environment. Here, the measurement technique is theoretically analysed and experimentally demonstrated in density measurements of an inductively coupled radio frequency plasma, and a comparison with the double probe and a plasma oscillation method with uncertainty analysis is also made. (authors)
Directory of Open Access Journals (Sweden)
Lee Lok-chun
2010-11-01
Full Text Available Abstract Background The effects of the built environment on walking in seniors have not been studied in an Asian context. To examine these effects, valid and reliable measures are needed. The aim of this study was to develop and validate a questionnaire of perceived neighborhood characteristics related to walking appropriate for Chinese seniors (Neighborhood Environment Walkability Scale for Chinese Seniors, NEWS-CS. It was based on the Neighborhood Environment Walkability Scale - Abbreviated (NEWS-A, a validated measure of perceived built environment developed in the USA for adults. A secondary study aim was to establish the generalizability of the NEWS-A to an Asian high-density urban context and a different age group. Methods A multidisciplinary panel of experts adapted the original NEWS-A to reflect the built environment of Hong Kong and needs of seniors. The translated instrument was pre-tested on a sample of 50 Chinese-speaking senior residents (65+ years. The final version of the NEWS-CS was interviewer-administered to 484 seniors residing in four selected Hong Kong districts varying in walkability and socio-economic status. Ninety-two participants completed the questionnaire on two separate occasions, 2-3 weeks apart. Test-retest reliability indices were estimated for each item and subscale of the NEWS-CS. Confirmatory factor analysis was used to develop the measurement model of the NEWS-CS and cross-validate that of the NEWS-A. Results The final version of the NEWS-CS consisted of 14 subscales and four single items (76 items. Test-retest reliability was moderate to good (ICC > 0.50 or % agreement > 60 except for four items measuring distance to destinations. The originally-proposed measurement models of the NEWS-A and NEWS-CS required 2-3 theoretically-justifiable modifications to fit the data well. Conclusions The NEWS-CS possesses sufficient levels of reliability and factorial validity to be used for measuring perceived neighborhood
Bianco, Federica B; Oh, Seung Man; Fierroz, David; Liu, Yuqian; Kewley, Lisa; Graur, Or
2015-01-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity scales, based on the original IDL code of Kewley & Dopita (2002) with updates from Kewley & Ellison (2008), and expanded to include more recently developed scales. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo (MC) sampling, better characterizes the statistical reddening-corrected oxygen abundance confidence region. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 66% confidence regions. In additi...
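The Monte Carlo propagation described above can be sketched as follows. This is not the pyMCZ code itself: the N2 calibration of Pettini & Pagel (2004) stands in here for the 13 scales the package implements, and the flux values are invented for illustration:

```python
import math
import random
import statistics

def mc_metallicity(f_nii, df_nii, f_ha, df_ha, n=20000, seed=42):
    """Monte Carlo propagation of emission-line flux uncertainties to an
    oxygen abundance, using 12+log(O/H) = 8.90 + 0.57*log10([NII]/Halpha)
    (the N2 scale) as a stand-in diagnostic. Returns the median and the
    16th/84th percentiles of the synthetic distribution."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        nii = rng.gauss(f_nii, df_nii)
        ha = rng.gauss(f_ha, df_ha)
        if nii > 0 and ha > 0:              # discard unphysical negative fluxes
            samples.append(8.90 + 0.57 * math.log10(nii / ha))
    samples.sort()
    med = statistics.median(samples)
    return med, samples[int(0.16 * n)], samples[int(0.84 * n)]

# Hypothetical line fluxes (arbitrary units) with Gaussian uncertainties
med, lo, hi = mc_metallicity(f_nii=1.0, df_nii=0.1, f_ha=10.0, df_ha=0.5)
print(f"12+log(O/H) = {med:.3f} (+{hi - med:.3f}/-{med - lo:.3f})")
```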
Comment on ‘A low-uncertainty measurement of the Boltzmann constant’
Macnaughton, Donald B.
2016-02-01
The International Committee for Weights and Measures has projected a major revision of the International System of Units in which all the base units will be defined by fixing the values of certain fundamental constants of nature. To assist, de Podesta et al recently experimentally obtained a precise new estimate of the Boltzmann constant. This estimate is proposed as a basis for the redefinition of the unit of temperature, the kelvin. The present paper reports a reanalysis of de Podesta et al’s data that reveals systematic non-random patterns in the residuals of the key fitted model equation. These patterns violate the assumptions underlying the analysis and thus they raise questions about the validity of de Podesta et al’s estimate of the Boltzmann constant. An approach is discussed to address these issues, which should lead to an accurate estimate of the Boltzmann constant with a lower uncertainty.
The ellipsoidal nested sampling and the expression of the model uncertainty in measurements
Gervino, Gianpiero; Mana, Giovanni; Palmisano, Carlo
2016-07-01
In this paper, we consider the problems of identifying the most appropriate model for a given physical system and of assessing the model contribution to the measurement uncertainty. The above problems are studied in terms of Bayesian model selection and model averaging. As the evaluation of the “evidence” Z, i.e., the integral of Likelihood × Prior over the space of the measurand and the parameters, becomes impracticable when this space has 20-30 dimensions, it is necessary to consider an appropriate numerical strategy. Among the many algorithms for calculating Z, we have investigated the ellipsoidal nested sampling, which is a technique based on three pillars: the study of the iso-likelihood contour lines of the integrand, a probabilistic estimate of the volume of the parameter space contained within the iso-likelihood contours, and random sampling from hyperellipsoids embedded in the integration variables. This paper lays out the essential ideas of this approach.
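The evidence integral described above can be illustrated with a minimal one-dimensional nested-sampling sketch. Plain rejection sampling inside the likelihood constraint stands in for the ellipsoidal proposals of the actual algorithm, and the Gaussian test problem is my own choice, not the paper's:

```python
import math
import random

def nested_sampling_evidence(loglike, prior_lo, prior_hi, n_live=200, seed=7):
    """Estimate Z = integral of L(theta) * prior(theta) over a uniform prior
    by nested sampling: repeatedly replace the worst live point with a new
    prior draw constrained to higher likelihood, accumulating Z from the
    expected shrinkage of the enclosed prior mass X_i = exp(-i/n_live)."""
    rng = random.Random(seed)
    width = prior_hi - prior_lo
    live = [prior_lo + width * rng.random() for _ in range(n_live)]
    logl = [loglike(t) for t in live]
    z, x_prev = 0.0, 1.0
    for i in range(1, 20 * n_live):
        worst = min(range(n_live), key=lambda j: logl[j])
        l_star = logl[worst]
        x_i = math.exp(-i / n_live)            # expected prior-mass shrinkage
        z += math.exp(l_star) * (x_prev - x_i)
        x_prev = x_i
        while True:                            # rejection step: draw L > L*
            t = prior_lo + width * rng.random()
            if loglike(t) > l_star:
                break
        live[worst], logl[worst] = t, loglike(t)
        if x_i < 1e-3:                         # remaining mass is negligible
            break
    # add the residual contribution of the surviving live points
    z += x_prev * math.fsum(math.exp(l) for l in logl) / n_live
    return z

# Gaussian likelihood, uniform prior on [-5, 5]: Z = sqrt(2*pi)/10 ~ 0.2507
z = nested_sampling_evidence(lambda t: -0.5 * t * t, -5.0, 5.0)
print(f"evidence Z ~ {z:.4f} (analytic ~ 0.2507)")
```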
Hujun He; Yaning Zhao; Xingke Yang; Yaning Gao; Xu Wu
2015-01-01
Analysis showed that safety risk evaluation for CO2 geological storage is of great significance. Aimed at the characteristics of CO2 geological storage safety risk evaluation, and drawing on previous research results, rank and order models for the safety risk evaluation of CO2 geological storage were put forward based on information entropy and uncertainty measure theory. In this model, the uncertainty problems in safety risk evaluation of CO2 geological storage were solved by qualitative anal...
Assessing the impact of measurement frequency on accuracy and uncertainty of water quality data
Helm, Björn; Schiffner, Stefanie; Krebs, Peter
2014-05-01
Physico-chemical water quality is a major objective for the evaluation of the ecological state of a river water body. Physical and chemical water properties are measured to assess the river state, identify prevalent pressures and develop mitigating measures. Regularly, water quality is assessed based on weekly to quarterly grab samples. The increasing availability of online-sensor data measured at a high frequency allows for an enhanced understanding of emission and transport dynamics, as well as the identification of typical and critical states. In this study we present a systematic approach to assess the impact of measurement frequency on the accuracy and uncertainty of derived aggregate indicators of environmental quality. High-frequency measurements (at 10- and 15-min intervals) of water temperature, pH, turbidity, electric conductivity and concentrations of dissolved oxygen, nitrate, ammonia and phosphate are assessed in resampling experiments. The data is collected at 14 sites in eastern and northern Germany representing catchments between 40 km² and 140 000 km² of varying properties. Resampling is performed to create series of hourly to quarterly frequency, including special restrictions like sampling at working hours or discharge compensation. Statistical properties and their confidence intervals are determined in a bootstrapping procedure and evaluated along a gradient of sampling frequency. For all variables the range of the aggregate indicators increases markedly in the bootstrapping realizations with decreasing sampling frequency. Mean values of electric conductivity, pH and water temperature obtained with monthly frequency differ on average less than five percent from the original data. Mean dissolved oxygen, nitrate and phosphate showed less than 15% bias at most stations. Ammonia and turbidity are most sensitive to the reduction in sampling frequency, with up to 30% average and 250% maximum bias at monthly sampling frequency. A systematic bias is recognized
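A minimal version of the resampling experiment might look like this; the synthetic diurnal temperature series and the chosen subsampling steps are assumptions for illustration, not the study's data:

```python
import math
import random
import statistics

def bootstrap_subsample(series, step, n_boot=2000, seed=0):
    """Subsample a high-frequency series every `step` points (mimicking grab
    sampling), then bootstrap the mean of the subsample to obtain a 95%
    confidence interval: a simplified sketch of the procedure above."""
    rng = random.Random(seed)
    sub = series[::step]
    means = sorted(
        statistics.fmean(rng.choice(sub) for _ in sub) for _ in range(n_boot)
    )
    return statistics.fmean(sub), means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]

# Synthetic 15-min water-temperature record with a diurnal cycle (assumed data)
random.seed(3)
hf = [12 + 3 * math.sin(2 * math.pi * i / 96) + random.gauss(0, 0.5)
      for i in range(96 * 30)]                     # 30 days, 96 samples/day

full_mean = statistics.fmean(hf)
for step in (4, 96, 96 * 7):                       # hourly, daily, weekly grabs
    m, lo, hi = bootstrap_subsample(hf, step)
    print(f"every {step:4d} samples: mean={m:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]"
          f" (full-record mean {full_mean:.2f})")
```

Note the widening confidence interval as the sampling interval grows, mirroring the increased indicator range reported above.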
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration uncertainty was verified from independent measurements of the same sample by demonstrating statistical control of analytical results and the absence of bias. The proposed method takes into account uncertainties of the measurement, as well as of the amount of calibrant. It is applicable to all types...
New Measurement Method and Uncertainty Estimation for Plate Dimensions and Surface Quality
Directory of Open Access Journals (Sweden)
Salah H. R. Ali
2013-01-01
Full Text Available Dimensional and surface quality control for plate production faces difficult engineering challenges. One of these challenges is that plates in large-scale mass production contain geometrically uneven surfaces. A traditional measurement method is used to assess tile plate dimensions and surface quality based on the standard specifications ISO 10545-2:1995, EOS 3168-2:2007, and TIS 2398-2:2008. A proposed measurement method for the dimensions and surface quality of ceramic oblong large-scale tile plates has been developed and compared with the traditional method. The strategy of the new method is based on a CMM straightness measurement strategy instead of the centre-point strategy of the traditional method. Expanded uncertainty budgets for the measurements of each method have been estimated in detail. Accurate estimates of the centre of curvature (CC), centre of edge (CE), warpage (W), and edge crack defect parameters have been achieved according to the standards. Moreover, the obtained results not only showed the new method to be more accurate but also improved the quality of plate products significantly.
Zhu, Ming-Liang; Zhang, Qing-Hang; Lupton, Colin; Tong, Jie
2016-04-01
The measurement uncertainty of strains has been assessed in a bone analogue (sawbone), bovine trabecular bone and bone-cement interface specimens under zero load using the Digital Volume Correlation (DVC) method. The effects of sub-volume size, sample constraint and preload on the measured strain uncertainty have been examined. There is generally a trade-off between the measurement uncertainty and the spatial resolution. Suitable sub-volume sizes have been selected based on a compromise between the measurement uncertainty and the spatial resolution of the cases considered. A ratio of sub-volume size to a microstructure characteristic (Tb.Sp) was introduced to reflect a suitable spatial resolution, and the associated measurement uncertainty was assessed. Specifically, ratios between 1.6 and 4 appear to give rise to standard deviations in the measured strains between 166 and 620 με in all the cases considered, which would seem to suffice for strain analysis in pre- as well as post-yield loading regimes. A microscale finite element (μFE) model was built from the CT images of the sawbone, and the results from the μFE model and a continuum FE model were compared with those from the DVC. The strain results were found to differ significantly between the two methods at tissue level, consistent in trend with the results found in human bones, indicating mainly a limitation of the current DVC method in mapping strains at this level.
DEFF Research Database (Denmark)
Müller, Pavel; Hiller, Jochen; Cantatore, Angela;
2012-01-01
Measurement results are compared using different measuring strategies applied in different inspection software packages for volume and surface data analysis. The strategy influence is determined by calculating the measurement uncertainty. This investigation includes measurements of two industrial items, an aluminium pipe connector and a plastic toggle, a hearing aid component. These are measured using a commercial CT scanner. Traceability is transferred using tactile and optical coordinate measuring machines, which are used to produce reference measurements. Results show that measurements of diameter for both parts resulted in smaller systematic errors compared to distance and height measurements. It was found that uncertainties of all measurands evaluated on surface data were generally greater compared to measurements performed on volume data.
Ramsey, Michael H; Geelhoed, Bastiaan; Wood, Roger; Damant, Andrew P
2011-04-01
A realistic estimate of the uncertainty of a measurement result is essential for its reliable interpretation. Recent methods for such estimation include the contribution to uncertainty from the sampling process, but they only include the random and not the systematic effects. Sampling Proficiency Tests (SPTs) have been used previously to assess the performance of samplers, but the results can also be used to evaluate measurement uncertainty, including the systematic effects. A new SPT conducted on the determination of moisture in fresh butter is used to exemplify how SPT results can be used not only to score samplers but also to estimate uncertainty. The comparison between uncertainty evaluated within- and between-samplers is used to demonstrate that sampling bias is causing the estimates of expanded relative uncertainty to rise by over a factor of two (from 0.39% to 0.87%) in this case. General criteria are given for the experimental design and the sampling target that are required to apply this approach to measurements on any material.
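The within- versus between-sampler comparison described above can be illustrated with a one-way ANOVA decomposition; the duplicate moisture values below are invented, and the balanced two-duplicate design is an assumption:

```python
import statistics

def sampler_uncertainty(results):
    """Split measurement spread into within- and between-sampler standard
    deviations via one-way ANOVA, as in a sampling proficiency test.
    `results` maps sampler id -> list of duplicate measurements
    (balanced design assumed)."""
    groups = list(results.values())
    k = len(groups)
    n = len(groups[0])
    grand = statistics.fmean(v for g in groups for v in g)
    ms_within = statistics.fmean(statistics.variance(g) for g in groups)
    ms_between = n * sum((statistics.fmean(g) - grand) ** 2 for g in groups) / (k - 1)
    s_within = ms_within ** 0.5
    s_between = max(0.0, (ms_between - ms_within) / n) ** 0.5
    return s_within, s_between

# Hypothetical moisture-in-butter duplicates (%) from six samplers
data = {
    "A": [16.10, 16.14], "B": [16.22, 16.18], "C": [16.05, 16.09],
    "D": [16.31, 16.27], "E": [16.12, 16.16], "F": [16.24, 16.20],
}
s_w, s_b = sampler_uncertainty(data)
print(f"within-sampler s = {s_w:.3f}%, between-sampler (bias) s = {s_b:.3f}%")
```

A non-zero between-sampler component, as here, is the signature of sampling bias inflating the expanded uncertainty beyond the within-sampler estimate.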
Institute of Scientific and Technical Information of China (English)
Huan Zhang; Xiao-Xi Duan; Chen Zhang; Hao Liu; Hui-Ge Zhang; Quan-Xi Xue; Qing Ye
2016-01-01
One of the most challenging tasks in the laser-driven Hugoniot experiment is how to increase the reproducibility and precision of the experimental data to meet the stringent requirements of validating equation-of-state models. In such cases, the contribution of intrinsic uncertainty becomes important and cannot be ignored. A detailed analysis of the intrinsic uncertainty of the aluminum-iron impedance-match experiment based on the measurement of velocities is presented. The influence of the mirror-reflection approximation on the shocked pressure of Fe and the intrinsic uncertainties arising from the equation-of-state uncertainty of the standard material are quantified. Furthermore, a comparison of the intrinsic uncertainties of four different experimental approaches is presented. It is shown that, compared with other approaches, including the most widely used approach relying on the measurements of the shock velocities of Al and Fe, the approach relying on the measurement of the particle velocity of Al and the shock velocity of Fe has the smallest intrinsic uncertainty, which should significantly improve the diagnostic precision of such experiments.
Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P
2016-03-01
We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was greatly reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. This study provides advancements in the PIV processing methodologies over
Uncertainty Measurement and Visual Analysis on Terroristic Attacks Data
Institute of Scientific and Technical Information of China (English)
贺怀清; 王赫
2012-01-01
In recent years, terrorist activities have occurred more frequently and have seriously affected regional stability and world peace. With the development of information technology, researchers are able to obtain information on terrorist attacks from many sources. However, as the scale of these data sets grows, how to explore the underlying information and analyze the uncertainty contained in large amounts of data has become an important issue in the analysis of terrorist attacks. For the Global Terrorism Database, based on visual analysis and uncertainty measure theory, we propose measurement and visual analysis methods for the uncertainty of data records and attributes. By integrating the results of uncertainty measurement with parallel coordinates, histograms, area charts and interactive methods, the data uncertainty is clearly displayed without affecting the representation of the source data, providing an information base for a subsequent situation assessment based on uncertainty theory.
Lee, Sooyeun; Choi, Hyeyoung; Kim, Eunmi; Choi, Hwakyung; Chung, Heesun; Chung, Kyu Hyuck
2010-05-01
The measurement uncertainty (MU) of methamphetamine (MA) and amphetamine (AP) was estimated in an authentic urine sample with a relatively low concentration of MA and AP using the bottom-up approach. A cause and effect diagram was deduced; the amount of MA or AP in the sample, the volume of the sample, method precision, and sample effect were considered uncertainty sources. The concentrations of MA and AP in the urine sample with their expanded uncertainties were 340.5 +/- 33.2 ng/mL and 113.4 +/- 15.4 ng/mL, corresponding to relative expanded uncertainties of 9.7% and 13.6%, respectively. The largest uncertainty originated from the sample effect for MA and from method precision for AP, but the uncertainty of the sample volume was minimal in both. The MU needs to be determined during the method validation process to assess test reliability. Moreover, the identification of the largest and/or smallest uncertainty source can help improve experimental protocols.
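A bottom-up budget of this kind combines relative standard uncertainties in quadrature and expands the result with a coverage factor k = 2; the component values below are hypothetical, chosen only to mirror the sources named in the abstract:

```python
import math

def combined_uncertainty(value, components):
    """Combine relative standard uncertainty components in quadrature
    (GUM-style bottom-up budget); return the combined standard uncertainty
    and the expanded uncertainty for coverage factor k=2 (~95%)."""
    u_rel = math.sqrt(sum(u ** 2 for u in components.values()))
    u_c = value * u_rel
    return u_c, 2 * u_c

# Illustrative budget for methamphetamine in urine (ng/mL); all relative
# standard uncertainties below are assumed values, not the study's figures
components = {
    "calibration": 0.030,     # calibration-curve contribution
    "precision":   0.025,     # method repeatability
    "volume":      0.003,     # pipetted sample volume
    "sample":      0.020,     # sample (matrix) effect
}
u_c, U = combined_uncertainty(340.5, components)
print(f"MA: 340.5 +/- {U:.1f} ng/mL (k=2, u_c = {u_c:.1f} ng/mL)")
```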
Gruber, Matthew A; Hartogensis, Oscar K
2013-01-01
Scintillometers measure $C_n^2$ over large areas of turbulence in the atmospheric surface layer. Turbulent fluxes of heat and momentum are inferred through coupled sets of equations derived from the Monin-Obukhov similarity hypothesis. One-dimensional sensitivity functions have been produced which relate the sensitivity of heat fluxes to uncertainties in single values of beam height over homogeneous and flat terrain. Real field sites include variable topography and heterogeneous surface properties such as roughness length. We develop here the first analysis of the sensitivity of scintillometer-derived sensible heat fluxes to uncertainties in spatially distributed topographic measurements. For large-aperture scintillometers and independent $u_\star$ measurements, sensitivity is shown to be concentrated in areas near the center of the beam and where the underlying topography is closest to the beam height. Uncertainty may be greatly reduced by focusing precise topographic measurements in these areas. The new two...
Energy Technology Data Exchange (ETDEWEB)
Dias, Fabio C., E-mail: fabio@ird.gov.b [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Almeida, Silvio G. de; Renha Junior, Geraldo, E-mail: silvio@abacc.org.b, E-mail: grenha@abacc.org.b [Agencia Brasileiro-Argentina de Contabilidade e Controle de Materiais Nucleares (ABACC), Rio de Janeiro, RJ (Brazil)
2011-07-01
The International Target Values (ITVs) are reasonable uncertainty estimates that can be used in judging the reliability of measurement techniques applied to industrial nuclear and fissile materials subject to accountancy and/or safeguards verification. In the absence of relevant experimental estimates, ITVs can also be used to select measurement techniques and calculate sample population during the planning phase of verification activities. It is important to note that ITVs represent estimates of the 'state-of-the-practice', which should be achievable under routine measurement conditions affecting both facility operators and safeguards inspectors, not only in the field, but also in laboratory. Tabulated values cover measurement methods used for the determination of volume or mass of the nuclear material, for its elemental and isotopic assays, and for its sampling. The 2010 edition represents the sixth revision of the International Target Values (ITVs), issued by the International Atomic Energy Agency (IAEA) as a Safeguards Technical Report (STR-368). The first version was released as 'Target Values' in 1979 by the Working Group on Techniques and Standards for Destructive Analysis (WGDA) of the European Safeguards Research and Development Association (ESARDA) and focused on destructive analytical methods. In the latest 2010 revision, international standards in estimating and expressing uncertainties have been considered while maintaining a format that allows comparison with the previous editions of the ITVs. Those standards have been usually applied in QC/QA programmes, as well as qualification of methods, techniques and instruments. Representatives of the Brazilian Nuclear Energy Commission (CNEN) and the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials (ABACC) participated in previous Consultants Group Meetings since the one convened to establish the first list of ITVs released in 1993 and in subsequent revisions
Yanez Rausell, L.; Malenovsky, Z.; Clevers, J.G.P.W.; Schaepman, M.E.
2014-01-01
We present uncertainties associated with the measurement of coniferous needle-leaf optical properties (OPs) with an integrating sphere using an optimized gap-fraction (GF) correction method, where GF refers to the air gaps appearing between the needles of a measured sample. We used an optically stab
Samid, Gideon
2010-01-01
Building on Shannon's lead, let's consider a more malleable expression for tracking uncertainty, and states of "knowledge available" vs. "knowledge missing," to better practice innovation, improve risk management, and successfully measure progress of intractable undertakings. Shannon's formula, and its common replacements (Renyi, Tsallis) compute to increased knowledge whenever two competing choices, however marginal, exchange probability measures. Such and other distortions are corrected by ...
Arzoumanian, E.; Paris, J. D.; Pruvost, A.; Peng, S.; Turquety, S.; Berchet, A.; Pison, I.; Helle, J.; Arshinov, M.; Belan, B. D.
2015-12-01
Methane (CH4) is the second most important anthropogenic greenhouse gas. It is also naturally emitted by a number of processes, including microbial activity in wetlands, permafrost degradation and wildfires. Our current understanding of the extent and amplitude of its natural sources, as well as the large-scale driving factors, remains highly uncertain (Kirschke et al., Nature Geosci., 2013). Furthermore, high-latitude regions are large natural sources of CH4 in the atmosphere. Observing boreal/Arctic CH4 variability and understanding its main driving processes using atmospheric measurements and a transport model is the task of this work. YAK-AEROSIB atmospheric airborne campaigns (flights in the tropospheric layer up to 9 km connecting the two cities of Novosibirsk and Yakutsk) and continuous measurements at Fonovaya Observatory (60 km west of Tomsk - 56° 25'07"N, 84° 04'27"E) have been performed in order to provide observational data on the composition of Siberian air. The study is focused on 2012, during which a strong heat wave impacted Siberia, leading to the highest mean daily temperature values on record since the beginning of the 20th century. This abnormal drought led to numerous large forest fires. A chemistry-transport model (CHIMERE), combined with datasets for anthropogenic (EDGAR) emissions and models for wetlands (ORCHIDEE) and wildfires (APIFLAME), is used to determine contributions of CH4 sources in the region. Recent results concerning CH4 fluxes and its atmospheric variability in the Siberian territory derived from a model-based analysis will be shown and discussed. This work was funded by CNRS (France), the French Ministry of Foreign Affairs, CEA (France), Presidium of RAS (Program No. 4), Branch of Geology, Geophysics and Mining Sciences of RAS (Program No. 5), Interdisciplinary integration projects of Siberian Branch of RAS (No. 35, No. 70, No. 131), Russian Foundation for Basic Research (grants No 14-05-00526, 14-05-00590). Kirschke, S
Macarthur, Roy; Feinberg, Max; Bertheau, Yves
2010-01-01
A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods, collected through a number of interlaboratory studies, followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with the European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness-for-purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be at least 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.
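The 1.8% figure above follows directly from the reported uncertainty range: a result demonstrates noncompliance only when the lower edge of its expanded-uncertainty interval still exceeds the labeling threshold. A minimal sketch of that decision rule, where the factor 0.5 encodes the lower edge of the 50-200% expected range:

```python
def min_noncompliant_result(threshold_pct: float, lower_factor: float = 0.5) -> float:
    """Smallest measured GMO content (%) whose lower expanded-uncertainty
    bound still exceeds the labeling threshold.

    lower_factor = 0.5 encodes the 50-200% expected range for measurement
    results reported in the interlaboratory studies.
    """
    return threshold_pct / lower_factor

# EU labeling threshold of 0.9% GMO:
print(min_noncompliant_result(0.9))  # 1.8
```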
Ohnishi, S.; Thornton, B.; Kamada, S.; Hirao, Y.; Ura, T.; Odano, N.
2016-05-01
Factors to convert the count rate of a NaI(Tl) scintillation detector to the concentration of radioactive cesium in marine sediments are estimated for a towed gamma-ray detector system. The response of the detector to a unit concentration of radioactive cesium is calculated by Monte Carlo radiation transport simulation, considering the vertical profile of radioactive material measured in core samples. The conversion factors are obtained by integrating the contribution of each layer and are normalized by the concentration in the surface sediment layer. At the same time, the uncertainty of the conversion factors is formulated and estimated. The combined standard uncertainty of the radioactive cesium concentration from the towed gamma-ray detector is around 25 percent. The values of uncertainty, often referred to as relative root mean square errors in other works, between sediment core sampling measurements and towed detector measurements were 16 percent in the investigation made near the Abukuma River mouth and 5.2 percent in Sendai Bay. Most of the uncertainty is due to interpolation of the conversion factors between core samples and uncertainty in the detector's burial depth. The results of the towed measurements agree well with laboratory-analysed sediment samples. Also, the concentrations of radioactive cesium at the intersections of the survey lines are consistent. The consistency with sampling results and between different lines' transects demonstrates the applicability and reproducibility of the towed gamma-ray detector system.
Qureshi, S.; Ho, C. S.
2014-02-01
According to an IEA report (2011), about 23% of the world's CO2 emissions result from transport, and this is one of the few areas where emissions are still rapidly increasing. The use of private vehicles is one of the principal contributors to greenhouse gas emissions from the transport sector. This paper therefore focuses on the shift to more sustainable, low-carbon forms of transportation such as walking. Neighbourhood built-environment attributes may influence walkability. For this study, the authors used a modified version of the "Neighbourhood Environment Walkability Scale" to compare respondents' perceptions regarding attributes of two neighbourhoods of Putrajaya. The 21st century requires planners to use the Digital Earth concept, moving from global to regional to national to very local issues, using integrated, advanced technologies such as earth observation, GIS and virtual reality. For this research, two neighbourhoods of different densities (high and low density) were selected. A total sample of 381 participants (195 and 186) aged 7 to 65 years was selected. For subjective measures we used a 54-question questionnaire survey, whereas for the objective measures we used ArcGIS Desktop version 9.3 software. Our results show that respondents residing in the high-walkability neighbourhood, precinct 9 in Putrajaya, rated factors such as residential density, land-use mix, proximity to destinations and street connectivity consistently higher than did respondents in the low-walkability neighbourhood, precinct 8 in Putrajaya.
Hawkins, Robert C; Badrick, Tony
2015-08-01
In this study we aimed to compare the reporting unit size used by Australian laboratories for routine chemistry and haematology tests to the unit size used by learned authorities and in standard laboratory textbooks, and to the justified unit size based on measurement uncertainty (MU) estimates from quality assurance program data. MU was determined from Royal College of Pathologists of Australasia (RCPA) - Australasian Association of Clinical Biochemists (AACB) and RCPA Haematology Quality Assurance Program survey reports. The reporting unit size implicitly suggested in authoritative textbooks, the RCPA Manual, and the General Serum Chemistry program itself was noted. We also used published data on Australian laboratory practices. The best performing laboratories could justify their chemistry unit size for 55% of analytes, while comparable figures for the 50% and 90% laboratories were 14% and 8%, respectively. Reporting unit size was justifiable for all laboratories for red cell count, >50% for haemoglobin, but only the top 10% for haematocrit. Few, if any, could justify their mean cell volume (MCV) and mean cell haemoglobin concentration (MCHC) reporting unit sizes. The reporting unit size used by many laboratories is not justified by present analytical performance. Using MU estimates to determine the reporting interval for quantitative laboratory results ensures reporting practices match local analytical performance and recognises the inherent error of the measurement process.
Measurement uncertainty in anti-doping quantitative analysis for prohibited threshold substances.
Barroso, Osquel; Miller, John; Squirrell, Alan; Westwood, Steven
2012-07-01
The standards of laboratory performance of the World Anti-Doping Agency (WADA)-accredited laboratories are defined in the WADA International Standard for Laboratories and its associated Technical Documents. These sets of rules aim to harmonize the production of valid laboratory test results and evidentiary data as well as the reporting of laboratory analytical findings. The determination of anti-doping rule violations in sport made on the basis of analytical quantitative confirmatory analyses for the presence of prohibited threshold substances, in particular, requires the application of specific compliance decision rules, which are established in the WADA Technical Document on Decision Limits. In this article, the use of measurement uncertainty information in the establishment of compliance Decision Limits and in evaluating the performance of a laboratory's quantitative analytical procedures over time and in relation to other laboratories through WADA's External Quality Assessment Scheme program is reviewed and discussed. Furthermore, a perspective is provided on the emerging challenges associated with the harmonization of the quantitative measurement of large-molecular weight biomolecules.
On challenges in the uncertainty evaluation for time-dependent measurements
Eichstädt, S.; Wilkens, V.; Dienstfrey, A.; Hale, P.; Hughes, B.; Jarvis, C.
2016-08-01
The measurement of quantities with time-dependent values is a common task in many areas of metrology. Although well established techniques are available for the analysis of such measurements, serious scientific challenges remain to be solved to enable their routine use in metrology. In this paper we focus on the challenge of estimating a time-dependent measurand when the relationship between the value of the measurand and the indication is modeled by a convolution. Mathematically, deconvolution is an ill-posed inverse problem, requiring regularization to stabilize the inversion in the presence of noise. We present and discuss deconvolution in three practical applications: thrust-balance, ultra-fast sampling oscilloscopes and hydrophones. Each case study takes a different approach to modeling the convolution process and regularizing its inversion. Critically, all three examples lack the assignment of an uncertainty to the influence of the regularization on the estimation accuracy. This is a grand challenge for dynamic metrology, for which to date no generic solution exists. The case studies presented here cover a wide range of time scales and prior knowledge about the measurand, and they can thus serve as starting points for future developments in metrology. The aim of this work is to present the case studies and demonstrate the challenges they pose for metrology.
Institute of Scientific and Technical Information of China (English)
LI Tao; ZHANG JiFeng
2009-01-01
In this paper, sampled-data-based average-consensus control is considered for networks consisting of continuous-time first-order integrator agents in a noisy distributed communication environment. The impact of the sampling size and the number of network nodes on the system performance is analyzed. The control input of each agent can only use information measured at the sampling instants from its neighborhood, rather than the complete continuous process, and the measurements of its neighbors' states are corrupted by random noises. By probability limit theory and the properties of the graph Laplacian matrix, it is shown that for a connected network, the static mean square error between the individual states and the average of the initial states of all agents can be made arbitrarily small, provided the sampling size is sufficiently small. Furthermore, by properly choosing the consensus gains, almost sure consensus can be achieved. It is worth pointing out that an uncertainty principle of Gaussian networks is obtained, which implies that in the case of white Gaussian noises, no matter what the sampling size is, the product of the steady-state and transient performance indices is always equal to or larger than a constant depending on the noise intensity, network topology and the number of network nodes.
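The noisy consensus setting can be sketched with a small simulation; the ring topology, initial states, noise level and gain sequence below are illustrative choices, not the paper's exact setting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Graph Laplacian of a four-agent ring network.
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])

x = np.array([4.0, -2.0, 1.0, 5.0])   # initial agent states
target = x.mean()                      # the average the agents should reach
sigma = 0.1                            # std of the additive measurement noise

# Decreasing consensus gains (sum diverges, sum of squares converges) --
# the gain condition under which almost-sure consensus is obtained.
for k in range(2000):
    a_k = 0.5 / (k + 1)
    noisy_measurement = L @ x + sigma * rng.standard_normal(4)
    x = x - a_k * noisy_measurement
```

With a fixed (non-decreasing) gain, the noise would instead leave a steady-state disagreement whose size grows with the sampling step, which is the trade-off the paper's uncertainty principle quantifies.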
Uncertainties of retrospective radon concentration measurements by multilayer surface trap detector
Energy Technology Data Exchange (ETDEWEB)
Bastrikov, V.; Kruzhalov, A. [Ural State Technical Univ., Yekaterinburg (Russian Federation); Zhukovsky, M. [Institute of Industrial Ecology UB RAS, Yekaterinburg (Russian Federation)
2006-07-01
A detector for retrospective radon exposure measurements is developed. The detector consists of a multilayer package of solid-state nuclear track detectors of LR-115 type. The nitrocellulose films work both as {alpha}-particle detectors and as absorbers decreasing the energy of the {alpha}-particles. The uncertainties of implanted {sup 210}Pb measurements by two- and three-layer detectors are assessed as functions of the surface {sup 210}Po activity and the gross background activity of the glass. A generalized compartment behavior model of radon decay products in the room atmosphere was developed and verified. It is shown that the parameters most influencing the value of the conversion coefficient from {sup 210}Po surface activity to average radon concentration are the aerosol particle concentration, the deposition velocity of unattached {sup 218}Po and the air exchange rate. It is demonstrated that with the use of additional information on the surface-to-volume room ratio, air exchange rate and aerosol particle concentration, the systematic bias of the conversion coefficient between the surface activity of {sup 210}Po and the average radon concentration can be decreased to 30%. (N.C.)
Ali, E S M; Spencer, B; McEwen, M R; Rogers, D W O
2015-02-21
In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy-i.e. 100 keV (orthovoltage) to 25 MeV-using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ∼0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative 'envelope of uncertainty' of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol 44 R1-22).
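The "optimum energy-independent cross section scaling factor" used in both datasets is, in least-squares terms, the scalar s minimizing the sum of squared discrepancies between measured and calculated values. A hedged sketch with invented placeholder data (these are not the paper's measurements):

```python
import numpy as np

# Invented placeholder data: measured vs. Monte Carlo-calculated transmission
# values at a few energies (NOT the paper's actual datasets).
measured   = np.array([0.984, 1.007, 0.993, 1.002, 0.998])
calculated = np.array([0.990, 1.004, 0.996, 1.000, 1.001])

# Closed-form least-squares solution for the energy-independent scale factor:
#   s = argmin_s sum_i (measured_i - s * calculated_i)^2
s = np.dot(measured, calculated) / np.dot(calculated, calculated)

# The deviation of s from unity is read as a cross-section offset (percent).
offset_percent = 100.0 * (s - 1.0)
```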
Assessment of the measurement uncertainty of special-purpose glass measuring instruments
Institute of Scientific and Technical Information of China (English)
谭雯
2016-01-01
The paper assesses the measurement uncertainty of special-purpose glass measuring instruments, analyses the sources of uncertainty in the measurement process, establishes a measurement model, quantifies the uncertainty components, and derives the combined and expanded uncertainties.
Institute of Scientific and Technical Information of China (English)
Luo Xiaobo; Fan Hongqi; Song Zhiyong; Fu Qiang
2013-01-01
For maritime radiation source target tracking in a particular electronic countermeasures (ECM) environment, there exist two main problems which can deteriorate the tracking performance of traditional approaches. The first problem is the poor observability of the radiation source. The second is the measurement uncertainty, which includes the uncertainty of the target appearing/disappearing and the detection uncertainty (false and missed detections). A novel approach is proposed in this paper for tracking a maritime radiation source in the presence of measurement uncertainty. To address the poor observability of the maritime radiation source target, the observer altitude information is incorporated into the bearings-only tracking (BOT) method, using the radiation source motion restriction, to obtain a unique target localization. The two uncertainties in the ECM environment are then modeled by random finite set (RFS) theory, and the Bernoulli filtering method with observer altitude is adopted to solve the tracking problem of a maritime radiation source in this context. Simulation experiments verify the validity of the proposed approach for tracking a maritime radiation source, and also demonstrate the superiority of the method compared with the traditional integrated probabilistic data association (IPDA) method. The tracking performance under different conditions, particularly those involving different durations of radiation source opening and switching-off, indicates that the method is robust and effective.
McGillen, M.; Fleming, E. L.; Jackman, C. H.; Burkholder, J. B.
2013-12-01
CFCl3 (CFC-11) is both a major ozone-depleting substance and a potent greenhouse gas that is removed primarily via stratospheric UV photolysis. Uncertainty in the temperature dependence of its UV absorption spectrum is a significant contributing factor to the overall uncertainty in its global lifetime and, thus, in model calculations of stratospheric ozone recovery and climate change. In this work, the CFC-11 UV absorption spectrum was measured over a range of wavelengths (184.95-230 nm) and temperatures (216-296 K). We report a spectrum temperature dependence that is weaker than currently recommended for use in atmospheric models. The impact on its atmospheric lifetime was quantified using the NASA Goddard Space Flight Center 2-D coupled chemistry-radiation-dynamics model and the spectrum parameterization developed in this work. The modeled global annually averaged lifetime was 58.1 ± 0.7 years (2σ uncertainty due solely to the spectrum uncertainty). The lifetime is slightly reduced, and its uncertainty significantly reduced, from those obtained using current UV spectrum recommendations. [Figure captions: CFC-11 2-D model results. Left: global annually averaged loss rate coefficient (local lifetime) with photolysis and reaction contributions. Middle: molecular loss rate with slow/fast uncertainty limits calculated from the 2σ uncertainty in the CFC-11 UV absorption spectrum from this work. Right: CFC-11 concentration profile. Also: CFC-11 loss-process contributions to the overall local-lifetime uncertainty (2σ) calculated with the 2-D model, using inputs from this work and from Sander et al. [2011] with SPARC [2013] updates.]
Gladtke, Dieter; Bakker, Frits; Biaudet, Hugues; Brennfleck, Alexandra; Coleman, Peter; Creutznacher, Harald; Van Egmond, Ben F; Hafkenscheid, Theo; Hahne, Frank; Houtzager, Marc M; Leoz-Garziandia, Eva; Menichini, Edoardo; Olschewski, Anja; Remesch, Thomas
2012-08-01
Different collector types, sample workup procedures and analysis methods for measuring the deposition of polycyclic aromatic hydrocarbons (PAH) were tested and compared. While sample workup and analysis methods did not influence the results of PAH deposition measurements, using different collector types changed the measured deposition rates of PAH significantly. The results obtained with a funnel-bottle collector showed the highest deposition rates and a low measurement uncertainty. The deposition rates obtained with the wet-only collectors were the lowest at industrial sites and under dry weather conditions. For the open-jar collectors the measurement uncertainty was high. Only at an industrial site with extremely high PAH deposition rates were the results of open-jar collectors comparable to those obtained with funnel-bottle collectors. Thus, if bulk deposition of PAH is to be measured, funnel-bottle combinations proved to be the collectors of choice. These collectors were the only ones that always fulfilled the requirements of European legislation.
International Nuclear Information System (INIS)
The incidence of potassium (K) deficiency is increasing in crops, pastures, and forestry in south-western Australia. Although soil K can be measured using soil sampling and analysis, γ-ray spectrometry offers a potentially cheaper and spatially more precise alternative. This could be particularly useful in precision agriculture, where inputs are applied according to need rather than by general prescription. In a study of topsoils near Jerramungup, Western Australia, strong relationships (r2 = 0.9) were found between on-ground counts of γ-rays derived from 40K (γ-K) and both total K and plant-available K. The success of γ-ray spectrometry in predicting available K relied on a strong relationship (r2 = 0.9) between total K and available K, which may not hold in all areas. Although the relationship between γ-K and available K held over the range of 36-1012 mg/kg, crop response to K fertilisers is only expected at low available K contents. Strong relationships (r2 = 0.9) were also found between γ-K and a range of other soil attributes, including clay, silt, and organic carbon content. These relationships depended on the locally strong relationship between total K and these soil attributes. Since such relationships do not hold everywhere, the utility of γ-ray spectrometry will likewise be limited. Site-specific calibrations are required if γ-ray spectrometry is to be used for soil property mapping. Copyright (1999) CSIRO Publishing
Hydrological model parameter dimensionality is a weak measure of prediction uncertainty
Directory of Open Access Journals (Sweden)
S. Pande
2015-04-01
This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed step by step. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six-Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (the number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.
Willmott, Jon R.; Lowe, David; Broughton, Mick; White, Ben S.; Machin, Graham
2016-09-01
A primary temperature scale requires realising a unit in terms of its definition. For high temperature radiation thermometry in terms of the International Temperature Scale of 1990 this means extrapolating from the signal measured at the freezing temperature of gold, silver or copper using Planck’s radiation law. The difficulty in doing this means that primary scales above 1000 °C require specialist equipment and careful characterisation in order to achieve the extrapolation with sufficient accuracy. As such, maintenance of the scale at high temperatures is usually only practicable for National Metrology Institutes, and calibration laboratories have to rely on a scale calibrated against transfer standards. At lower temperatures it is practicable for an industrial calibration laboratory to have its own primary temperature scale, which reduces the number of steps between the primary scale and end user. Proposed changes to the SI that will introduce internationally accepted high temperature reference standards might make it practicable to have a primary high temperature scale in a calibration laboratory. In this study such a scale was established by calibrating radiation thermometers directly to high temperature reference standards. The possible reduction in uncertainty to an end user as a result of the reduced calibration chain was evaluated.
Szlązak, Nikodem; Korzec, Marek
2016-06-01
Methane adversely affects safety in underground mines, as it is emitted into the air during mining works. Appropriate identification of the methane hazard is essential to determining methane hazard prevention methods, ventilation systems and methane drainage systems. The methane hazard is identified while roadways are driven and boreholes are drilled. Coalbed methane content is one of the parameters used to assess this threat. This is a requirement according to the Decree of the Minister of Economy dated 28 June 2002 on work safety and hygiene, operation and special firefighting protection in underground mines. For this purpose a new method for determining coalbed methane content in underground coal mines has been developed. The method consists of two stages: collecting samples in a mine and testing the samples in the laboratory. The laboratory stage of determining the methane content of a coal sample is essential. This article presents the estimation of the measurement uncertainty of determining the methane content of a coal sample according to this methodology.
Directory of Open Access Journals (Sweden)
José E. O. Reges
2016-07-01
This work is a contribution to the development of flow sensors in the oil and gas industry. It presents a methodology to measure the flow rates into multiple-zone water-injection wells from fluid temperature profiles and to estimate the measurement uncertainty. First, a method to iteratively calculate the zonal flow rates using the Ramey (exponential) model was described. Next, this model was linearized to perform an uncertainty analysis. Then, a computer program to calculate the injected flow rates from experimental temperature profiles was developed. In the experimental part, a fluid temperature profile from a dual-zone water-injection well located in the Northeast Brazilian region was collected. Thus, calculated and measured flow rates were compared. The results proved that the linearization error is negligible for practical purposes and that the relative uncertainty increases as the flow rate decreases. The calculated values from both the Ramey and linear models were very close to the measured flow rates, presenting differences of only 4.58 m³/d and 2.38 m³/d, respectively. Finally, the measurement uncertainties from the Ramey and linear models were equal to 1.22% and 1.40% for injection zone 1, and 10.47% and 9.88% for injection zone 2. Therefore, the methodology was successfully validated and all objectives of this work were achieved.
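The exponential temperature relaxation and its linearization can be sketched as follows; the formation temperature, injection temperature and relaxation distance are synthetic values, and the well-specific constant linking relaxation distance to flow rate in Ramey's model is left symbolic:

```python
import numpy as np

# Synthetic Ramey-type profile: fluid injected at 30 degC relaxing toward a
# uniform formation temperature of 60 degC with relaxation distance A (m).
# All numbers here are invented for illustration.
T_formation, T_injection, A_true = 60.0, 30.0, 500.0
z = np.linspace(0.0, 1000.0, 21)                      # depths (m)
T = T_formation + (T_injection - T_formation) * np.exp(-z / A_true)

# Linearization used for the uncertainty analysis:
#   log(T_formation - T) = log(T_formation - T_injection) - z / A
slope, intercept = np.polyfit(z, np.log(T_formation - T), 1)
A_est = -1.0 / slope

# In Ramey's model the relaxation distance grows with the injection rate, so
# A_est maps to a zonal flow rate through a well-specific constant (omitted).
```

Because the fit is performed on log-transformed temperature differences, the sensitivity of A_est to temperature noise grows as the flow rate (and hence the temperature contrast) shrinks, consistent with the abstract's observation that relative uncertainty increases at low flow rates.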
DEFF Research Database (Denmark)
Wagner, Claas; Esbensen, Kim
2011-01-01
An augmented measurement uncertainty approach for CO2 emissions from coal-fired power plants, with a focus on the often forgotten contributions from sampling errors occurring over the entire fuel-to-emission pathway, is presented. Current methods for CO2 emission determination are evaluated in detail...... of these three materials were also given full attention. A systematic error (bias) is present in the current sampling approach, which increases the present uncertainty estimate unnecessarily. For both primary sampling and analytical sample extraction steps, random variations, which hitherto only have been...... considered to a minor extent, have now also been fully quantified and included in the overall uncertainty. Elimination of all identified sampling errors leads to modified CO2 determination procedures, which indicate that the actual CO2 emission is approximately 20,000 t higher than the present estimate. Based...
Energy Technology Data Exchange (ETDEWEB)
Vinai, Paolo [Paul Scherrer Institute, Villigen (Switzerland); Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Chalmers University of Technology, Goeteborg (Sweden); Macian-Juan, Rafael [Technische Universitaet Muenchen, Garching (Germany); Chawla, Rakesh [Paul Scherrer Institute, Villigen (Switzerland); Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)
2008-07-01
The paper describes the propagation of void fraction uncertainty, as quantified by employing a novel methodology developed at PSI, in the RETRAN-3D simulation of the Peach Bottom turbine trip test. Since the transient considered is characterized by strong coupling between thermal-hydraulics and neutronics, the accuracy of the void fraction model has a very important influence on the prediction of the power history and, in particular, of the maximum power reached. It has been shown that the objective measures used for the void fraction uncertainty, based on direct comparison between experimental and predicted values extracted from a database of appropriate separate-effect tests, provide power uncertainty bands that are narrower and more realistic than those based, for example, on expert opinion. The applicability of such an approach to NPP transient best-estimate analysis has thus been demonstrated. (authors)
Hwang, Rong-Jen; Beltran, Jada; Rogers, Craig; Barlow, Jeremy; Razatos, Gerasimos
2016-09-01
Aqueous ethanol wet-bath simulator solutions are used to perform calibration adjustments, calibration checks, proficiency testing, and inspection of breath alcohol instruments. The Toxicology Bureau of the New Mexico Department of Health has conducted a study to estimate a measurement of uncertainty for the preparation and testing of these wet-bath simulator solutions. The measurand is identified as the mass concentration of ethanol (g/100 mL) determined through dual capillary column headspace gas chromatography with flame ionization detector analysis. Three groups were used in the estimation of the aqueous ethanol wet-bath simulator solutions uncertainty: GC calibration adjustment, GC analytical, and certified reference material. The standard uncertainties for these uncertainty sources were combined using the method of root-sum-squares to give uc = 0.8598%. The combined standard uncertainty was expanded to U = 1.7% to reflect a confidence level of 95% using a coverage factor of 2. This estimation applies to all aqueous ethanol wet-bath simulator solution concentrations produced by this laboratory.
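The root-sum-squares combination and k = 2 expansion described above can be sketched directly; the three component values below are illustrative stand-ins chosen to reproduce an expanded uncertainty near the reported 1.7%, not the bureau's actual budget:

```python
import math

# Hypothetical component standard uncertainties (percent, k = 1). These are
# illustrative stand-ins, not the laboratory's actual budget values.
u_calibration = 0.55   # GC calibration adjustment (assumed)
u_analytical  = 0.55   # GC analytical repeatability (assumed)
u_crm         = 0.33   # certified reference material (assumed)

# Root-sum-squares combination, then expansion with coverage factor k = 2
# for an approximately 95% confidence level, as described in the abstract.
u_c = math.sqrt(u_calibration**2 + u_analytical**2 + u_crm**2)
U = 2.0 * u_c
```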
International Nuclear Information System (INIS)
Improving nuclear system safety and fuel depletion calculations translates directly into a need for better knowledge of nuclear data. The PROFIL and PROFIL-2 experiments give integral information on capture and (n,2n) cross sections and cumulative fission yields for several isotopes (95Mo, 97Mo, 101Pd, 105Pd, 133Cs, 143Nd, 144Nd, 145Nd, 147Sm, 149Sm, 151Eu, 233U, 234U, 235U, 238Pu, 239Pu, 240Pu, 241Pu, 242Pu, 244Cm ...). Interpretations have been performed many times in the past, but without experimental uncertainty estimation. The cross section library JEFF-3.1.1, the covariance database COMAC and the code system ERANOS-2.2 are used for this updated interpretation. This study focuses on the uncertainty estimation for experimental values sensitive to capture cross sections. Three steps are required: the fluence scaling, the uncertainty propagation on the fluence and finally the uncertainty estimation on the ratio variations of interest. This work is done with CONRAD using the Bayesian adjustment and marginalization method. Mean C/E results and conclusions are identical to those of the previous interpretation. A fluence uncertainty of 1.4% is found for the two experimental pins of PROFIL-2 and 1.9% for PROFIL. Propagating this new information on the fluence to the ratio variations of interest gives experimental uncertainties between 1% and 2.5% for the isotopes present in the experimental pins. One of the main results concerns the 238Pu, 239Pu, 240Pu, 241Pu and 242Pu capture cross sections: the C/E values are respectively 1.03, 0.98, 0.97, 1.08 and 1.14, with an uncertainty lower than 2.5%. All the results will provide feedback on variance-covariance matrices for further work. (author)
Dead time effect on the Brewer measurements: correction and estimated uncertainties
Fountoulakis, Ilias; Redondas, Alberto; Bais, Alkiviadis F.; José Rodriguez-Franco, Juan; Fragkos, Konstantinos; Cede, Alexander
2016-04-01
Brewer spectrophotometers are widely used instruments which perform spectral measurements of the direct, the scattered and the global solar UV irradiance. By processing these measurements a variety of secondary products can be derived, such as the total columns of ozone (TOC), sulfur dioxide and nitrogen dioxide, and aerosol optical properties. Estimating and limiting the uncertainties of the final products is of critical importance. High-quality data have many applications and can provide accurate estimations of trends. The dead time is specific to each instrument, and improper correction of the raw data for its effect may lead to significant errors in the final products. The dead time value may change with time and, with the currently used methodology, it cannot always be determined accurately. For specific cases, such as for low ozone slant columns and high intensities of the direct solar irradiance, the error in the retrieved TOC due to a 10 ns change in the dead time from its value in use is found to be up to 5 %. The error in the calculation of UV irradiance can be as high as 12 % near the maximum operational limit of light intensities. While the existing documentation indicates that dead time effects are important when the error in the used value is greater than 2 ns, we found that for single-monochromator Brewers a 2 ns error in the dead time may lead to errors above the limit of 1 % in the calculation of TOC; thus the tolerance limit should be lowered. A new routine for the determination of the dead time from direct solar irradiance measurements has been created and tested, and a validation of the operational algorithm has been performed. Additionally, new methods for the estimation and the validation of the dead time have been developed and are analytically described. Therefore, the present study, in addition to highlighting the importance of the dead time for the processing of Brewer data sets, also provides useful information for their
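The correction itself is a fixed-point problem. A minimal Python sketch, assuming the paralyzable counter model commonly applied to photon counting (observed = true · exp(−true · τ)), inverted by iteration; the 30 ns dead time and the count rate are illustrative values, not numbers from the study:

```python
import math

def dead_time_correct(observed_rate, tau, iterations=12):
    """Recover the true count rate (counts/s) from the observed rate for a
    paralyzable counter, observed = true * exp(-true * tau), by fixed-point
    iteration true <- observed * exp(true * tau)."""
    true_rate = observed_rate
    for _ in range(iterations):
        true_rate = observed_rate * math.exp(true_rate * tau)
    return true_rate

# Illustrative numbers: 1e6 counts/s observed, 30 ns dead time.
obs, tau = 1.0e6, 30e-9
true = dead_time_correct(obs, tau)  # a few percent above the observed rate

# Sensitivity of the corrected rate to a 2 ns error in the assumed dead time:
biased = dead_time_correct(obs, tau + 2e-9)
rel_err = (biased - true) / true
```

The iteration converges quickly because the contraction factor is roughly observed_rate · τ, which is small at normal operating intensities.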
Quantifying Urban Natural Gas Leaks from Street-level Methane Mapping: Measurements and Uncertainty
von Fischer, J. C.; Ham, J. M.; Griebenow, C.; Schumacher, R. S.; Salo, J.
2013-12-01
Leaks from the natural gas pipeline system are a significant source of anthropogenic methane in urban settings. Detecting and repairing these leaks will reduce the energy and carbon footprints of our cities. Gas leaks can be detected from spikes in street-level methane concentrations measured by analyzers deployed on vehicles. While a spike in methane concentration indicates a leak, an algorithm (e.g., inverse model) must be used to estimate the size of the leak (i.e., flux) from concentration data and supporting meteorological information. Unfortunately, this drive-by approach to leak quantification is confounded by the complexity of urban roughness, changing weather conditions, and other incidental factors (e.g., traffic, vehicle speed, etc.). Furthermore, the vehicle might only pass through the plume one to three times during routine mapping. The objective of this study was to conduct controlled release experiments to better quantify the relationship between mobile methane concentration measurements and the size and location of the emission source (e.g., pipeline leakage) in an urban environment. A portable system was developed that could release methane at known rates between 10 and 40 LPM while maintaining concentrations below the lower explosive limit. A mapping vehicle was configured with fast response methane analyzers, GPS, and meteorological instruments. Portable air-sampling tripods were fabricated that could be deployed at defined distances downwind from the release point and automatically-triggered to collect grab samples. The experimental protocol was as follows: (1) identify an appropriate release point within a city, (2) release methane at a known rate, (3) measure downwind street-level concentrations with the vehicle by making multiple passes through the plume, and (4) collect supporting concentration and meteorological data with the static tripod samplers deployed in the plume. Controlled release studies were performed at multiple locations and
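As a concrete (and deliberately simplified) illustration of the inverse step, a ground-level, centerline Gaussian plume with full ground reflection gives C = Q / (π u σy σz), which can be inverted for the source strength. This is a textbook sketch, not the study's algorithm, and all numbers below are hypothetical:

```python
import math

def leak_rate_from_enhancement(c_excess, wind_speed, sigma_y, sigma_z):
    """Invert the ground-level centerline Gaussian plume relation
    C = Q / (pi * u * sigma_y * sigma_z) for the source strength Q.
    c_excess: methane enhancement above background (g m^-3)
    wind_speed: mean wind (m s^-1); sigma_y, sigma_z: plume spreads (m).
    Returns Q in g s^-1."""
    return c_excess * math.pi * wind_speed * sigma_y * sigma_z

# Hypothetical drive-by reading: ~2 ppm CH4 enhancement (~1.3e-3 g m^-3 at
# 25 C), 2 m/s wind, sigma_y = 3 m, sigma_z = 1.5 m a short distance downwind.
q_g_per_s = leak_rate_from_enhancement(1.3e-3, 2.0, 3.0, 1.5)
q_lpm = q_g_per_s / 16.04 * 24.45 * 60.0  # g/s -> mol/s -> L/min at 25 C
```

The inversion is linear in the measured enhancement, so uncertainty in σy and σz (urban roughness, stability) propagates directly into the flux estimate, which is exactly what the controlled releases are designed to constrain.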
Dupont, R.; Pierce, B.; Worden, J.; Hair, J.; Fenn, M.; Hamer, P.; Natarajan, M.; Schaack, T.; Lenzen, A.; Apel, E.; Dibb, J.; Diskin, G.; Huey, G.; Weinheimer, A.; Kondo, Y.; Knapp, D.
2012-01-01
We use ozone and carbon monoxide measurements from the Tropospheric Emission Spectrometer (TES), model estimates of ozone, CO, and ozone precursors from the Real-time Air Quality Modeling System (RAQMS), and data from the NASA DC8 aircraft to characterize the source and dynamical evolution of ozone and CO in Asian wildfire plumes during the spring ARCTAS campaign 2008. On 19 April, the NASA DC8 O3 and aerosol Differential Absorption Lidar (DIAL) observed two biomass burning plumes originating from north-western Asia (Kazakhstan) and south-eastern Asia (Thailand) that advected eastward over the Pacific, reaching North America in 10 to 12 days. Using both TES observations and RAQMS chemical analyses, we track the wildfire plumes from their source to the ARCTAS DC8 platform. In addition to photochemical production due to ozone precursors, we find that exchange between the stratosphere and the troposphere is a major factor influencing O3 concentrations for both plumes. For example, the region around 55 degrees North traversed by the Kazakhstan and Siberian plumes is one of significant springtime stratospheric/tropospheric exchange, and stratospheric air influences the Thailand plume after it is lofted to high altitudes via the Himalayas. Using comparisons of the model to the aircraft and satellite measurements, we estimate that the Kazakhstan plume is responsible for increases of O3 and CO mixing ratios by approximately 6.4 ppbv and 38 ppbv in the lower troposphere (height of 2 to 6 km), and the Thailand plume for increases of approximately 11 ppbv and 71 ppbv in the upper troposphere (height of 8 to 12 km), respectively. However, there are significant sources of uncertainty in these estimates that point to the need for future improvements in both model and satellite observations. For example, it is challenging to characterize the fraction of air parcels from the stratosphere versus those from the fire because of the low sensitivity of the TES CO
Cowan, Nicholas; Levy, Peter; Skiba, Ute
2016-04-01
The addition of reactive nitrogen to agricultural soils in the form of artificial fertilisers or animal waste is the largest global source of anthropogenic N2O emissions. Emission factors are commonly used to evaluate N2O emissions released after the application of nitrogen fertilisers on a global scale, based on records of fertiliser use. Currently these emission factors are estimated primarily from experiments in which flux chamber methodology is used to estimate annual cumulative fluxes of N2O after nitrogen fertiliser applications on agricultural soils. The use of the eddy covariance method to measure N2O and estimate emission factors is also becoming more common in the flux community as modern rapid gas analyser instruments advance. The aim of the presentation is to highlight the weaknesses and potential systematic biases in current flux measurement methodology. This is important for GHG accounting and for accurate model calibration and verification. The growing interest in top-down / bottom-up comparisons of tall tower and conventional N2O flux measurements is also an area of research in which the uncertainties in flux measurements need to be properly quantified. The large and unpredictable spatial and temporal variability of N2O fluxes from agricultural soils leads to a significant source of uncertainty in emission factor estimates. N2O flux measurements typically show poor relationships with explanatory co-variates. The true uncertainties in flux measurements at the plot scale are often difficult to propagate to the field scale and the annual time scale. This results in very uncertain cumulative flux (emission factor) estimates. Cumulative fluxes estimated using flux chamber and eddy covariance methods can also differ significantly, which complicates the matter further. In this presentation, we examine some effects that spatial and temporal variability of N2O fluxes can have on the estimation of emission factors and describe how
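The emission factor calculation under discussion can be sketched as cumulative flux integration over a campaign. The sampling days and flux values below are illustrative assumptions, not data from the presentation:

```python
def cumulative_flux(days, fluxes):
    """Trapezoidal integration of daily-mean fluxes (g N2O-N ha^-1 d^-1)
    over sampling days -> cumulative emission (g N2O-N ha^-1).
    Sparse chamber sampling makes this interpolation itself a major
    source of uncertainty."""
    total = 0.0
    for i in range(1, len(days)):
        total += 0.5 * (fluxes[i] + fluxes[i - 1]) * (days[i] - days[i - 1])
    return total

def emission_factor_pct(cum_fertilised, cum_control, n_applied):
    """IPCC-style emission factor: N2O-N emitted in excess of the control,
    as a percentage of the nitrogen applied (same mass units throughout)."""
    return 100.0 * (cum_fertilised - cum_control) / n_applied

# Illustrative campaign: sparse chamber sampling after 100 kg N ha^-1 applied.
days = [0, 3, 7, 14, 30, 60]
fert = [5.0, 40.0, 25.0, 10.0, 3.0, 1.0]  # g N2O-N ha^-1 d^-1
ctrl = [1.0, 1.5, 1.2, 1.0, 0.8, 0.7]
ef = emission_factor_pct(cumulative_flux(days, fert),
                         cumulative_flux(days, ctrl),
                         100_000.0)  # 100 kg N ha^-1 expressed in g
```

Because the trapezoid rule linearly interpolates between visits, shifting a single sampling day across an emission pulse can change the cumulative flux, and hence the emission factor, substantially.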
Salmi, Tiina; Marchevsky, Maxim; Bajas, Hugo; Felice, Helene; Stenvall, Antti
2015-01-01
The quench protection of superconducting high-field accelerator magnets is presently based on protection heaters, which are activated upon quench detection to accelerate the quench propagation within the winding. Estimations of the heater delay to initiate a normal zone in the coil are essential for the protection design. During the development of Nb3Sn magnets for the LHC luminosity upgrade, protection heater delays have been measured in several experiments, and a new computational tool CoHDA (Code for Heater Delay Analysis) has been developed for heater design. Several computational quench analyses suggest that the efficiency of the present heater technology is on the borderline of protecting the magnets. Quantifying the inevitable uncertainties related to the measured and simulated delays is therefore of pivotal importance. In this paper, we analyze the uncertainties in the heater delay measurements and simulations using data from five impregnated high-field Nb3Sn magnets with different heater geometries. ...
Gruber, Matthew; Hartogensis, Oscar
2014-01-01
Displaced-beam scintillometer measurements of the turbulence inner-scale length $l_o$ and refractive index structure function $C_n^2$ resolve area-average turbulent fluxes of heat and momentum through the Monin-Obukhov similarity equations. Sensitivity studies have been produced for the use of displaced-beam scintillometers over flat terrain. Many real field sites, however, feature variable topography. We develop here an analysis of the sensitivity of displaced-beam scintillometer derived sensible heat fluxes to uncertainties in spatially distributed topographic measurements. Sensitivity is shown to be concentrated in areas near the center of the beam and where the underlying topography is closest to the beam height. Uncertainty may be decreased by taking precise topographic measurements in these areas.
Directory of Open Access Journals (Sweden)
M. A. Gruber
2014-01-01
Scintillometer measurements allow for estimations of the refractive index structure parameter $C_n^2$ over large areas in the atmospheric surface layer. Turbulent fluxes of heat and momentum are inferred through coupled sets of equations derived from the Monin–Obukhov similarity hypothesis. One-dimensional sensitivity functions have been produced that relate the sensitivity of heat fluxes to uncertainties in single values of beam height over homogeneous and flat terrain. However, real field sites include variable topography and heterogeneous surfaces. We develop here the first analysis of the sensitivity of scintillometer derived sensible heat fluxes to uncertainties in spatially distributed topographic measurements. For large-aperture scintillometers and independent friction velocity $u_*$ measurements, sensitivity is shown to be concentrated in areas near the center of the beam path and where the underlying topography is closest to the beam height. Uncertainty may be greatly reduced by focusing precise topographic measurements in these areas. A new two-dimensional variable terrain sensitivity function is developed for quantitative error analysis. This function is compared with the previous one-dimensional sensitivity function for the same measurement strategy over flat and homogeneous terrain. Additionally, a new method of solution to the set of coupled equations is produced that eliminates computational error. The results are produced using a new methodology for error analysis involving distributed parameters that may be applied in other disciplines.
Espinoza, Penelope P.; Quezada, Stephanie A.; Rincones, Rodolfo; Strobach, E. Natalia; Gutierrez, Maria Armida Estrada
2012-01-01
The present work investigates the validation of a newly developed instrument, the attributional bias instrument, based on achievement attribution theories that distinguish between effort and ability explanations of behavior. The instrument further incorporates the distinction between explanations for success versus failure in academic performance.…
M. Boumans
2013-01-01
This article proposes a more objective Type B evaluation. This can be achieved when Type B uncertainty evaluations are model-based. This implies, however, grey-box modelling and validation instead of white-box modelling and validation which are appropriate for Type A evaluation.
Measures of Model Uncertainty in the Assessment of Primary Stresses in Ship Structures
DEFF Research Database (Denmark)
Östergaard, Carsten; Dogliani, Mario; Guedes Soares, Carlos;
1996-01-01
The paper considers various models and methods commonly used for linear elastic stress analysis and assesses the uncertainty involved in their application to the analysis of the distribution of primary stresses in the hull of a containership example, through statistical evaluations of the results...
Pelt, van S.C.; Avelar, D.; Swart, R.J.
2010-01-01
This policy brief is directed towards funders and managers of climate change impacts and adaptation research programmes, as well as policy makers in this area. It notes various challenges in addressing uncertainties in climate change research and policy and provides suggestions on how to address them.
Damage assessment of composite plate structures with material and measurement uncertainty
Chandrashekhar, M.; Ganguli, Ranjan
2016-06-01
Composite materials are very useful in structural engineering, particularly in weight-sensitive applications. Two different test models of the same structure made from composite materials can display very different dynamic behavior due to large uncertainties associated with composite material properties. Also, composite structures can suffer from pre-existing imperfections like delaminations, voids or cracks introduced during fabrication. In this paper, we show that modeling and material uncertainties in composite structures can cause considerable problems in damage assessment. A recently developed C0 shear deformable locking-free refined composite plate element is employed in the numerical simulations to alleviate modeling uncertainty. A qualitative estimate of the impact of modeling uncertainty on the damage detection problem is made. A robust Fuzzy Logic System (FLS) with sliding window defuzzifier is used for delamination damage detection in composite plate type structures. The FLS is designed using variations in modal frequencies due to randomness in material properties. Probabilistic analysis is performed using Monte Carlo Simulation (MCS) on a composite plate finite element model. It is demonstrated that the FLS shows excellent robustness in delamination detection at very high levels of randomness in the input data.
Directory of Open Access Journals (Sweden)
Мaryna O. Golofeyeva
2015-12-01
The effective use of heterogeneous non-metallic materials and structures requires reliable measured values of their dissipation characteristics, as well as knowledge of how these change during loading. Aim: The aim of this study is to prepare an uncertainty budget for measurements of the dissipative properties of composite materials. Materials and Methods: The method used to study vibrational energy dissipation, based on the coupling between the vibration damping decrement and the acoustic velocity in a non-metallic heterogeneous material, is reviewed. The method yields the dependence of damping on vibration amplitude and on the frequency of the strain-stress state of the material. Results: The accuracy of the measurement method was investigated for the determination of the vibration damping decrement of synthegran. The internationally accepted approach to evaluating measurement quality was used, comprising the common international rules for expressing and summing uncertainties; these rules serve as an internationally acknowledged measure of confidence in measurement results, including testing. An uncertainty budget for the acoustic method of measuring the dissipative properties of materials was compiled. Conclusions: Two groups of error sources were identified in the measurement of the dissipative properties of materials. The first group includes variation of the calibrated impactor parameters within tolerance limits, displacement of the sensor on repeated placement at the measurement point, variation in the thickness of the couplant layer due to uneven pressing of the transducers against the control surface, reading inaccuracy, etc. The second group is linked to errors in measuring density and Poisson's ratio, the distance between sensors, and the time difference between the signals of the vibroacoustic sensors.
Meskhidze, Nicholas; Johnson, Matthew S.; Hurley, David; Dawson, Kyle
2016-09-01
The atmospheric supply of mineral dust iron (Fe) plays a crucial role in the Earth's biogeochemical cycle and is of specific importance as a micronutrient in the marine environment. Observations show several orders of magnitude variability in the fractional solubility of Fe in mineral dust aerosols, making it hard to assess the role of mineral dust in the global ocean biogeochemical Fe cycle. In this study we compare the operational solubility of mineral dust aerosol Fe associated with the flow-through leaching protocol to the results of the global 3-D chemical transport model GEOS-Chem. According to the protocol, aerosol Fe is defined as soluble by first leaching mineral dust with deionized water through a 0.45 μm pore size membrane, followed by acidification and storage of the leachate over a long period of time prior to analysis. To estimate the uncertainty in soluble Fe results introduced by the flow-through leaching protocol, we prescribe an average 50% (range of 30-70%) fractional solubility to sub-0.45 μm sized mineral dust particles that may inadvertently pass the filter and end up in the acidified (at pH ∼ 1.7) leachate for a period of a couple of months. In the model, the fractional solubility of Fe is either explicitly calculated using complex mineral aerosol Fe dissolution equations, or prescribed to be 1% and 4%, values often used by global ocean biogeochemical Fe cycle models to reproduce the broad characteristics of the presently observed ocean dissolved iron distribution. Calculations show that the fractional solubility of Fe derived through the flow-through leaching is higher compared to the model results. The largest differences (∼40%) are predicted to occur farther away from the dust source regions, over the areas where sub-0.45 μm sized mineral dust particles contribute a larger fraction of the total mineral dust mass. This study suggests that different methods used in soluble Fe measurements and inconsistencies in the operational definition of
Directory of Open Access Journals (Sweden)
G. Bernhard
2011-12-01
Spectral ultraviolet (UV) irradiance has been observed near Barrow, Alaska (71° N, 157° W) between 1991 and 2011 with an SUV-100 spectroradiometer. The instrument was historically part of the US National Science Foundation's UV Monitoring Network and is now a component of NSF's Arctic Observing Network. From these measurements, trends in monthly average irradiance and their uncertainties were calculated. The analysis focuses on two quantities, the UV Index (which is affected by atmospheric ozone concentrations) and irradiance at 345 nm (which is virtually insensitive to ozone). Uncertainties of trend estimates depend on variations in the data due to (1) natural variability, (2) systematic and random errors of the measurements, and (3) uncertainties caused by gaps in the time series. Using radiative transfer model calculations, systematic errors of the measurements were detected and corrected. Different correction schemes were tested to quantify the sensitivity of the trend estimates to the treatment of systematic errors. Depending on the correction method, estimates of decadal trends changed between 1.5% and 2.9%. Uncertainties in the trend estimates caused by error sources (2) and (3) were set into relation with the overall uncertainty of the trend determinations. Results show that these error sources are only relevant for February, March, and April, when natural variability is low due to high surface albedo. This method of addressing measurement uncertainties in time series analysis is also applicable to other geophysical parameters. Trend estimates varied between −14% and +5% per decade and were significant (95.45% confidence level) only for the month of October. Depending on the correction method, October trends varied between −11.4% and −13.7% for irradiance at 345 nm and between −11.7% and −14.1% for the UV Index. These large trends are consistent with trends in short-wave (0.3–3.0 μm) solar irradiance measured with pyranometers at NOAA
Directory of Open Access Journals (Sweden)
Huani Qin
2014-01-01
In rough fuzzy set theory, the rough degree is used to characterize the uncertainty of a fuzzy set, and the rough entropy of a knowledge is used to depict the roughness of a rough classification. Both are effective, but they are not accurate enough. In this paper, we propose a new rough entropy of a rough fuzzy set that combines the rough degree with the rough entropy of a knowledge. Theoretical studies and examples show that the new rough entropy of a rough fuzzy set is suitable. As an application, we introduce it into a fuzzy-target decision-making table and establish a new method for evaluating the entropy weight of attributes.
International Nuclear Information System (INIS)
Milk is known to contain organohalogen compounds. A mixture of hexane and isopropanol was used to extract lipids from bovine milk, and neutron activation analysis (NAA) was employed to measure extractable organohalogens in the lipids. The samples were irradiated in a neutron flux of 2.5 × 10¹¹ cm⁻² s⁻¹ for 10 min, allowed to decay for 2 min, and counted for 10 min. Uncertainties associated with the preconcentration NAA measurements were investigated in detail. The mass fractions of halogens in mg kg⁻¹ and their relative expanded uncertainties in percent in bovine milk lipids were: 32 (8.4 %), 2.65 (9.8 %) and 0.211 (6.6 %) for Cl, Br and I, respectively. (author)
Energy Technology Data Exchange (ETDEWEB)
Ramanjaneyulu, P.S. [Radioanalytical Chemistry Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Sayi, Y.S. [Radioanalytical Chemistry Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)], E-mail: yssayi@barc.gov.in; Ramakumar, K.L. [Radioanalytical Chemistry Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)], E-mail: klram@barc.gov.in
2008-08-31
Quantification of boron in diverse materials of relevance in nuclear technology is essential in view of its high thermal neutron absorption cross section. A simple and sensitive method has been developed for the determination of boron in uranium-aluminum-silicon alloy, based on leaching of boron with 6 M HCl and H{sub 2}O{sub 2}, its selective separation by solvent extraction with 2-ethyl-1,3-hexanediol and quantification by spectrophotometry using curcumin. The method has been evaluated by the standard addition method and validated by inductively coupled plasma-atomic emission spectroscopy. The relative standard deviation and absolute detection limit of the method are 3.0% (at the 1{sigma} level) and 12 ng, respectively. All possible sources of uncertainty in the methodology have been individually assessed, following the International Organization for Standardization guidelines. The combined uncertainty is calculated employing uncertainty propagation formulae. The expanded uncertainty in the measurement at the 95% confidence level (coverage factor 2) is 8.840%.
International Nuclear Information System (INIS)
Chromium is one of the important elements that provide desirable strength to the different grades of steel chosen as structural materials for upcoming fast breeder reactors. Its estimation is therefore an important part of qualifying steels for the desired applications. Several methods have been cited in the literature for the estimation of chromium in steels, employing sophisticated instruments such as XRFS, spark-based OES and UV-Visible spectrophotometers, as well as classical volumetric titration. Being surface-based techniques, both XRFS and spark OES have the limitation of requiring matrix-matched standards, apart from the high cost of the instrumentation. The volumetric method is time consuming, and the spectrophotometric method cited involves cumbersome chemical treatment to convert all of the chromium into the measurable form of Cr (VI) with subsequent measurement by UV-Visible spectrophotometer at 350 nm or 373 nm. As this method involves a time-consuming sample preparation step, it is also not a preferred method for an industrial laboratory, where high analytical loads normally exist and quick analytical feedback is an issue. In view of the limitations of the methods cited above, an attempt has been made to develop a simple and direct method for the estimation of chromium in different grades of steel containing chromium in the range of 4.75%-26%. Further, the present paper also evaluates the measurement uncertainty (MU) in the measurement of chromium in different grades of steel. The developed method involves the dissolution of steel in aqua regia followed by perchloric acid fuming to convert total chromium to Cr (VI), and subsequent measurement at 447 nm after adding phosphoric acid to a suitable aliquot taken from the stock solution. Phosphoric acid is added to mask the iron present in solution. To quantify the measurement uncertainty, the methodology given in the EURACHEM/CITAC guide CG-4 has been followed. The expanded uncertainty at the 95% confidence limit is
The Monte Carlo Method in the Evaluation of Measurement Uncertainty
Institute of Scientific and Technical Information of China (English)
陈雅
2012-01-01
This paper introduces the Monte Carlo method and the problem of measurement uncertainty. When it is difficult or inconvenient to apply the GUM uncertainty framework, which uses the law of propagation of uncertainty to evaluate measurement uncertainty, the Monte Carlo Method (MCM) is a practical alternative.
Uncertainties in façade fire tests – measurements and modeling
Directory of Open Access Journals (Sweden)
Anderson Johan
2016-01-01
In this paper a comparison between test and modelling results is performed for two large-scale façade fire testing methods, namely SP Fire 105 and BS 8414-1. In order to compare tests and modelling, the uncertainties have to be quantified both in the test and in the modelling. Here we present a methodology based on deterministic sampling to quantify uncertainties in the modelling input. We find, in general, good agreement between the models and the test results. Moreover, the temperatures estimated by plate thermometers are indicated to be less sensitive to small variations in model input and are thus suitable for these kinds of comparisons.
Energy Technology Data Exchange (ETDEWEB)
Teles, Francisco A.S.; Santos, Ebenezer F.; Dantas, Carlos C., E-mail: francisco.teles@ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Centro de Tecnologia e Geociencias. Departamento de Energia Nuclear; Melo, Silvio B., E-mail: sbm@cin.ufpe.br [Universidade Federal de Pernambuco (CIN/UFPE), Recife, PE (Brazil). Centro de Informatica; Santos, Valdemir A. dos, E-mail: vas@unicap.br [Universidade Catolica de Pernambuco (UNICAP), Recife, PE (Brazil). Dept. de Quimica; Lima, Emerson A.O., E-mail: emathematics@gmail.com [Universidade de Pernambuco (POLI/UPE), Recife, PE (Brazil). Escola Politecnica
2013-07-01
In this paper, the fluid dynamics of the Fluid Catalytic Cracking (FCC) process is investigated by means of a Cold Flow Pilot Unit (CFPU) constructed in Plexiglas to visualize operational conditions. Axial and radial catalyst profiles were measured by gamma-ray transmission in the riser of the CFPU. The standard uncertainty was evaluated for volumetric solid fraction measurements at several concentrations at a given point of the axial profile. Monitoring of the pressure drop in the riser shows good agreement with the measured standard uncertainty data. A further evaluation of the combined uncertainty was applied to the volumetric solid fraction equation using gamma transmission data. A limit condition of catalyst concentration in the riser was defined, and a simulation with random numbers generated in MATLAB was used to test the uncertainty evaluation. The Guide to the expression of Uncertainty in Measurement (GUM) is based on the law of propagation of uncertainty and on the characterization of the measured quantities by means of either a Gaussian distribution or a t-distribution, which allows the measurement uncertainty to be delimited by means of a confidence interval. A variety of supplements to the GUM are being developed, which will progressively enter into effect. The first of these supplements [3] describes an alternative procedure for the calculation of uncertainties: the Monte Carlo Method (MCM). The MCM is an alternative to the GUM framework, since it characterizes the measured quantities based on random sampling of their probability distribution functions. This paper also explains the basic implementation of the MCM in MATLAB. (author)
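The MCM idea applied to a gamma-transmission measurement can be sketched as follows. The authors' actual model and MATLAB implementation are not reproduced here; the Beer-Lambert-style model and every numerical value below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 100_000

# Hypothetical transmission model for the volumetric solid fraction:
#   eps = ln(I0 / I) / (mu * rho_s * L)
# mu: mass attenuation coefficient, rho_s: catalyst density,
# L: riser inner diameter. All numbers below are illustrative only.
I0    = rng.normal(12000.0, 110.0, M)  # incident count rate, counts/s
I     = rng.normal(10000.0, 100.0, M)  # transmitted count rate, counts/s
mu    = rng.normal(0.077, 0.001, M)    # cm^2 g^-1
rho_s = rng.normal(1.4, 0.01, M)       # g cm^-3
L     = rng.normal(10.0, 0.05, M)      # cm

eps = np.log(I0 / I) / (mu * rho_s * L)
eps_mean, u_eps = eps.mean(), eps.std(ddof=1)
```

The standard uncertainty of the solid fraction then falls out of the sample statistics, with no need to differentiate the model as the law of propagation of uncertainty requires.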
International Nuclear Information System (INIS)
Intravenous drug delivery is a standard practice for hospitalized patients. As the blood concentration reached depends directly on the infusion rate, it is important to use safe devices that guarantee output accuracy. In pediatric intensive care units, low infusion rates (i.e. lower than 10.0 ml/h) are frequently used, so control programs are needed to search for deviations in this flow range. We describe the implementation of a gravimetric method to test infusion pumps at low flow rates. The procedure recommended by the ISO/IEC 60601-2-24 standard was used, it being a reasonable option among the methods frequently used in hospitals, such as infusion pump analyzers and volumetric cylinders. The main uncertainty sources affecting this method are reviewed, and a numerical and graphical uncertainty analysis is presented in order to show its dependence on flow. Additionally, the obtained uncertainties are compared to those presented by an automatic flow analyzer. Finally, the results of a series of tests performed on a syringe infusion pump operating at low rates are shown.
International Nuclear Information System (INIS)
The main objectives of this research thesis are the management and reduction of uncertainties associated with measurements performed by means of a fission-chamber type sensor. The author first recalls the role of experimental reactors in nuclear research, presents the various sensors used in nuclear detection (photographic film, scintillation sensor, gas ionization sensor, semiconducting sensor, other types of radiation sensors), and more particularly addresses neutron detection (activation sensor, gas filling sensor). In a second part, the author gives an overview of the state of the art of neutron measurement by fission chamber in a mock-up reactor (signal formation, processing and post-processing, associated measurements and uncertainties, return on experience of measurements by fission chamber on Masurca and Minerve research reactors). In a third part, he reports the optimization of two intrinsic parameters of this sensor: the thickness of fissile material deposit, and the pressure and nature of the filler gas. The fourth part addresses the improvement of measurement electronics and of post-processing methods which are used for result analysis. The fifth part deals with the optimization of spectrum index measurements by means of a fission chamber. The impact of each parameter is quantified. Results explain some inconsistencies noticed in measurements performed on the Minerve reactor in 2004, and allow the improvement of biases with computed values
Kim, Jae-Min; Choi, Seung-Hyun; Shin, Gi-Hae; Lee, Jin-Ha; Kang, Seong-Ran; Lee, Kyun-Young; Lim, Ho-Soo; Kang, Tae Seok; Lee, Ok-Hwan
2016-12-15
This study investigated a method for the validation and determination of measurement uncertainty for the simultaneous determination of synthetic phenolic antioxidants (SPAs) such as propyl gallate (PG), octyl gallate (OG), dodecyl gallate (DG), 2,4,5-trihydroxybutyrophenone (THBP), tert-butylhydroquinone (TBHQ), butylated hydroxyanisole (BHA), and butylated hydroxytoluene (BHT) in edible oils commonly consumed in Korea. The validated method was able to extract SPA residues under the optimized HPLC-UV and LC-MS/MS conditions. Furthermore, the measurement uncertainty was evaluated based on a precision study. For HPLC-UV analysis, the recoveries of SPAs ranged from 91.4% to 115.9%, with relative standard deviations between 0.3% and 11.4%. In addition, the expanded uncertainties of the SPAs ranged from 0.15 to 5.91. These results indicate that the validated method is appropriate for the extraction and determination of SPAs and can be used to verify the safety of edible oil products containing SPA residues. PMID:27451150
Sarangi, Bighnaraj; Aggarwal, Shankar G.; Sinha, Deepak; Gupta, Prabhat K.
2016-03-01
In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from solutions of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and also applied for ambient measurement in New Delhi. We also discuss the uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyser (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other is sent to the QCM to measure the particle mass concentration simultaneously. Based on particle volume derived from size distribution data of the SMPS and mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of inorganic salt particles (for particle count mean diameter (CMD) over a size range of 10-478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm-3, values which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm-3, respectively. Using this technique, the percentage contribution of error in the measurement of effective density is calculated to be in the range of 9-17 %. Among the individual uncertainty components, the repeatability of particle mass obtained by the QCM, the QCM crystal frequency, the CPC counting efficiency, and the equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. The effective density for ambient particles at the beginning of the winter period in New Delhi was measured to be 1.28 ± 0.12 g cm-3.
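The SMPS/QCM effective-density estimate described above reduces to dividing the measured mass concentration by the particle volume integrated from the size distribution. A minimal sketch, with entirely hypothetical bin data and QCM reading:

```python
import numpy as np

# Hypothetical SMPS size distribution: bin midpoint diameters (nm) and
# number concentrations (particles cm^-3); all values are invented.
d_nm = np.array([50.0, 100.0, 200.0, 400.0])
n_cc = np.array([4000.0, 2500.0, 600.0, 50.0])

# Total particle volume per cm^3 of air, assuming spheres (1 nm = 1e-7 cm)
v_cc = np.sum(n_cc * (np.pi / 6.0) * (d_nm * 1e-7) ** 3)   # cm^3 cm^-3

# Hypothetical simultaneous QCM mass concentration of 10.1 ug m^-3,
# converted to g cm^-3 (1 ug = 1e-6 g, 1 m^3 = 1e6 cm^3)
m_cc = 10.1e-6 / 1e6

rho_eff = m_cc / v_cc    # effective density, g cm^-3
```

The sphericity assumption matters: for non-spherical ambient particles the mobility diameter overstates the volume, which is one reason effective densities can fall below the bulk material density.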
Hassler, B.; Petropavlovskikh, I; J. Staehelin; August, T.; Bhartia, P. K.; Clerbaux, Cathy; Degenstein, D.; De Mazière, M.; DINELLI, B. M.; Dudhia, A.; Dufour, G.; Frith, S. M.; Froidevaux, L.; Godin-Beekmann, Sophie; Granville, J.
2014-01-01
Peak stratospheric chlorofluorocarbon (CFC) and other ozone depleting substance (ODS) concentrations were reached in the mid to late 1990s. Detection and attribution of the expected recovery of the stratospheric ozone layer in an atmosphere with reduced ODSs as well as efforts to understand the evolution of stratospheric ozone in the presence of increasing greenhouse gases are key current research topics. These require a critical examination of the ozone changes with an accurate knowledge of ...
Hassler, B.; Petropavlovskikh, I; J. Staehelin; August, T.; Bhartia, P. K.; Clerbaux, C.; Degenstein, D.; De Mazière, M.; DINELLI, B. M.; Dudhia, A.; Dufour, G.; Frith, S. M.; Froidevaux, L.; S. Godin-Beekmann; Granville, J.
2014-01-01
Peak stratospheric chlorofluorocarbon (CFC) and other ozone depleting substance (ODS) concentrations were reached in the mid- to late 1990s. Detection and attribution of the expected recovery of the stratospheric ozone layer in an atmosphere with reduced ODSs as well as efforts to understand the evolution of stratospheric ozone in the presence of increasing greenhouse gases are key current research topics. These require a critical examination of the ozone ch...
L. Leydesdorff; P. Zhou
2014-01-01
Using the possible synergy among geographic, size, and technological distributions of firms in the Orbis database, we find the greatest reduction of uncertainty at the level of the 31 provinces of China, and an additional 18.0 % at the national level. Some of the coastal provinces stand out as expected
On the evaluation of a fuel assembly design by means of uncertainty and sensitivity measures
International Nuclear Information System (INIS)
This paper provides results of an uncertainty and sensitivity study used to calculate parameters of safety-related importance, such as the fuel centerline temperature, the cladding temperature and the fuel assembly pressure drop of a lead-alloy cooled fast system. Applying best-practice guidelines, a list of uncertain parameters has been identified. The considered parameter variations are based on the experience gained during fabrication and operation of former and existing liquid-metal cooled fast systems, as well as on experimental results and engineering judgment. (orig.)
International Nuclear Information System (INIS)
The effectiveness of an ultrasound treatment shall be assessed by experiments. A reliable cell culture protocol is available, but spatial discrepancies could arise. To assess whether the spatial differences are relevant, and how they should be dealt with, an uncertainty model for the treatment result is a metrologically reliable solution. The present work reports a metrological approach to assess myotube thickness and to validate a primary muscle cell culture after a therapeutic ultrasound treatment, comparing it with a control group. The results reinforce the importance of such an approach and show the efficacy of the treatment on myotube differentiation
Directory of Open Access Journals (Sweden)
B. Sarangi
2015-12-01
In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from solutions of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and also applied for ambient measurement in New Delhi. We also discuss the uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyzer (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other is sent to the QCM to measure the particle mass concentration simultaneously. Based on particle volume derived from size distribution data of the SMPS and mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of inorganic salt particles (for particle count mean diameter (CMD) over a size range of 10 to 478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm−3, values which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm−3, respectively. Among the individual uncertainty components, the repeatability of particle mass obtained by the QCM, the QCM crystal frequency, the CPC counting efficiency, and the equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. The effective density for ambient particles at the beginning of the winter period in New Delhi is measured to be 1.28 ± 0.12 g cm−3. It was found that, in general, the mid-day effective density of ambient aerosols increases with an increase in the CMD of the particle size measurement, but particle photochemistry is an
Bertrand-Krajewski, J L
2004-01-01
In order to replace traditional sampling and analysis techniques, turbidimeters can be used to estimate TSS concentration in sewers, by means of sensor- and site-specific empirical equations established by linear regression of on-site turbidity values T with TSS concentrations C measured in corresponding samples. As the ordinary least-squares method is not able to account for measurement uncertainties in both the T and C variables, an appropriate regression method is used to overcome this difficulty and to correctly evaluate the uncertainty in TSS concentrations estimated from measured turbidity. The regression method is described, including detailed calculations of the variances and covariance of the regression parameters. An example of application is given for a calibrated turbidimeter used in a combined sewer system, with data collected during three dry-weather days. In order to show how the established regression could be used, an independent 24 h long dry-weather turbidity data series recorded at a 2 min time interval is transformed into estimated TSS concentrations and compared to TSS concentrations measured in samples. The comparison appears satisfactory and suggests that turbidity measurements could replace traditional samples. Further developments, including wet-weather periods and other types of sensors, are suggested.
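The paper's specific regression formulation is not reproduced here; one standard approach to straight-line fitting with uncertainties in both variables is York's iterative scheme, sketched below under the assumption of uncorrelated errors:

```python
import numpy as np

def york_fit(x, y, sx, sy, n_iter=50):
    """Straight-line fit y = a + b*x with uncertainties sx, sy in both
    variables (York's iterative scheme, uncorrelated errors assumed)."""
    wx, wy = 1.0 / sx**2, 1.0 / sy**2
    b = np.polyfit(x, y, 1)[0]                 # OLS slope as starting value
    for _ in range(n_iter):
        w = wx * wy / (wx + b**2 * wy)         # combined weights
        xb, yb = np.average(x, weights=w), np.average(y, weights=w)
        u, v = x - xb, y - yb
        beta = w * (u / wy + b * v / wx)
        b = np.sum(w * beta * v) / np.sum(w * beta * u)
    a = yb - b * xb
    return a, b

# Demo on an exact line y = 1 + 2x: the fit recovers a = 1, b = 2
x = np.linspace(0.0, 10.0, 20)
a, b = york_fit(x, 1.0 + 2.0 * x, sx=np.full(20, 0.1), sy=np.full(20, 0.2))
```

When the error variances in T and C differ strongly, this kind of fit gives slope and intercept estimates (and hence TSS predictions) that ordinary least squares would bias.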
Adams, R.; Costelloe, J. F.; Western, A. W.; George, B.
2013-10-01
An improved understanding of water balances of rivers is fundamental in water resource management. Effective use of a water balance approach requires thorough identification of sources of uncertainty around all terms in the analysis and can benefit from additional, independent information that can be used to interpret the accuracy of the residual term of a water balance. We use a Monte Carlo approach to estimate a longitudinal river channel water balance and to identify its sources of uncertainty for a regulated river in south-eastern Australia, assuming that the residual term of this water balance represents fluxes between groundwater and the river. Additional information from short term monitoring of ungauged tributaries and groundwater heads is used to further test our confidence in the estimates of error and variance for the major components of this water balance. We identify the following conclusions from the water balance analysis. First, improved identification of the major sources of error in consecutive reaches of a catchment can be used to support monitoring infrastructure design to best reduce the largest sources of error in a water balance. Second, estimation of ungauged inflow using rainfall-runoff modelling is sensitive to the representativeness of available gauged data in characterising the flow regime of sub-catchments along a perennial to intermittent continuum. Lastly, comparison of temporal variability of stream-groundwater head difference data and a residual water balance term provides an independent means of assessing the assumption that the residual term represents net stream-groundwater fluxes.
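A Monte Carlo water-balance residual of the kind described can be sketched as follows; the flux terms, their magnitudes and their uncertainties are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
M = 50_000

# Hypothetical monthly reach water balance, volumes in GL; gauged terms
# carry roughly 5 % relative uncertainty, the rainfall-runoff estimate
# of ungauged tributary inflow about 30 %. All values are invented.
q_in   = rng.normal(120.0, 6.0, M)   # upstream gauge
q_out  = rng.normal(110.0, 5.5, M)   # downstream gauge
q_trib = rng.normal(4.0, 1.2, M)     # modelled ungauged tributary inflow
q_div  = rng.normal(8.0, 0.4, M)     # metered diversions

# Residual term, interpreted as net stream-groundwater flux
# (positive = net loss from the river)
residual = q_in + q_trib - q_out - q_div
res_mean, u_res = residual.mean(), residual.std(ddof=1)
```

The spread of `residual` makes the point in the abstract concrete: the gauged main-channel terms dominate the residual uncertainty here, so reducing their errors would do the most to sharpen the inferred stream-groundwater flux.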
Uncertainty Evaluation of Electronic Weighing Scale Measurement Results
Institute of Scientific and Technical Information of China (English)
刘海华
2012-01-01
This paper mainly introduces the sources of uncertainty in electronic weighing scale testing, the calculation of each component of the standard uncertainty, and the combined standard uncertainty and expanded uncertainty
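The combined and expanded uncertainty calculation that such an evaluation typically follows is the GUM root-sum-of-squares rule. A minimal sketch with hypothetical components for one weighing test load:

```python
import math

# Hypothetical standard-uncertainty components for one test load, in g
u_repeat = 0.12                # Type A: repeatability (std dev of the mean)
u_resol  = 0.5 / math.sqrt(3)  # Type B: 1 g display resolution, rectangular
u_ref    = 0.10                # Type B: reference weight certificate

# Combined standard uncertainty (root sum of squares) and expanded
# uncertainty with coverage factor k = 2 (~95 % level of confidence)
u_c = math.sqrt(u_repeat**2 + u_resol**2 + u_ref**2)
U = 2.0 * u_c
```

In this invented example the display resolution is the dominant component, which is common for scale verification at low loads.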