Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
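The descriptive statistics the report describes can be illustrated with a small sketch (illustrative code, not from the report; the data values are invented). For interval data, the sample mean is itself an interval obtained by averaging the endpoints separately; bounds on the sample variance are harder, since the general problem is NP-hard. Here the maximum is found exactly by vertex enumeration (exponential in n, fine for tiny data sets) and the minimum of the convex quadratic by coordinate descent:

```python
import itertools
import statistics

def interval_mean(intervals):
    """The sample mean of interval data is itself an interval:
    average the lower and upper endpoints separately."""
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

def interval_variance_bounds(intervals, iters=50):
    """Bounds on the sample variance over all point configurations.

    The maximum of the convex quadratic is attained at a vertex of the
    box, so vertex enumeration is exact (but exponential in n).  The
    minimum is found by coordinate descent: each x_i moves to the mean
    of the others, clipped to its own interval."""
    vmax = max(statistics.variance(p)
               for p in itertools.product(*intervals))
    x = [(lo + hi) / 2 for lo, hi in intervals]
    n = len(x)
    for _ in range(iters):
        for i, (lo, hi) in enumerate(intervals):
            m = (sum(x) - x[i]) / (n - 1)  # mean of the other points
            x[i] = min(max(m, lo), hi)
    return statistics.variance(x), vmax

data = [(1.0, 1.4), (2.1, 2.3), (2.9, 3.5), (4.0, 4.2)]
print(interval_mean(data))
print(interval_variance_bounds(data))
```

When intervals overlap, the lower variance bound collapses to zero, since all measurements could have come from the same true value.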
Directory of Open Access Journals (Sweden)
Vicari Kristin J
2012-04-01
Background: Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE) model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results: We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP) estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and the conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation) as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS) in the biomass slurries. Conclusions: We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements. This result sets a lower bound on the error bars of
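The propagation of primary-measurement uncertainty into a model output can be sketched with a simple Monte Carlo draw. The model below is a hypothetical linear stand-in for a TE model, not the authors' model; the coefficients, nominal yields, and standard deviations are all invented for illustration:

```python
import random
import statistics

random.seed(0)

def mesp_toy(xylose_yield, glucose_yield, ethanol_yield):
    """Hypothetical linear stand-in for a techno-economic model:
    maps conversion yields to a selling-price estimate ($/gal)."""
    return 5.0 - 1.2 * xylose_yield - 1.5 * glucose_yield - 2.0 * ethanol_yield

# Draw the primary yields from their (invented) measurement
# uncertainties and propagate each draw through the model.
draws = [mesp_toy(random.gauss(0.75, 0.02),
                  random.gauss(0.80, 0.015),
                  random.gauss(0.90, 0.01))
         for _ in range(20000)]
print(round(statistics.mean(draws), 3), round(statistics.stdev(draws), 3))
```

The standard deviation of the draws is the minimum uncertainty the output inherits from the primary measurements, the same quantity the abstract reports as $0.15/gal for the full model.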
Energy Technology Data Exchange (ETDEWEB)
Park, Jae Phil; Bahn, Chi Bum [Pusan National University, Busan (Korea, Republic of)
2016-10-15
It is well known that stress corrosion cracking (SCC) is one of the main material-related issues in operating nuclear reactors. To predict the initiation time of SCC, the Weibull distribution is widely used as a statistical model representing SCC reliability. The typical experimental procedure of an SCC initiation test involves an interval-censored cracking test with several specimens. From the result of the test, the experimenters can estimate the parameters of the Weibull distribution by maximum likelihood estimation (MLE) or median rank regression (MRR). However, in order to obtain sufficient accuracy of the Weibull estimators, it is hard for experimenters to determine the proper number of test specimens and censoring intervals. Therefore, in this work, the effects of some experimental conditions on the estimation uncertainties of the Weibull distribution were studied through Monte Carlo simulation. The main goal of this work is to suggest quantitative estimation uncertainties for experimenters who want to develop a probabilistic SCC initiation model from a cracking test. The widely used MRR and MLE are considered as estimation methods of the Weibull distribution. Using a Monte Carlo simulation, the uncertainties of the MRR and ML estimators were quantified in various experimental cases, and the uncertainties were compared between the TDCI and TICI cases.
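The Monte Carlo approach described above can be sketched as follows (an illustrative simplification assuming uncensored data, whereas the paper treats interval-censored tests; the sample size, trial count, and true parameters are invented). Repeatedly simulate a cracking test, fit the Weibull parameters by MLE, and summarize the spread of the estimates:

```python
import math
import random
import statistics

random.seed(1)

def weibull_mle(x):
    """Two-parameter Weibull MLE: solve the profile likelihood
    equation for the shape k by bisection (the profile equation is
    monotone in k), then get the scale in closed form."""
    logs = [math.log(v) for v in x]
    mean_log = statistics.mean(logs)

    def f(k):
        pk = [v ** k for v in x]
        return (sum(p * l for p, l in zip(pk, logs)) / sum(pk)
                - 1.0 / k - mean_log)

    lo, hi = 0.01, 50.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    k = 0.5 * (lo + hi)
    scale = (sum(v ** k for v in x) / len(x)) ** (1.0 / k)
    return k, scale

# Monte Carlo: spread of the shape estimate for a hypothetical
# test with n specimens per trial.
true_shape, true_scale, n = 2.0, 1.0, 50
shapes = []
for _ in range(300):
    sample = [random.weibullvariate(true_scale, true_shape)
              for _ in range(n)]
    shapes.append(weibull_mle(sample)[0])
qs = statistics.quantiles(shapes, n=20)  # 5%, 10%, ..., 95% cut points
print(round(qs[0], 2), round(qs[-1], 2))
```

The width of the band between the outer quantiles is exactly the kind of quantitative estimation uncertainty the authors tabulate as a function of specimen count and censoring scheme.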
Directory of Open Access Journals (Sweden)
Patel Kamlesh
2015-01-01
In this paper, the effects of the input quantity representations in linear and complex forms are analyzed to estimate mismatch uncertainty separately for one-port and two-port components. The mismatch uncertainties in power and attenuation measurements are evaluated for direct, ratio and substitution techniques with the use of a vector network analyzer system in the range of 1 to 18 GHz. The estimated mismatch uncertainties were compared for the same device under test, and these values verified that their evaluation depends on the representations of the input quantities. In power measurements, the mismatch uncertainty is reduced when evaluated from the voltage standing wave ratio or reflection coefficient magnitudes rather than from the complex reflection coefficients. The mismatch uncertainty in attenuation measurements is found to be higher, and to increase linearly, when estimated from linear magnitude values rather than from the S-parameters of the attenuator. Thus, in practice, the mismatch uncertainty is estimated more accurately using quantities measured in the same representation as the measured quantity.
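The contrast between magnitude-only and complex evaluation can be sketched numerically (a toy illustration with invented reflection coefficients; sign conventions for mismatch terms vary between texts, so here the mismatch term is simply the deviation of |1 - GammaG*GammaL| from unity, in dB). With only magnitudes known, one can state worst-case limits; with the complex values, a single number results:

```python
import cmath
import math

def mismatch_limits_db(gamma_g_mag, gamma_l_mag):
    """Worst-case mismatch limits (dB) when only the magnitudes of
    the generator and load reflection coefficients are known."""
    p = gamma_g_mag * gamma_l_mag
    return 20 * math.log10(1 - p), 20 * math.log10(1 + p)

def mismatch_db(gamma_g, gamma_l):
    """Mismatch term (dB) when the complex reflection
    coefficients are available."""
    return 20 * math.log10(abs(1 - gamma_g * gamma_l))

gg = 0.10 * cmath.exp(1j * 0.8)   # hypothetical generator match
gl = 0.15 * cmath.exp(-1j * 2.1)  # hypothetical load match
lo, hi = mismatch_limits_db(abs(gg), abs(gl))
print(round(lo, 3), round(hi, 3), round(mismatch_db(gg, gl), 3))
```

The complex-valued result always falls inside the magnitude-only limits, which is why the evaluated uncertainty depends on which representation of the input quantities is used.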
Evaluating uncertainty in 7Be-based soil erosion estimates: an experimental plot approach
Blake, Will; Taylor, Alex; Abdelli, Wahid; Gaspar, Leticia; Barri, Bashar Al; Ryken, Nick; Mabit, Lionel
2014-05-01
Soil erosion remains a major concern for the international community and there is a growing need to improve the sustainability of agriculture to support future food security. High resolution soil erosion data are a fundamental requirement for underpinning soil conservation and management strategies but representative data on soil erosion rates are difficult to achieve by conventional means without interfering with farming practice and hence compromising the representativeness of results. Fallout radionuclide (FRN) tracer technology offers a solution since FRN tracers are delivered to the soil surface by natural processes and, where irreversible binding can be demonstrated, redistributed in association with soil particles. While much work has demonstrated the potential of short-lived 7Be (half-life 53 days), particularly in quantification of short-term inter-rill erosion, less attention has focussed on sources of uncertainty in derived erosion measurements and sampling strategies to minimise these. This poster outlines and discusses potential sources of uncertainty in 7Be-based soil erosion estimates and the experimental design considerations taken to quantify these in the context of a plot-scale validation experiment. Traditionally, gamma counting statistics have been the main element of uncertainty propagated and reported but recent work has shown that other factors may be more important such as: (i) spatial variability in the relaxation mass depth that describes the shape of the 7Be depth distribution for an uneroded point; (ii) spatial variability in fallout (linked to rainfall patterns and shadowing) over both reference site and plot; (iii) particle size sorting effects; (iv) preferential mobility of fallout over active runoff contributing areas. To explore these aspects in more detail, a plot of 4 x 35 m was ploughed and tilled to create a bare, sloped soil surface at the beginning of winter 2013/2014 in southwest UK. The lower edge of the plot was bounded by
Experimental Joint Quantum Measurements with Minimum Uncertainty
Ringbauer, Martin; Biggerstaff, Devon N.; Broome, Matthew A.; Fedrizzi, Alessandro; Branciard, Cyril; White, Andrew G.
2014-01-01
Quantum physics constrains the accuracy of joint measurements of incompatible observables. Here we test tight measurement-uncertainty relations using single photons. We implement two independent, idealized uncertainty-estimation methods, the three-state method and the weak-measurement method, and adapt them to realistic experimental conditions. Exceptional quantum state fidelities of up to 0.999 98(6) allow us to verge upon the fundamental limits of measurement uncertainty.
Estimating Uncertainty in Annual Forest Inventory Estimates
Ronald E. McRoberts; Veronica C. Lessard
1999-01-01
The precision of annual forest inventory estimates may be negatively affected by uncertainty from a variety of sources including: (1) sampling error; (2) procedures for updating plots not measured in the current year; and (3) measurement errors. The impact of these sources of uncertainty on final inventory estimates is investigated using Monte Carlo simulation...
Estimating uncertainties in complex joint inverse problems
Afonso, Juan Carlos
2016-04-01
Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should always be conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (a.k.a. Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular attention must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related
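The claim that all uncertainty estimates are model dependent has a one-line illustration (a toy conjugate example, not from this abstract; the data and noise levels are invented). With a flat prior, the posterior spread for an unknown constant is set entirely by the noise level the analyst assumes, not by anything measured:

```python
import math
import random
import statistics

random.seed(5)
# Synthetic data: 25 noisy observations of an unknown constant.
data = [random.gauss(3.0, 1.0) for _ in range(25)]
xbar, n = statistics.mean(data), len(data)

# With a flat prior and an assumed Gaussian noise level sigma, the
# posterior for the constant is N(xbar, sigma/sqrt(n)): the reported
# uncertainty is fixed by the assumed error model, so doubling the
# assumed sigma doubles the "estimated" uncertainty on the same data.
for assumed_sigma in (0.5, 1.0, 2.0):
    print(f"assumed sigma={assumed_sigma}: "
          f"posterior sd={assumed_sigma / math.sqrt(n):.3f}")
```

The same data yield posterior standard deviations differing by a factor of four, which is the point: the "realism" of a Bayesian uncertainty is only as good as the statistical model wrapped around the data.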
Statistical approach for uncertainty quantification of experimental modal model parameters
DEFF Research Database (Denmark)
Luczak, M.; Peeters, B.; Kahsin, M.
2014-01-01
...... estimates obtained from vibration experiments. Modal testing results are influenced by numerous factors introducing uncertainty to the measurement results. Different experimental techniques applied to the same test item, or testing numerous nominally identical specimens, yield different test results...... This paper aims at a systematic approach for uncertainty quantification of the parameters of the modal models estimated from experimentally obtained data. Statistical analysis of modal parameters is implemented to derive an assessment of the entire modal model uncertainty measure. Investigated structures......
Uncertainty estimation by convolution using spatial statistics.
Sanchez-Brea, Luis Miguel; Bernabeu, Eusebio
2006-10-01
Kriging has proven to be a useful tool in image processing since it behaves, under regular sampling, as a convolution. Convolution kernels obtained with kriging allow noise filtering and include the effects of the random fluctuations of the experimental data and the resolution of the measuring devices. The uncertainty at each location of the image can also be determined using kriging. However, this procedure is slow since, currently, only matrix methods are available. In this work, we compare the way kriging performs the uncertainty estimation with the standard statistical technique for magnitudes without spatial dependence. As a result, we propose a much faster technique, based on the variogram, to determine the uncertainty using a convolutional procedure. We check the validity of this approach by applying it to one-dimensional images obtained in diffractometry and two-dimensional images obtained by shadow moiré.
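The variogram-based idea can be illustrated without any kriging matrices (a minimal sketch under invented signal and noise parameters, not the authors' algorithm). For regularly sampled data, the empirical semivariance at the smallest lag is dominated by the measurement noise whenever the underlying signal varies slowly, so it yields a fast local uncertainty estimate:

```python
import math
import random
import statistics

random.seed(2)
# Regularly sampled 1-D "image": smooth signal plus measurement noise.
true_noise_sd = 0.1
z = [math.sin(0.01 * i) + random.gauss(0.0, true_noise_sd)
     for i in range(5000)]

def semivariogram(z, h):
    """Empirical semivariance at lag h: half the mean squared increment."""
    diffs = [(z[i + h] - z[i]) ** 2 for i in range(len(z) - h)]
    return 0.5 * statistics.mean(diffs)

# For a slowly varying signal the lag-1 semivariance approximates the
# noise variance (the "nugget"), recovering the measurement uncertainty
# with a single pass over the data and no matrix inversion.
noise_sd_est = math.sqrt(semivariogram(z, 1))
print(round(noise_sd_est, 3))
```

The estimate recovers the injected noise level of 0.1 closely, which is the kind of speed-up over matrix-based kriging uncertainty that the abstract describes.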
Estimating uncertainty of inference for validation
Energy Technology Data Exchange (ETDEWEB)
Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM
2010-09-30
We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the
Estimation of Modal Parameters and their Uncertainties
DEFF Research Database (Denmark)
Andersen, P.; Brincker, Rune
1999-01-01
In this paper it is shown how to estimate the modal parameters of a dynamic system, as well as their uncertainties, using the prediction error method on the basis of output measurements only. The estimation scheme is assessed by means of a simulation study. As a part of the introduction, an example...... is given showing how the uncertainty estimates can be used in applications such as damage detection....
Estimating the uncertainty in underresolved nonlinear dynamics
Energy Technology Data Exchange (ETDEWEB)
Chorin, Alexandre; Hald, Ole
2013-06-12
The Mori-Zwanzig formalism of statistical mechanics is used to estimate the uncertainty caused by underresolution in the solution of a nonlinear dynamical system. A general approach is outlined and applied to a simple example. The noise term that describes the uncertainty turns out to be neither Markovian nor Gaussian. It is argued that this is the general situation.
Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty
DEFF Research Database (Denmark)
Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens
...... and property prediction models. While use of experimentally measured values for the needed properties is desirable in process design, the experimental data for the compounds of interest may not be available in many cases. Therefore, development of efficient and reliable property prediction methods and tools...... critical temperature, acentric factor, etc. In such cases, accurate property values along with uncertainty estimates are needed to perform sensitivity analysis and quantify the effects of these uncertainties on the process design. The objective of this work is to develop a systematic methodology to provide...... the results of uncertainty analysis to predict the uncertainties in process design. For parameter estimation, large data-sets of experimentally measured property values for a wide range of pure compounds are taken from the CAPEC database. A classical frequentist approach, i.e., the least squares method, is adopted......
Estimating uncertainty in resolution tests
CSIR Research Space (South Africa)
Goncalves, DP
2006-05-01
frequencies yields a biased estimate, and we provide an improved estimator. An application illustrates how the results derived can be incorporated into a larger uncertainty analysis. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2202914] Subject terms: resolution testing; USAF 1951 test target; resolution uncertainty. Paper 050404R received May 20, 2005; revised manuscript received Sep. 2, 2005; accepted for publication Sep. 9, 2005; published online May 10, 2006.
Estimating uncertainty in map intersections
Ronald E. McRoberts; Mark A. Hatfield; Susan J. Crocker
2009-01-01
Traditionally, natural resource managers have asked the question "How much?" and have received sample-based estimates of resource totals or means. Increasingly, however, the same managers are now asking the additional question "Where?" and are expecting spatially explicit answers in the form of maps. Recent development of natural resource databases...
The experimental uncertainty of heterogeneous public K(i) data.
Kramer, Christian; Kalliokoski, Tuomo; Gedeck, Peter; Vulpetti, Anna
2012-06-14
The maximum achievable accuracy of in silico models depends on the quality of the experimental data. Consequently, experimental uncertainty defines a natural upper limit to the predictive performance possible. Models that yield errors smaller than the experimental uncertainty are necessarily overtrained. A reliable estimate of the experimental uncertainty is therefore of high importance to all originators and users of in silico models. The data deposited in ChEMBL was analyzed for reproducibility, i.e., the experimental uncertainty of independent measurements. Careful filtering of the data was required because ChEMBL contains unit-transcription errors, undifferentiated stereoisomers, and repeated citations of single measurements (90% of all pairs). The experimental uncertainty is estimated to yield a mean error of 0.44 pK(i) units, a standard deviation of 0.54 pK(i) units, and a median error of 0.34 pK(i) units. The maximum possible squared Pearson correlation coefficient (R(2)) on large data sets is estimated to be 0.81.
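The reproducibility analysis behind these numbers can be sketched with simulated pairs (illustrative code, not the authors' ChEMBL pipeline; the per-measurement standard deviation and pKi range are invented). The difference between two independent measurements of the same compound has standard deviation sigma*sqrt(2), so the per-measurement uncertainty is recovered by dividing the spread of the pairwise differences by sqrt(2):

```python
import random
import statistics

random.seed(4)
TRUE_SD = 0.4  # hypothetical per-measurement uncertainty, pKi units

# Simulated pairs of independent pKi measurements of the same compounds.
pairs = []
for _ in range(20000):
    true_pki = random.uniform(5.0, 9.0)
    pairs.append((true_pki + random.gauss(0.0, TRUE_SD),
                  true_pki + random.gauss(0.0, TRUE_SD)))

diffs = [a - b for a, b in pairs]
# sd(a - b) = sigma * sqrt(2) for independent measurements, so the
# per-measurement uncertainty follows by dividing out sqrt(2).
per_measurement_sd = statistics.stdev(diffs) / 2 ** 0.5
median_abs_diff = statistics.median(abs(d) for d in diffs)
print(round(per_measurement_sd, 3), round(median_abs_diff, 3))
```

This is the same logic by which independent replicate pairs in a database bound the per-assay uncertainty, and hence the maximum achievable model performance.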
Parameter Uncertainty in Exponential Family Tail Estimation
Landsman, Z.; Tsanakas, A.
2012-01-01
Actuaries are often faced with the task of estimating tails of loss distributions from just a few observations. Thus estimates of tail probabilities (reinsurance prices) and percentiles (solvency capital requirements) are typically subject to substantial parameter uncertainty. We study the bias and MSE of estimators of tail probabilities and percentiles, with focus on 1-parameter exponential families. Using asymptotic arguments it is shown that tail estimates are subject to significant positi...
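The small-sample bias of plug-in tail estimates is easy to reproduce by simulation (a toy example for the exponential distribution, the simplest 1-parameter exponential family; the sample size, threshold, and trial count are invented and this is not the paper's asymptotic analysis):

```python
import math
import random
import statistics

random.seed(6)
rate, n, t = 1.0, 10, 3.0
true_tail = math.exp(-rate * t)  # P(X > t) for an exponential loss

# Plug-in (MLE) tail estimate from small samples: lambda_hat = 1/mean.
estimates = []
for _ in range(20000):
    sample = [random.expovariate(rate) for _ in range(n)]
    estimates.append(math.exp(-t / statistics.mean(sample)))

bias = statistics.mean(estimates) - true_tail
print(round(true_tail, 4), round(statistics.mean(estimates), 4))
```

With only ten observations per sample, the average plug-in estimate noticeably exceeds the true tail probability, since the tail functional is convex in the estimated parameter over the relevant range.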
Uncertainty Analysis in the Noise Parameters Estimation
Directory of Open Access Journals (Sweden)
Pawlik P.
2012-07-01
The paper presents a new approach to uncertainty estimation in the modelling of acoustic hazards by means of interval arithmetic. In noise parameter estimation, the selection of the parameters specifying acoustic wave propagation in an open space, as well as of parameters required in the form of average values, often constitutes a difficult problem. In such cases, it is necessary to determine the variance and, strictly related to it, the uncertainty of the model parameters. The interval arithmetic formalism allows the input data uncertainties to be estimated without determining their probability distributions, which is required by other methods of uncertainty assessment. A further problem in acoustic hazard estimation is the lack of exact knowledge of the input parameters. In connection with the above, the modelling uncertainty was analyzed as a function of the inaccuracy of the model parameters. To achieve this aim, the interval arithmetic formalism, which represents a value and its uncertainty in the form of an interval, was applied. The proposed approach is illustrated by an example applying the Dutch RMR SRM method, recommended by European Union Directive 2002/49/EC, to railway noise modelling.
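Interval propagation of this kind can be sketched with a minimal interval class (illustrative code, not from the paper; the point-source level formula and all numeric bounds are invented for the example). Each input is carried as a [lower, upper] pair, and monotone functions map endpoints to endpoints:

```python
import math

class Interval:
    """Minimal interval arithmetic for propagating bounded uncertainty."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [a * b for a in (self.lo, self.hi) for b in (o.lo, o.hi)]
        return Interval(min(ps), max(ps))
    def apply(self, f):
        """Image under f; valid only for f monotone increasing on [lo, hi]."""
        return Interval(f(self.lo), f(self.hi))
    def __repr__(self):
        return f"[{self.lo:.3f}, {self.hi:.3f}]"

# Toy free-field point-source level: Lp = Lw - 20*log10(r) - 11, with
# uncertain source power Lw (dB) and distance r (m) given as intervals.
Lw = Interval(95.0, 97.0)
r = Interval(40.0, 50.0)
att = r.apply(lambda v: 20 * math.log10(v))  # increasing in r
Lp = Lw - att - Interval(11.0, 11.0)
print(Lp)
```

No probability distribution is assumed for Lw or r; the result is a guaranteed enclosure of the receiver level, which is exactly the appeal of the interval formalism the abstract describes.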
Estimating uncertainty of data limited stock assessments
DEFF Research Database (Denmark)
Kokkalis, Alexandros; Eikeset, Anne Maria; Thygesen, Uffe Høgsbro
2017-01-01
Many methods exist to assess the fishing status of data-limited stocks; however, little is known about the accuracy or the uncertainty of such assessments. Here we evaluate a new size-based data-limited stock assessment method by applying it to well-assessed, data-rich fish stocks treated as data-limited. Particular emphasis is put on providing uncertainty estimates of the data-limited assessment. We assess four cod stocks in the North-East Atlantic and compare our estimates of stock status (F/Fmsy) with the official assessments. The estimated stock status of all four cod stocks followed the established stock assessments remarkably well, and the official assessments fell well within the uncertainty bounds. The estimation of spawning stock biomass followed the same trends as the official assessment, but not the same levels. We conclude that the data-limited assessment method can be used for stock assessment......
Uncertainty Measures of Regional Flood Frequency Estimators
DEFF Research Database (Denmark)
Rosbjerg, Dan; Madsen, Henrik
1995-01-01
Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...
Estimation of measurement uncertainty arising from manual sampling of fuels.
Theodorou, Dimitrios; Liapis, Nikolaos; Zannikos, Fanourios
2013-02-15
Sampling is an important part of any measurement process and is therefore recognized as an important contributor to the measurement uncertainty. A reliable estimation of the uncertainty arising from sampling of fuels leads to a better control of risks associated with decisions concerning whether product specifications are met or not. The present work describes and compares the results of three empirical statistical methodologies (classical ANOVA, robust ANOVA and range statistics) using data from a balanced experimental design, which includes duplicate samples analyzed in duplicate from 104 sampling targets (petroleum retail stations). These methodologies are used for the estimation of the uncertainty arising from the manual sampling of fuel (automotive diesel) and the subsequent sulfur mass content determination. The results of the three methodologies differ statistically, with the expanded uncertainty of sampling being in the range of 0.34-0.40 mg kg(-1), while the relative expanded uncertainty lies in the range of 4.8-5.1%, depending on the methodology used. The robust ANOVA estimate (sampling expanded uncertainty of 0.34 mg kg(-1), or 4.8% in relative terms) is considered more reliable, because of the presence of outliers within the 104 datasets used for the calculations. Robust ANOVA, in contrast to classical ANOVA and range statistics, accommodates outlying values, lessening their effects on the produced estimates. The results of this work also show that, in the case of manual sampling of fuels, the main contributor to the whole measurement uncertainty is the analytical measurement uncertainty, with the sampling uncertainty accounting for only 29% of the total measurement uncertainty. Copyright © 2012 Elsevier B.V. All rights reserved.
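The balanced duplicate design can be sketched with a simulation (illustrative code, not the paper's data; the number of targets and the true sampling and analysis standard deviations are invented). Duplicate analyses of the same sample isolate the analytical variance, and duplicate samples from the same target then expose the sampling variance on top of it:

```python
import random
import statistics

random.seed(7)
S_SAMP, S_ANAL = 0.3, 0.4  # true sampling / analysis std devs (simulated)

d_anal, d_means = [], []
for _ in range(5000):                       # sampling targets
    mu = random.gauss(10.0, 1.0)            # true level at the target
    means = []
    for _ in range(2):                      # duplicate samples
        s = mu + random.gauss(0.0, S_SAMP)
        a1 = s + random.gauss(0.0, S_ANAL)  # duplicate analyses
        a2 = s + random.gauss(0.0, S_ANAL)
        d_anal.append((a1 - a2) ** 2)
        means.append((a1 + a2) / 2)
    d_means.append((means[0] - means[1]) ** 2)

# (a1 - a2) has variance 2*var_anal; the sample means differ with
# variance 2*var_samp + var_anal, so both components are recoverable.
var_anal = statistics.mean(d_anal) / 2
var_samp = statistics.mean(d_means) / 2 - var_anal / 2
print(round(var_anal ** 0.5, 3), round(var_samp ** 0.5, 3))
```

This moment-based separation is the core of the ANOVA and range-statistics methodologies compared in the paper; the robust variant additionally downweights outlying duplicates before forming the averages.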
Uncertainties in the estimation of Mmax
Indian Academy of Sciences (India)
40% using the three catalogues compiled based on different magnitude conversion relationships. The effect of the uncertainties has then been shown on the estimation of Mmax and the probabilities of occurrence of different magnitudes. It has been emphasized to consider the uncertainties and their quantification to carry ...
Uncertainty relations for approximation and estimation
Energy Technology Data Exchange (ETDEWEB)
Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)
2016-05-27
We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.
Parameter and Uncertainty Estimation in Groundwater Modelling
DEFF Research Database (Denmark)
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions, and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration...... was applied. Capture zone modelling was conducted on a synthetic stationary 3-dimensional flow problem involving river, surface and groundwater flow. Simulated capture zones were illustrated as likelihood maps and compared with deterministic capture zones derived from a reference model. The results showed......
Uncertainty estimations for quantitative in vivo MRI T1 mapping
Polders, Daniel L.; Leemans, Alexander; Luijten, Peter R.; Hoogduin, Hans
2012-11-01
Mapping the longitudinal relaxation time (T1) of brain tissue is of great interest for both clinical research and MRI sequence development. For an unambiguous interpretation of in vivo variations in T1 images, it is important to understand the degree of variability that is associated with the quantitative T1 parameter. This paper presents a general framework for estimating the uncertainty in quantitative T1 mapping by combining a slice-shifted multi-slice inversion recovery EPI technique with the statistical wild-bootstrap approach. Both simulations and experimental analyses were performed to validate this novel approach and to evaluate the estimated T1 uncertainty in several brain regions across four healthy volunteers. By estimating the T1 uncertainty, it is shown that the variation in T1 within anatomic regions for similar tissue types is larger than the uncertainty in the measurement. This indicates that heterogeneity of the inspected tissue and/or partial volume effects can be the main determinants for the observed variability in the estimated T1 values. The proposed approach to estimate T1 and its uncertainty without the need for repeated measurements may also prove to be useful for calculating effect sizes that are deemed significant when comparing group differences.
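The wild-bootstrap idea behind this framework can be sketched on a simpler fit (illustrative code with a linear model standing in for the nonlinear inversion-recovery signal model; all data and noise levels are invented). Fit once, then repeatedly refit synthetic data built from the fitted curve plus sign-flipped residuals, so no repeated measurements are needed:

```python
import random
import statistics

random.seed(3)

def linfit(xs, ys):
    """Ordinary least-squares line: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

xs = [i * 0.5 for i in range(20)]
ys = [2.0 + 0.7 * x + random.gauss(0.0, 0.3) for x in xs]

a, b = linfit(xs, ys)
resid = [y - (a + b * x) for x, y in zip(xs, ys)]

slopes = []
for _ in range(2000):  # wild bootstrap: flip residual signs and refit
    ystar = [a + b * x + r * random.choice((-1.0, 1.0))
             for x, r in zip(xs, resid)]
    slopes.append(linfit(xs, ystar)[1])
print(round(statistics.stdev(slopes), 4))  # bootstrap SE of the slope
```

The spread of the refitted parameters is the estimated uncertainty of the fit from a single acquisition, which is the property that lets the T1 study compare measurement uncertainty against anatomical variability without repeat scans.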
Uncertainty and validation. Effect of user interpretation on uncertainty estimates
Energy Technology Data Exchange (ETDEWEB)
Kirchner, G. [Univ. of Bremen (Germany); Peterson, R. [AECL, Chalk River, ON (Canada)] [and others
1996-11-01
Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test users' influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the
Uncertainty estimation in finite fault inversion
Dettmer, Jan; Cummins, Phil R.; Benavente, Roberto
2016-04-01
This work considers uncertainty estimation for kinematic rupture models in finite fault inversion by Bayesian sampling. Since the general problem of slip estimation on an unknown fault from incomplete and noisy data is highly non-linear and currently intractable, assumptions are typically made to simplify the problem. These almost always include linearization of the time dependence of rupture by considering multiple discrete time windows, and a tessellation of the fault surface into a set of 'subfaults' whose dimensions are fixed below what is subjectively thought to be resolvable by the data. Even non-linear parameterizations are based on a fixed discretization. This results in over-parameterized models which include more parameters than are resolvable by the data and require regularization criteria that stabilize the inversion. While it is increasingly common to consider slip uncertainties arising from observational error, the effects of the assumptions implicit in parameterization choices are rarely, if ever, considered. Here, we show that linearization and discretization assumptions can strongly affect both slip and uncertainty estimates and that therefore the selection of parameterizations should be included in the inference process. We apply Bayesian model selection to study the effect of parameterization choice on inversion results. The Bayesian sampling method which produces inversion results is based on a trans-dimensional rupture discretization which adapts the spatial and temporal parameterization complexity based on data information and does not require regularization. Slip magnitude, direction and rupture velocity are unknowns across the fault and causal first rupture times are obtained by solving the Eikonal equation for a spatially variable rupture-velocity field. The method provides automated local adaptation of rupture complexity based on data information and does not assume globally constant resolution. This is an important quality since seismic data do not
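The causal first-rupture-time computation mentioned above can be illustrated with a toy solver. The sketch below is not the paper's method: it replaces a true Eikonal solver (e.g. fast marching) with a Dijkstra-style shortest-path approximation on a 4-connected grid, and the velocity field and grid spacing are purely illustrative.

```python
import heapq
import numpy as np

def first_rupture_times(velocity, src, h=1.0):
    """Dijkstra-style approximation of first rupture times on a grid.

    `velocity` is a 2-D array of local rupture velocities and `src` the
    hypocentre cell; the travel time between neighbouring cells uses the
    mean of their slownesses. A coarse stand-in for an Eikonal solver,
    adequate for illustration only.
    """
    ny, nx = velocity.shape
    times = np.full((ny, nx), np.inf)
    times[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > times[i, j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                dt = h * 0.5 * (1.0 / velocity[i, j] + 1.0 / velocity[ni, nj])
                if t + dt < times[ni, nj]:
                    times[ni, nj] = t + dt
                    heapq.heappush(heap, (times[ni, nj], (ni, nj)))
    return times

# Uniform rupture velocity of 2 km/s on a 50 x 50 fault patch, 1 km spacing
vel = np.full((50, 50), 2.0)
times = first_rupture_times(vel, (25, 25))
```

A spatially variable `velocity` array gives curved first-rupture fronts, which is the behaviour the inversion exploits.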
Motion estimation under location uncertainty for turbulent fluid flows
Cai, Shengze; Mémin, Etienne; Dérian, Pierre; Xu, Chao
2018-01-01
In this paper, we propose a novel optical flow formulation for estimating two-dimensional velocity fields from an image sequence depicting the evolution of a passive scalar transported by a fluid flow. This motion estimator relies on a stochastic representation of the flow which naturally incorporates a notion of uncertainty into the flow measurement. In this context, the Eulerian fluid flow velocity field is decomposed into two components: a large-scale motion field and a small-scale uncertainty component. We define the small-scale component as a random field. Subsequently, the data term of the optical flow formulation is based on a stochastic transport equation, derived from the formalism under location uncertainty proposed in Mémin (Geophys Astrophys Fluid Dyn 108(2):119-146, 2014) and Resseguier et al. (Geophys Astrophys Fluid Dyn 111(3):149-176, 2017a). In addition, a specific regularization term built from the assumption of constant kinetic energy involves the very same diffusion tensor as the one appearing in the data transport term. In contrast to classical motion estimators, this enables us to devise an optical flow method dedicated to fluid flows in which the regularization parameter now has a clear physical interpretation and can be easily estimated. Experimental evaluations are presented on both synthetic and real-world image sequences. Results and comparisons indicate very good performance of the proposed formulation for turbulent flow motion estimation.
Sources of uncertainty in annual forest inventory estimates
Ronald E. McRoberts
2000-01-01
Although design and estimation aspects of annual forest inventories have begun to receive considerable attention within the forestry and natural resources communities, little attention has been devoted to identifying the sources of uncertainty inherent in these systems or to assessing the impact of those uncertainties on the total uncertainties of inventory estimates....
Parameter estimation uncertainty: Comparing apples and apples?
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests
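The null-space Monte Carlo idea can be sketched on a linearized toy problem; all matrices, dimensions and the calibrated point below are invented for illustration. The SVD of the model Jacobian splits parameter space into a solution space and a null space, and sampling only along null-space directions leaves the (linearized) fit to the data unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 observations constraining 12 parameters, so
# the inverse problem is under-determined (illustrative numbers only).
J = rng.standard_normal((5, 12))      # Jacobian at the calibrated point
p_cal = rng.standard_normal(12)       # single calibrated parameter set

# Split parameter space into solution space and null space via SVD.
U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10 * s[0]))
V_null = Vt[rank:].T                  # basis vectors spanning the null space

# Sample calibration-constrained parameter sets: perturb only along
# null-space directions, so the linearized data misfit is unchanged.
ensemble = [p_cal + V_null @ rng.standard_normal(V_null.shape[1])
            for _ in range(100)]

# Change in simulated observations for each member is ~0 by construction.
max_shift = max(np.linalg.norm(J @ (p - p_cal)) for p in ensemble)
```

In practice (e.g. with PEST-style workflows) the perturbed fields are re-calibrated because the model is non-linear; this sketch shows only the linear core of the method.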
A novel workflow for seismic net pay estimation with uncertainty
Glinsky, Michael E.; Baptiste, Dale; Unaldi, Muhlis; Nagassar, Vishal
2016-01-01
This paper presents a novel workflow for seismic net pay estimation with uncertainty. It is demonstrated on the Cassra/Iris Field. The theory for the stochastic wavelet derivation (which estimates the seismic noise level along with the wavelet, time-to-depth mapping, and their uncertainties), the stochastic sparse spike inversion, and the net pay estimation (using secant areas) along with its uncertainty; will be outlined. This includes benchmarking of this methodology on a synthetic model. A...
Estimating real-time predictive hydrological uncertainty
Verkade, J.S.
2015-01-01
Flood early warning systems provide a potentially highly effective flood risk reduction measure. The effectiveness of early warning, however, is affected by forecasting uncertainty: the impossibility of knowing, in advance, the exact future state of hydrological systems. Early warning systems
Transferring model uncertainty estimates from gauged to ungauged catchments
Bourgin, F.; Andréassian, V.; Perrin, C.; Oudin, L.
2014-07-01
Predicting streamflow hydrographs in ungauged catchments is a challenging issue, and accompanying the estimates with realistic uncertainty bounds is an even more complex task. In this paper, we present a method to transfer model uncertainty estimates from gauged to ungauged catchments and we test it over a set of 907 catchments located in France. We evaluate the quality of the uncertainty estimates based on three expected qualities: reliability, sharpness, and overall skill. Our results show that the method is promising, providing reliable and sharp uncertainty bounds at ungauged locations in most cases.
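Two of the evaluation qualities named above, reliability and sharpness, are simple to compute. The sketch below uses invented observations and bounds (not the study's data); the bounds are deliberately mis-specified so coverage falls short of a nominal 90% target.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented daily "observations" and uncertainty bounds built around an
# imperfect simulation of them.
obs = rng.gamma(2.0, 5.0, size=365)
sim = obs * rng.uniform(0.7, 1.4, size=365)   # imperfect model output
lower, upper = 0.8 * sim, 1.25 * sim          # too narrow on purpose

# Reliability: fraction of observations falling inside the bounds.
reliability = float(np.mean((obs >= lower) & (obs <= upper)))

# Sharpness: mean bound width; narrower is better, provided the
# bounds remain reliable.
sharpness = float(np.mean(upper - lower))
```

Overall skill scores (e.g. interval or CRPS-type scores) combine these two qualities into a single number.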
Risk, Unexpected Uncertainty, and Estimation Uncertainty: Bayesian Learning in Unstable Settings
Payzan-LeNestour, Elise; Bossaerts, Peter
2011-01-01
Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating. PMID:21283774
Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.
Proppe, Jonny; Reiher, Markus
2017-07-11
One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are generally infeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the 57Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s-1 and 0.04-0.05 mm s-1, respectively, the latter being close to the average experimental uncertainty of 0.02 mm s-1. Furthermore, we show that both model parameters and prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r2, or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical Mössbauer spectroscopy.
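The bootstrap idea behind this analysis can be sketched on a hypothetical linear calibration. The data, the model coefficients and the query point below are all invented; the paper's actual model links isomer shifts to computed contact densities.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented calibration set: reference "densities" x and measured
# "shifts" y following an unknown linear law with noise.
x = rng.uniform(0.0, 1.0, 44)
y = 0.3 - 1.5 * x + rng.normal(0.0, 0.05, 44)

def fit(xs, ys):
    return np.polyfit(xs, ys, 1)   # returns (slope, intercept)

# Nonparametric bootstrap: refit on resampled calibration sets to get
# a distribution of predictions at a query point x0.
x0, preds = 0.5, []
for _ in range(2000):
    idx = rng.integers(0, len(x), len(x))
    a, b = fit(x[idx], y[idx])
    preds.append(a * x0 + b)

pred_mean = float(np.mean(preds))
pred_unc = float(np.std(preds, ddof=1))   # bootstrap prediction uncertainty
```

Because resampling changes which reference compounds enter each fit, the spread of `preds` also reveals how sensitive the parameters are to the composition of the reference set, which is one of the paper's main points.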
Traceability and uncertainty estimation in coordinate metrology
DEFF Research Database (Denmark)
Hansen, Hans Nørgaard; Savio, Enrico; De Chiffre, Leonardo
2001-01-01
are required. Depending on the requirements for uncertainty level, different approaches may be adopted to achieve traceability. Especially in the case of complex measurement situations and workpieces the procedures are not trivial. This paper discusses the establishment of traceability in coordinate metrology...
Uncertainty in Forest Net Present Value Estimations
Directory of Open Access Journals (Sweden)
Ilona Pietilä
2010-09-01
Uncertainty related to inventory data, growth models and timber price fluctuation was investigated in the assessment of forest property net present value (NPV). The degree of uncertainty associated with inventory data was obtained from previous area-based airborne laser scanning (ALS) inventory studies. The study was performed, applying the Monte Carlo simulation, using stand-level growth and yield projection models and three alternative rates of interest (3, 4 and 5%). Timber price fluctuation was portrayed with geometric mean-reverting (GMR) price models. The analysis was conducted for four alternative forest properties having varying compartment structures: (A) a property having an even development class distribution, (B) sapling stands, (C) young thinning stands, and (D) mature stands. Simulations resulted in predicted yield value (predicted NPV) distributions at both stand and property levels. Our results showed that ALS inventory errors were the most prominent source of uncertainty, leading to a 5.1-7.5% relative deviation of property-level NPV when an interest rate of 3% was applied. Interestingly, ALS inventory led to significant biases at the property level, ranging from 8.9% to 14.1% (3% interest rate). ALS inventory-based bias was the most significant in mature stand properties. Errors related to the growth predictions led to a relative standard deviation in NPV, varying from 1.5% to 4.1%. Growth model-related uncertainty was most significant in sapling stand properties. Timber price fluctuation caused relative standard deviations ranging from 3.4% to 6.4% (3% interest rate). The combined relative variation caused by inventory errors, growth model errors and timber price fluctuation varied, depending on the property type and applied rates of interest, from 6.4% to 12.6%. By applying the methodology described here, one may take into account the effects of various uncertainty factors in the prediction of forest yield value and to supply the
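A minimal Monte Carlo sketch of how inventory error and price fluctuation combine in a discounted-value estimate; all figures (harvest volume, price, error magnitudes, timing) are invented, and the study's GMR price model is replaced here by a simple normal perturbation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo sketch of property NPV uncertainty with two error sources.
n_sim, rate, t_harvest = 10_000, 0.03, 20
volume = rng.normal(300.0, 300.0 * 0.07, n_sim)   # m3, 7% inventory error
price = rng.normal(55.0, 55.0 * 0.05, n_sim)      # EUR/m3, 5% fluctuation

# A single discounted harvest revenue stands in for the full
# stand-level cash-flow projection.
npv = volume * price / (1.0 + rate) ** t_harvest

npv_mean = float(np.mean(npv))
rel_sd = float(np.std(npv) / npv_mean)            # combined relative s.d.
```

For independent relative errors the combined relative spread is roughly the root-sum-square of the components (here about sqrt(0.07^2 + 0.05^2) = 8.6%), which the simulation reproduces.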
Uncertainty in ERP Effort Estimation: A Challenge or an Asset?
Daneva, Maia; Wettflower, Seanna; de Boer, Sonia; Dumke, R.; Braungarten, B.; Bueren, G.; Abran, A.; Cuadrado-Gallego, J.
2008-01-01
Traditionally, software measurement literature considers the uncertainty of cost drivers in project estimation as a challenge and treats it as such. This paper develops the position that uncertainty can be seen as an asset. It draws on results of a case study in which we replicated an approach to
de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Barbina, Maria; Fajgelj, Ales; Jacimovic, Radojko; Jeran, Zvonka; Menegon, Sandro; Pati, Alessandra; Petruzzelli, Giannantonio; Sansone, Umberto; Van der Perk, Marcel
2008-01-01
In the frame of the international SOILSAMP project, funded and coordinated by the National Environmental Protection Agency of Italy (APAT), uncertainties due to field soil sampling were assessed. Three different sampling devices were applied in an agricultural area using the same sampling protocol. Cr, Sc and Zn mass fractions in the collected soil samples were measured by k(0)-instrumental neutron activation analysis (k(0)-INAA). For each element-device combination the experimental variograms were calculated using geostatistical tools. The variogram parameters were used to estimate the standard uncertainty arising from sampling. The sampling component represents the dominant contribution of the measurement uncertainty, with a sampling uncertainty to measurement uncertainty ratio ranging between 0.6 and 0.9. The approach based on the use of variogram parameters leads to uncertainty values of the sampling component in agreement with those estimated by the replicate sampling approach.
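The classical (Matheron) semivariogram estimator underlying this kind of analysis can be sketched as follows, on an invented 1-D transect rather than the project's field data; a random walk stands in for a spatially correlated soil property.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented 1-D transect of a soil property at 1 m spacing.
pos = np.arange(500.0)
z = 50.0 + np.cumsum(rng.normal(0.0, 1.0, 500))   # spatially correlated

def semivariogram(x, v, lags, tol=0.5):
    """Classical (Matheron) estimator: gamma(h) is half the mean
    squared increment over all pairs separated by approximately h."""
    d = np.abs(x[:, None] - x[None, :])
    gam = []
    for h in lags:
        i, j = np.nonzero(np.triu(np.abs(d - h) <= tol, k=1))
        gam.append(0.5 * np.mean((v[i] - v[j]) ** 2))
    return np.array(gam)

lags = np.array([1.0, 2.0, 5.0, 10.0])
gamma = semivariogram(pos, z, lags)
```

Fitting a variogram model (nugget, sill, range) to `gamma` then provides the parameters from which a sampling standard uncertainty can be derived, as done in the study.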
Some methods of estimating uncertainty in accident reconstruction
Batista, Milan
2011-01-01
In the paper four methods for estimating uncertainty in accident reconstruction are discussed: total differential method, extreme values method, Gauss statistical method, and Monte Carlo simulation method. The methods are described and the program solutions are given.
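Two of the four methods can be compared on a standard accident-reconstruction quantity, the skid-to-stop speed v = sqrt(2*mu*g*d); the friction, skid length and their uncertainties below are illustrative values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Skid-to-stop speed v = sqrt(2*mu*g*d) with uncertain friction mu
# and skid length d.
g, mu, d = 9.81, 0.7, 25.0
s_mu, s_d = 0.05, 1.0

v = np.sqrt(2 * mu * g * d)

# 1) Total differential method: propagate via partial derivatives.
dv_dmu = v / (2 * mu)
dv_dd = v / (2 * d)
s_v_diff = float(np.hypot(dv_dmu * s_mu, dv_dd * s_d))

# 2) Monte Carlo simulation method: sample inputs, take the spread.
mu_s = rng.normal(mu, s_mu, 100_000)
d_s = rng.normal(d, s_d, 100_000)
s_v_mc = float(np.std(np.sqrt(2 * mu_s * g * d_s)))
```

For this mildly non-linear relation the two methods agree closely; the extreme values method would instead evaluate v at the corner combinations of mu and d.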
Costs of sea dikes - regressions and uncertainty estimates
National Research Council Canada - National Science Library
Lenk, Stephan; Rybski, Diego; Heidrich, Oliver; Dawson, Richard J; Kropp, Jürgen P
2017-01-01
... – probabilistic functions of dike costs. Data from Canada and the Netherlands are analysed and related to published studies from the US, UK, and Vietnam in order to provide a reproducible estimate of typical sea dike costs and their uncertainty...
ON THE ESTIMATION OF SYSTEMATIC UNCERTAINTIES OF STAR FORMATION HISTORIES
Energy Technology Data Exchange (ETDEWEB)
Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ 85734 (United States)
2012-05-20
In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects of this are most notable in the case of shallow photometry, with which SFH measurements rely on evolved stars.
Triangular and Trapezoidal Fuzzy State Estimation with Uncertainty on Measurements
Directory of Open Access Journals (Sweden)
Mohammad Sadeghi Sarcheshmah
2012-01-01
In this paper, a new method for uncertainty analysis in fuzzy state estimation is proposed. The uncertainty is expressed in measurements. Uncertainties in measurements are modelled with different fuzzy membership functions (triangular and trapezoidal). To find the fuzzy distribution of any state variable, the problem is formulated as a constrained linear programming (LP) optimization. The viability of the proposed method is verified by comparison with the results obtained from weighted least squares (WLS) and fuzzy state estimation (FSE) in a 6-bus system and in the IEEE 14- and 30-bus systems.
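The α-cut mechanics behind such fuzzy formulations can be sketched for triangular membership functions. Note this toy adds intervals directly, which is valid only for a linear relation; the paper instead solves a constrained LP at each α level, and all numbers below are invented.

```python
import numpy as np

def tri_alpha_cut(a, m, b, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, m, b)."""
    return a + alpha * (m - a), b - alpha * (b - m)

# Hypothetical fuzzy measurements on two feeders into one bus: the
# fuzzy total is built interval-by-interval over alpha levels.
alphas = np.linspace(0.0, 1.0, 11)
cuts = []
for alpha in alphas:
    lo1, hi1 = tri_alpha_cut(0.9, 1.0, 1.1, alpha)
    lo2, hi2 = tri_alpha_cut(1.8, 2.0, 2.3, alpha)
    cuts.append((lo1 + lo2, hi1 + hi2))

core_lo, core_hi = cuts[-1]   # alpha = 1: the crisp core
```

Stacking the intervals over all α levels reconstructs the membership function of the result, which is how a fuzzy distribution of a state variable is reported.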
The duplicate method of uncertainty estimation: are eight targets enough?
Lyn, Jennifer A; Ramsey, Michael H; Coad, D Stephen; Damant, Andrew P; Wood, Roger; Boon, Katy A
2007-11-01
This paper presents methods for calculating confidence intervals for estimates of sampling uncertainty (s(samp)) and analytical uncertainty (s(anal)) using the chi-squared distribution. These uncertainty estimates are derived from application of the duplicate method, which recommends a minimum of eight duplicate samples. The methods are applied to two case studies: moisture in butter and nitrate in lettuce. Use of the recommended minimum of eight duplicate samples is justified for both case studies, as the confidence intervals calculated using greater than eight duplicates did not show any appreciable reduction in width. It is considered that eight duplicates provide estimates of uncertainty that are both acceptably accurate and cost effective.
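The chi-squared confidence interval for an uncertainty estimate is straightforward to reproduce. The helper below is a generic sketch with illustrative degrees of freedom; the exact degrees-of-freedom bookkeeping for s(samp) and s(anal) in the duplicate method is more involved than shown here.

```python
import numpy as np
from scipy.stats import chi2

def sd_confidence_interval(s, dof, conf=0.95):
    """Confidence interval for a standard deviation estimated with
    `dof` degrees of freedom, from the chi-squared distribution."""
    a = 1.0 - conf
    lo = s * np.sqrt(dof / chi2.ppf(1.0 - a / 2.0, dof))
    hi = s * np.sqrt(dof / chi2.ppf(a / 2.0, dof))
    return lo, hi

# Illustrative comparison: an uncertainty of 1.0 estimated with 8
# versus 30 degrees of freedom; note how wide the 8-dof interval is.
lo8, hi8 = sd_confidence_interval(1.0, 8)
lo30, hi30 = sd_confidence_interval(1.0, 30)
```

The widths illustrate the paper's trade-off: more duplicates narrow the interval, but beyond about eight the gain may not justify the extra cost.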
Transferring global uncertainty estimates from gauged to ungauged catchments
Bourgin, F.; Andréassian, V.; Perrin, C.; Oudin, L.
2015-05-01
Predicting streamflow hydrographs in ungauged catchments is challenging, and accompanying the estimates with realistic uncertainty bounds is an even more complex task. In this paper, we present a method to transfer global uncertainty estimates from gauged to ungauged catchments and we test it over a set of 907 catchments located in France, using two rainfall-runoff models. We evaluate the quality of the uncertainty estimates based on three expected qualities: reliability, sharpness, and overall skill. The robustness of the method to the availability of information on gauged catchments was also evaluated using a hydrometrical desert approach. Our results show that the method is promising, providing reliable and sharp uncertainty bounds at ungauged locations in a majority of cases.
DEFF Research Database (Denmark)
Odgaard, Peter Fogh; Stoustrup, Jakob; Mataji, B.
2007-01-01
Predicting the performance of large scale plants can be difficult due to model uncertainties etc., meaning that one can be almost certain that the prediction will diverge from the plant performance with time. In this paper output multiplicative uncertainty models are used as dynamical models of the prediction error. These proposed dynamical uncertainty models result in an upper and lower bound on the predicted performance of the plant. The dynamical uncertainty models are used to estimate the uncertainty of the predicted performance of a coal-fired power plant. The proposed scheme, which uses dynamical uncertainty models, is applied to two different sets of measured plant data. The computed uncertainty bounds cover the measured plant output, while the nominal prediction is outside these uncertainty bounds for some samples in these examples.
Uncertainty Analysis of the Estimated Risk in Formal Safety Assessment
Directory of Open Access Journals (Sweden)
Molin Sun
2018-01-01
An uncertainty analysis is required to be carried out in formal safety assessment (FSA) by the International Maritime Organization. The purpose of this article is to introduce the uncertainty analysis technique into the FSA process. Based on the uncertainty identification of input parameters, probability and possibility distributions are used to model the aleatory and epistemic uncertainties, respectively. An approach which combines the Monte Carlo random sampling of probability distribution functions with α-cuts for fuzzy calculus is proposed to propagate the uncertainties. One output of the FSA process is societal risk (SR), which can be evaluated in the two-dimensional frequency–fatality (FN) diagram. Thus, the confidence-level-based SR is presented to represent the uncertainty of SR in two dimensions. In addition, a method for time window selection is proposed to estimate the magnitude of uncertainties, which is an important aspect of modeling uncertainties. Finally, a case study is carried out on an FSA of cruise ships. The results show that the uncertainty analysis of SR generates a two-dimensional area for a certain degree of confidence in the FN diagram rather than a single FN curve, which provides more information to authorities to produce effective risk control measures.
Uncertainty estimation for map-based analyses
Ronald E. McRoberts; Mark A. Hatfield; Susan J. Crocker
2010-01-01
Traditionally, natural resource managers have asked the question, "How much?" and have received sample-based estimates of resource totals or means. Increasingly, however, the same managers are now asking the additional question, "Where?" and are expecting spatially explicit answers in the form of maps. Recent development of natural resource databases, access to...
The effects of communicating uncertainty in quantitative health risk estimates.
Longman, Thea; Turner, Robin M; King, Madeleine; McCaffery, Kirsten J
2012-11-01
To examine the effects of communicating uncertainty in quantitative health risk estimates on participants' understanding, risk perception and perceived credibility of the risk information source. 120 first-year psychology students were given a hypothetical health-care scenario, with source of risk information (clinician, pharmaceutical company) varied between subjects and uncertainty (point, small range and large range risk estimate format) varied within subjects. Communicating uncertainty in the form of either a small or a large range reduced accurate understanding, and perceptions of risk increased when a large range was communicated compared with a point estimate. It also reduced perceptions of credibility of the information source, though for the clinician this was only the case when a large range was presented. The findings suggest that even for highly educated adults, communicating uncertainty as a range risk estimate has the potential to negatively affect understanding, increase risk perceptions and decrease perceived credibility. Communicating uncertainty in risk using a numeric range should be carefully considered by health-care providers. More research is needed to develop alternative strategies to effectively communicate the uncertainty in health risks to consumers.
Uncertainty analysis for estimates of the first indirect aerosol effect
Directory of Open Access Journals (Sweden)
Y. Chen
2005-01-01
The IPCC has stressed the importance of producing unbiased estimates of the uncertainty in indirect aerosol forcing, in order to give policy makers as well as research managers an understanding of the most important aspects of climate change that require refinement. In this study, we use 3-D meteorological fields together with a radiative transfer model to examine the spatially-resolved uncertainty in estimates of the first indirect aerosol forcing. The global mean forcing calculated in the reference case is -1.30 Wm-2. Uncertainties in the indirect forcing associated with aerosol and aerosol precursor emissions, aerosol mass concentrations from different chemical transport models, aerosol size distributions, the cloud droplet parameterization, the representation of the in-cloud updraft velocity, the relationship between effective radius and volume mean radius, cloud liquid water content, cloud fraction, and the change in the cloud drop single scattering albedo due to the presence of black carbon are calculated. The aerosol burden calculated by chemical transport models and the cloud fraction are found to be the most important sources of uncertainty. Variations in these parameters cause an underestimation or overestimation of the indirect forcing compared to the base case by more than 0.6 Wm-2. Uncertainties associated with aerosol and aerosol precursor emissions, uncertainties in the representation of the aerosol size distribution (including the representation of the pre-industrial size distribution), and uncertainties in the representation of cloud droplet spectral dispersion effect cause uncertainties in the global mean forcing of 0.2-0.6 Wm-2. There are significant regional differences in the uncertainty associated with the first indirect forcing, with the largest uncertainties in industrial regions (North America, Europe, East Asia) followed by those in the major biomass burning regions.
Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint
Energy Technology Data Exchange (ETDEWEB)
Habte, A.; Sengupta, M.; Reda, I.; Andreas, A.; Konings, J.
2014-11-01
Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty in Measurement (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.
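A minimal GUM-style combination of Type A and Type B uncertainty components; the component names and magnitudes below are invented for illustration and are not taken from the report's radiometer budgets.

```python
import math

# GUM-style combination for a hypothetical radiometer measurement:
# Type A from repeated readings, Type B components from data-sheet
# limits treated as rectangular distributions (divide by sqrt(3)).
u_type_a = 0.8                      # W/m2, standard deviation of the mean
u_zero_offset = 2.0 / math.sqrt(3)  # W/m2, from +/-2 W/m2 limits
u_datalogger = 0.5 / math.sqrt(3)   # W/m2, from +/-0.5 W/m2 limits

# Combined standard uncertainty: root-sum-square of the components
# (valid for uncorrelated components with unit sensitivity).
u_combined = math.sqrt(u_type_a**2 + u_zero_offset**2 + u_datalogger**2)

# Expanded uncertainty with coverage factor k = 2 (~95% coverage).
U_expanded = 2.0 * u_combined
```

Reporting both the combined and expanded values, with the coverage factor stated, is what makes the quoted uncertainty "well documented" in the GUM sense.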
Directory of Open Access Journals (Sweden)
Jae Phil Park
2016-06-01
The typical experimental procedure for testing stress corrosion cracking initiation involves an interval-censored reliability test. Based on these test results, the parameters of a Weibull distribution, which is a widely accepted crack initiation model, can be estimated using maximum likelihood estimation or median rank regression. However, it is difficult to determine the appropriate number of test specimens and censoring intervals required to obtain sufficiently accurate Weibull estimators. In this study, we compare maximum likelihood estimation and median rank regression using a Monte Carlo simulation to examine the effects of the total number of specimens, test duration, censoring interval, and shape parameters of the true Weibull distribution on the estimator uncertainty. Finally, we provide the quantitative uncertainties of both Weibull estimators, compare them with the true Weibull parameters, and suggest proper experimental conditions for developing a probabilistic crack initiation model through crack initiation tests.
Park, Jae Phil; Bahn, Chi Bum
2016-06-27
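The two estimators compared in this study are easy to contrast on simulated (uncensored) data. The sketch below uses SciPy's Weibull fit for maximum likelihood and Bernard's approximation for median rank regression; the true parameters and sample size are invented, and interval censoring is omitted for simplicity.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(6)

# Simulated crack-initiation times from a known Weibull law.
shape_true, scale_true = 2.0, 100.0
t = weibull_min.rvs(shape_true, scale=scale_true, size=200, random_state=rng)

# 1) Maximum likelihood estimation (location fixed at zero).
shape_mle, _, scale_mle = weibull_min.fit(t, floc=0.0)

# 2) Median rank regression with Bernard's approximation:
# F_i = (i - 0.3) / (n + 0.4), then a straight-line fit of
# ln(-ln(1 - F)) against ln(t) gives slope = shape.
t_sorted = np.sort(t)
n = len(t_sorted)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
yy = np.log(-np.log(1.0 - F))
xx = np.log(t_sorted)
slope, intercept = np.polyfit(xx, yy, 1)
shape_mrr = float(slope)
scale_mrr = float(np.exp(-intercept / slope))
```

Repeating this over many simulated data sets, as the study does, yields the sampling distribution of each estimator and hence its quantitative uncertainty.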
Uncertainty Model For Quantitative Precipitation Estimation Using Weather Radars
Directory of Open Access Journals (Sweden)
Ernesto Gómez Vargas
2016-06-01
Full Text Available This paper introduces an uncertainty model for quantitative precipitation estimation using weather radars. The model considers various key aspects associated with radar calibration, attenuation, and the tradeoff between accuracy and radar coverage. An S-band radar case study is presented to illustrate particular fractional-uncertainty calculations obtained to adjust various typical radar-calibration elements such as the antenna, transmitter, receiver, and other general elements included in the radar equation. This paper is based on the “Guide to the Expression of Uncertainty in Measurement”, and the results show that the fractional uncertainty calculated by the model was 40% for the reflectivity and 30% for the precipitation using the Marshall-Palmer Z-R relationship.
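The GUM-style bookkeeping behind such fractional uncertainties can be sketched as a root-sum-square combination of independent components followed by first-order propagation through the Marshall-Palmer relationship Z = aR^b. The component budget below is purely illustrative, not the paper's actual values.

```python
# Sketch of GUM-style fractional-uncertainty combination for a radar rain-rate
# estimate. The component values are illustrative assumptions, not the paper's
# budget. Marshall-Palmer: Z = a * R**b with a = 200, b = 1.6.
import math

def combined_fractional(components):
    """Root-sum-square combination of independent fractional uncertainties."""
    return math.sqrt(sum(u * u for u in components))

def rain_rate_fraction(u_z_frac, b=1.6):
    """First-order propagation through Z = a R^b: u_R/R = (1/b) * u_Z/Z."""
    return u_z_frac / b

# Assumed component budget: antenna, transmitter, receiver, attenuation.
u_z = combined_fractional([0.20, 0.15, 0.15, 0.25])
u_r = rain_rate_fraction(u_z)
print(round(u_z, 3), round(u_r, 3))
```

Because rain rate enters Z with exponent b = 1.6, the fractional uncertainty in R is smaller than that in Z, which mirrors the 40% reflectivity versus 30% precipitation figures reported above.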
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig
2015-04-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum-mean-squared-error estimator (LMMSE), when the elements of x are statistically white.
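The effect the paper exploits, that regularization lowers MSE at low SNR, can be illustrated with a fixed ridge parameter; the actual BDU-ILS iteration for choosing the regularization parameter is not reproduced here, and all problem sizes and noise levels below are assumptions.

```python
# Hedged sketch: regularized least squares can beat plain LS in MSE at low SNR.
# A fixed ridge parameter stands in for the paper's iterative BDU-based choice.
import random

def solve2(m, v):
    """Solve a 2x2 linear system m @ x = v by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(v[0] * m[1][1] - v[1] * m[0][1]) / det,
            (m[0][0] * v[1] - m[1][0] * v[0]) / det]

def ls_estimate(A, y, gamma=0.0):
    """Solve (A^T A + gamma I) x = A^T y for a 2-parameter model."""
    At_A = [[sum(a[i] * a[j] for a in A) + (gamma if i == j else 0.0)
             for j in range(2)] for i in range(2)]
    At_y = [sum(a[i] * yi for a, yi in zip(A, y)) for i in range(2)]
    return solve2(At_A, At_y)

random.seed(7)
x_true, sigma = [1.0, -1.0], 2.0  # low SNR: large observation noise
mse_ls = mse_reg = 0.0
for _ in range(500):
    A = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(8)]
    y = [a[0] * x_true[0] + a[1] * x_true[1] + random.gauss(0, sigma) for a in A]
    xh_ls = ls_estimate(A, y, 0.0)    # plain (unbiased) LS
    xh_rg = ls_estimate(A, y, 2.0)    # ridge-regularized LS
    mse_ls += sum((xi - ti) ** 2 for xi, ti in zip(xh_ls, x_true))
    mse_reg += sum((xi - ti) ** 2 for xi, ti in zip(xh_rg, x_true))
print(mse_reg < mse_ls)  # shrinkage trades a little bias for much less variance
```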
DEFF Research Database (Denmark)
Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan
Process safety studies and assessments rely on accurate property data. Flammability data like the lower and upper flammability limit (LFL and UFL) play an important role in quantifying the risk of fire and explosion. If experimental values are not available for the safety analysis due to cost...... or time constraints, property prediction models like group contribution (GC) models can estimate flammability data. The estimation needs to be accurate, reliable and as less time consuming as possible. However, GC property prediction methods frequently lack rigorous uncertainty analysis. Hence...... to the parameter estimation an uncertainty analysis of the estimated data and a comparison to other methods is performed. A thorough uncertainty analysis provides information about the prediction error, which is important for the use of the data in process safety studies and assessments. The method considers...
A bootstrap method for estimating uncertainty of water quality trends
Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura
2015-01-01
Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R package.
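The flavour of such a bootstrap trend test can be sketched with a generic residual bootstrap on a linear trend; WBT itself is far richer and lives in the EGRETci R package, so everything below (trend model, noise level, bootstrap scheme) is an illustrative stand-in.

```python
# Hedged sketch: residual-bootstrap uncertainty for a water-quality trend
# slope. The synthetic record (monthly sampling, linear decline) is invented.
import random

def slope(xs, ys):
    """Ordinary least-squares slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

random.seed(3)
years = [2000 + i / 12 for i in range(240)]   # 20 years, monthly sampling
conc = [5.0 - 0.05 * (t - 2000) + random.gauss(0, 0.5) for t in years]

b = slope(years, conc)
mx = sum(years) / len(years)
my = sum(conc) / len(conc)
resid = [y - (my + b * (x - mx)) for x, y in zip(years, conc)]

boot = []
for _ in range(500):  # resample residuals, refit the trend each time
    ys = [my + b * (x - mx) + random.choice(resid) for x in years]
    boot.append(slope(years, ys))
boot.sort()
lo, hi = boot[12], boot[487]  # ~95% bootstrap interval for the trend slope
print(round(lo, 3), round(hi, 3))
```

If the interval excludes zero, the downward trend is distinguishable from sampling noise, which is the kind of statement WBT formalizes for the far more flexible WRTDS surfaces.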
Estimation of a multivariate mean under model selection uncertainty
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2014-05-01
Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same data set, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
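The James-Stein result the proposal builds on is easy to verify by simulation: for three or more independent normal means, shrinking the raw observations lowers total squared error on average. The true means and noise level below are illustrative.

```python
# Sketch of the James-Stein shrinkage estimator referenced in the abstract.
# Positive-part variant; settings (true means, unit variance) are illustrative.
import random

def james_stein(z, sigma2=1.0):
    """Positive-part James-Stein: shrink the observation vector toward zero."""
    p = len(z)
    s = sum(v * v for v in z)
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / s)
    return [shrink * v for v in z]

random.seed(11)
theta = [0.5, -0.3, 0.8, 0.1, -0.6]  # true means, p = 5 >= 3
se_raw = se_js = 0.0
for _ in range(2000):
    z = [random.gauss(t, 1.0) for t in theta]
    js = james_stein(z)
    se_raw += sum((a - t) ** 2 for a, t in zip(z, theta))
    se_js += sum((a - t) ** 2 for a, t in zip(js, theta))
print(se_js < se_raw)  # shrinkage dominates the raw estimator on average risk
```

Model averaging plays an analogous role: instead of committing to the single selected model (the analogue of the raw observations), it shrinks across candidate models and can lower post-selection risk.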
Considerations for interpreting probabilistic estimates of uncertainty of forest carbon
James E. Smith; Linda S. Heath
2000-01-01
Quantitative estimates of carbon inventories are needed as part of nationwide attempts to reduce net release of greenhouse gases and the associated climate forcing. Naturally, an appreciable amount of uncertainty is inherent in such large-scale assessments, especially since both science and policy issues are still evolving. Decision makers need an idea of the...
Uncertainty of Areal Rainfall Estimation Using Point Measurements
McCarthy, D.; Dotto, C. B. S.; Sun, S.; Bertrand-Krajewski, J. L.; Deletic, A.
2014-12-01
The spatial variability of precipitation has a great influence on the quantity and quality of runoff water generated from hydrological processes. In practice, point rainfall measurements (e.g., rain gauges) are often used to represent areal rainfall in catchments. The spatial rainfall variability is difficult to capture precisely even with many rain gauges. Thus the rainfall uncertainty due to spatial variability should be taken into account in order to provide reliable rainfall-driven process modelling results. This study investigates the uncertainty of areal rainfall estimation due to rainfall spatial variability when point measurements are applied. The areal rainfall is usually estimated as a weighted sum of data from available point measurements. The expected error of areal rainfall estimates is zero if the estimation is unbiased. The variance of the error between the real and estimated areal rainfall is evaluated to indicate the uncertainty of areal rainfall estimates. This error variance can be expressed as a function of variograms, a tool originally applied in geostatistics to characterize a spatial variable. The variogram can be evaluated using measurements from a dense rain gauge network. The areal rainfall errors are evaluated in two areas with distinct climate regimes and rainfall patterns: the Greater Lyon area in France and the Melbourne area in Australia. The variograms of the two areas are derived based on 6-minute rainfall time series data from 2010 to 2013 and are then used to estimate uncertainties of areal rainfall represented by different numbers of point measurements in synthetic catchments of various sizes. The error variance of areal rainfall using one point measurement in the centre of a 1-km² catchment is 0.22 (mm/h)² in Lyon. When the point measurement is placed at one corner of the same-size catchment, the error variance increases to 0.82 (mm/h)², again in Lyon. Results for Melbourne were similar but presented larger uncertainty.
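The variogram-based error variance described above can be approximated by Monte Carlo integration. The exponential variogram parameters below are illustrative, not the fitted Lyon or Melbourne models, but the centre-versus-corner ordering reported in the abstract is reproduced.

```python
# Hedged sketch: error variance of representing the areal mean of a square
# catchment by one gauge at x0, via Monte Carlo integration of
#   Var = (2/|A|) int_A gamma(x0,u) du - (1/|A|^2) int_A int_A gamma(u,v) du dv.
# Variogram parameters are illustrative assumptions.
import math, random

def gamma(h, sill=1.0, rng=5.0):
    """Exponential variogram (sill in (mm/h)^2, range in km)."""
    return sill * (1.0 - math.exp(-h / rng))

def error_variance(x0, side=1.0, n=20000):
    """Monte Carlo estimate over a side x side km square."""
    random.seed(5)  # same integration points for every x0, for fair comparison
    pts = [(random.uniform(0, side), random.uniform(0, side)) for _ in range(n)]
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    term1 = 2.0 * sum(gamma(d(x0, u)) for u in pts) / n
    # independent point pairs approximate the double integral
    term2 = sum(gamma(d(pts[i], pts[i - 1])) for i in range(n)) / n
    return term1 - term2

centre = error_variance((0.5, 0.5))
corner = error_variance((0.0, 0.0))
print(centre < corner)  # a corner gauge represents the square worse
```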
Gaussian process interpolation for uncertainty estimation in image registration.
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods.
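The core quantity here, interpolation uncertainty from a Gaussian-process posterior, can be sketched in one dimension; the squared-exponential kernel, base grid, and noise level below are illustrative assumptions, not the registration pipeline's settings.

```python
# Minimal 1-D Gaussian-process interpolation sketch: the posterior variance
# grows with distance from the base grid, which is the interpolation
# uncertainty the method feeds into registration. Settings are illustrative.
import math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def k(a, b, ell=1.0):
    """Squared-exponential covariance kernel."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

grid = [0.0, 1.0, 2.0, 3.0]  # base grid where intensities are observed
noise = 1e-6
K = [[k(a, b) + (noise if i == j else 0.0)
      for j, b in enumerate(grid)] for i, a in enumerate(grid)]

def posterior_var(x):
    """GP posterior variance k(x,x) - k_x^T K^{-1} k_x at location x."""
    kx = [k(x, g) for g in grid]
    alpha = solve(K, kx)
    return k(x, x) - sum(a * b for a, b in zip(alpha, kx))

on_grid = posterior_var(1.0)
between = posterior_var(1.5)
print(on_grid < between)  # uncertainty is larger between grid points
```

This location dependence is exactly why a similarity measure that marginalizes over resampled intensities differs from one computed on a single interpolated image.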
Experimental Demonstration of a Cheap and Accurate Phase Estimation
Rudinger, Kenneth; Kimmel, Shelby; Lobser, Daniel; Maunz, Peter
2017-05-01
We demonstrate an experimental implementation of robust phase estimation (RPE) to learn the phase of a single-qubit rotation on a trapped Yb+ ion qubit. We show this phase can be estimated with an uncertainty below 4 × 10⁻⁴ rad using as few as 176 total experimental samples, and our estimates exhibit Heisenberg scaling. Unlike standard phase estimation protocols, RPE neither assumes perfect state preparation and measurement, nor requires access to ancillae. We cross-validate the results of RPE with the more resource-intensive protocol of gate set tomography.
REDD+ emissions estimation and reporting: dealing with uncertainty
Pelletier, Johanne; Martin, Davy; Potvin, Catherine
2013-09-01
The United Nations Framework Convention on Climate Change (UNFCCC) defined the technical and financial modalities of policy approaches and incentives to reduce emissions from deforestation and forest degradation in developing countries (REDD+). Substantial technical challenges hinder precise and accurate estimation of forest-related emissions and removals, as well as the setting and assessment of reference levels. These challenges could limit country participation in REDD+, especially if REDD+ emission reductions were to meet quality standards required to serve as compliance-grade offsets for developed countries’ emissions. Using Panama as a case study, we tested the matrix approach proposed by Bucki et al (2012 Environ. Res. Lett. 7 024005) to perform sensitivity and uncertainty analysis, distinguishing between ‘modelling sources’ of uncertainty, which refer to model-specific parameters and assumptions, and ‘recurring sources’ of uncertainty, which refer to random and systematic errors in emission factors and activity data. The sensitivity analysis estimated differences in the resulting fluxes ranging from 4.2% to 262.2% of the reference emission level. The classification of fallows and the carbon stock increment or carbon accumulation of intact forest lands were the two key parameters showing the largest sensitivity. The highest error propagated using Monte Carlo simulations was caused by modelling sources of uncertainty, which calls for special attention to ensure consistency in REDD+ reporting, which is essential for securing environmental integrity. Due to the role of these modelling sources of uncertainty, the adoption of strict rules for estimation and reporting would favour comparability of emission reductions between countries. We believe that a reduction of the bias in emission factors will arise, among other things, from a globally concerted effort to improve allometric equations for tropical forests. Public access to datasets and methodology
A novel workflow for seismic net pay estimation with uncertainty
Glinsky, Michael E; Unaldi, Muhlis; Nagassar, Vishal
2016-01-01
This paper presents a novel workflow for seismic net pay estimation with uncertainty. It is demonstrated on the Cassra/Iris Field. The theory for the stochastic wavelet derivation (which estimates the seismic noise level along with the wavelet, time-to-depth mapping, and their uncertainties), the stochastic sparse spike inversion, and the net pay estimation (using secant areas) along with its uncertainty will be outlined. This includes benchmarking of this methodology on a synthetic model. A critical part of this process is the calibration of the secant areas. This is done in a two-step process. First, a preliminary calibration is done with the stochastic reflection response modeling using rock physics relationships derived from the well logs. Second, a refinement is made to the calibration to account for the encountered net pay at the wells. Finally, a variogram structure is estimated from the extracted secant area map, then used to build in the lateral correlation to the ensemble of net pay maps while matc...
Estimation uncertainty of direct monetary flood damage to buildings
Directory of Open Access Journals (Sweden)
B. Merz
2004-01-01
Full Text Available Traditional flood design methods are increasingly supplemented or replaced by risk-oriented methods which are based on comprehensive risk analyses. Besides meteorological, hydrological and hydraulic investigations such analyses require the estimation of flood impacts. Flood impact assessments mainly focus on direct economic losses using damage functions which relate property damage to damage-causing factors. Although the flood damage of a building is influenced by many factors, usually only inundation depth and building use are considered as damage-causing factors. In this paper a data set of approximately 4000 damage records is analysed. Each record represents the direct monetary damage to an inundated building. The data set covers nine flood events in Germany from 1978 to 1994. It is shown that the damage data follow a Lognormal distribution with a large variability, even when stratified according to the building use and to water depth categories. Absolute depth-damage functions which relate the total damage to the water depth are not very helpful in explaining the variability of the damage data, because damage is determined by various parameters besides the water depth. Because of this limitation it has to be expected that flood damage assessments are associated with large uncertainties. It is shown that the uncertainty of damage estimates depends on the number of flooded buildings and on the distribution of building use within the flooded area. The results are exemplified by a damage assessment for a rural area in southwest Germany, for which damage estimates and uncertainty bounds are quantified for a 100-year flood event. The estimates are compared to reported flood damages of a severe flood in 1993. Given the enormous uncertainty of flood damage estimates the refinement of flood damage data collection and modelling are major issues for further empirical and methodological improvements.
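The dependence of damage-estimate uncertainty on the number of flooded buildings follows from summing Lognormal variables with large per-building variability; the parameters below are illustrative, not fitted to the German damage records.

```python
# Hedged sketch: the coefficient of variation of total flood damage shrinks as
# the number of flooded buildings grows, when per-building damage is Lognormal.
# The log-scale parameters are illustrative, not fitted to the 4000-record data.
import math, random

random.seed(9)
mu, sigma = 9.0, 1.2  # log-scale parameters giving a large per-building spread

def total_damage_cv(n_buildings, reps=2000):
    """Coefficient of variation of the summed damage across reps simulations."""
    totals = []
    for _ in range(reps):
        totals.append(sum(random.lognormvariate(mu, sigma)
                          for _ in range(n_buildings)))
    m = sum(totals) / reps
    v = sum((t - m) ** 2 for t in totals) / (reps - 1)
    return math.sqrt(v) / m

cv_small = total_damage_cv(10)
cv_large = total_damage_cv(200)
print(cv_small > cv_large)  # more buildings, relatively tighter total estimate
```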
Handling uncertainty in quantitative estimates in integrated resource planning
Energy Technology Data Exchange (ETDEWEB)
Tonn, B.E. [Oak Ridge National Lab., TN (United States); Wagner, C.G. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Mathematics
1995-01-01
This report addresses uncertainty in Integrated Resource Planning (IRP). IRP is a planning and decisionmaking process employed by utilities, usually at the behest of Public Utility Commissions (PUCs), to develop plans to ensure that utilities have resources necessary to meet consumer demand at reasonable cost. IRP has been used to assist utilities in developing plans that include not only traditional electricity supply options but also demand-side management (DSM) options. Uncertainty is a major issue for IRP. Future values for numerous important variables (e.g., future fuel prices, future electricity demand, stringency of future environmental regulations) cannot ever be known with certainty. Many economically significant decisions are so unique that statistically-based probabilities cannot even be calculated. The entire utility strategic planning process, including IRP, encompasses different types of decisions that are made with different time horizons and at different points in time. Because of fundamental pressures for change in the industry, including competition in generation, gone is the time when utilities could easily predict increases in demand, enjoy long lead times to bring on new capacity, and bank on steady profits. The purpose of this report is to address in detail one aspect of uncertainty in IRP: Dealing with Uncertainty in Quantitative Estimates, such as the future demand for electricity or the cost to produce a mega-watt (MW) of power. A theme which runs throughout the report is that every effort must be made to honestly represent what is known about a variable that can be used to estimate its value, what cannot be known, and what is not known due to operational constraints. Applying this philosophy to the representation of uncertainty in quantitative estimates, it is argued that imprecise probabilities are superior to classical probabilities for IRP.
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
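The two likelihood routes compared above can be sketched for a toy data set: the exact multivariate Gaussian, whose covariance (a random part plus a fully correlated systematic part) is inverted analytically here via the Sherman-Morrison identity, versus averaging the independent-points likelihood over sampled systematic shifts. All data values and uncertainties are invented for illustration.

```python
# Hedged sketch: exact multivariate-Gaussian likelihood (cov = sr^2 I + ss^2 J)
# versus a Monte Carlo average over sampled systematic shifts. Toy data only.
import math, random

e = [1.2, 0.8, 1.1]   # "experimental" points
t = 1.0               # model prediction
sr, ss = 0.3, 0.2     # random and systematic standard uncertainties

def exact_likelihood():
    """Closed form via Sherman-Morrison for cov = sr^2 I + ss^2 11^T."""
    n = len(e)
    d = [x - t for x in e]
    coef = ss ** 2 / (sr ** 2 * (sr ** 2 + n * ss ** 2))
    quad = sum(x * x for x in d) / sr ** 2 - coef * sum(d) ** 2
    det = (sr ** 2) ** (n - 1) * (sr ** 2 + n * ss ** 2)
    return math.exp(-0.5 * quad) / math.sqrt((2 * math.pi) ** n * det)

def sampled_likelihood(m):
    """Average the independent-points likelihood over m systematic shifts."""
    acc = 0.0
    for _ in range(m):
        s = random.gauss(0.0, ss)
        acc += math.exp(sum(-0.5 * ((x - t - s) / sr) ** 2 for x in e)) \
               / (sr * math.sqrt(2 * math.pi)) ** len(e)
    return acc / m

random.seed(2)
exact = exact_likelihood()
approx = sampled_likelihood(200000)
print(abs(approx - exact) / exact < 0.05)
```

The large sample count needed even for three points hints at the slow convergence reported in the abstract; matrix inversion gives the same number in one step whenever the covariance can be inverted.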
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
Energy Technology Data Exchange (ETDEWEB)
Helgesson, P., E-mail: petter.helgesson@physics.uu.se [Department of Physics and Astronomy, Uppsala University, Box 516, 751 20 Uppsala (Sweden); Nuclear Research and Consultancy Group NRG, Petten (Netherlands); Sjöstrand, H. [Department of Physics and Astronomy, Uppsala University, Box 516, 751 20 Uppsala (Sweden); Koning, A.J. [Nuclear Research and Consultancy Group NRG, Petten (Netherlands); Department of Physics and Astronomy, Uppsala University, Box 516, 751 20 Uppsala (Sweden); Rydén, J. [Department of Mathematics, Uppsala University, Uppsala (Sweden); Rochman, D. [Paul Scherrer Institute PSI, Villigen (Switzerland); Alhassan, E.; Pomp, S. [Department of Physics and Astronomy, Uppsala University, Box 516, 751 20 Uppsala (Sweden)
2016-01-21
Estimation of flow accumulation uncertainty by Monte Carlo stochastic simulations
Directory of Open Access Journals (Sweden)
Višnjevac Nenad
2013-01-01
Full Text Available Very often, outputs provided by GIS functions and analyses are assumed to be exact results. However, they are influenced by a certain uncertainty which may affect the decisions based on those results. Calculating that uncertainty with classical mathematical models is very complex and almost impossible because of the complexity of the algorithms used in GIS analyses. In this paper we discuss an alternative method, i.e. the use of stochastic Monte Carlo simulations to estimate the uncertainty of flow accumulation. The case study area included the broader area of the Municipality of Čačak, where Monte Carlo stochastic simulations were applied in order to create one hundred possible outputs of flow accumulation. A statistical analysis was performed on the basis of these versions, and the "most likely" version of flow accumulation, together with its confidence bounds (standard deviation), was created. Further, this paper describes the most important phases in the process of estimating uncertainty, such as variogram modelling and the choice of an appropriate number of simulations. Finally, it makes suggestions on how to effectively use and discuss the results and their practical significance.
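The Monte Carlo recipe itself (perturb the input raster, recompute the GIS output, summarise the spread) can be sketched on a 1-D elevation profile with a toy flow-accumulation rule; the DEM values and the iid Gaussian error model are illustrative assumptions, far simpler than a real 2-D analysis.

```python
# Hedged sketch: propagate DEM uncertainty into flow accumulation by Monte
# Carlo, reduced to a 1-D profile where each cell drains to its lower
# neighbour. DEM values and the iid Gaussian error model are illustrative.
import random

def flow_accumulation(elev):
    """Cells drained through each cell when flow follows the steepest drop."""
    n = len(elev)
    acc = [1] * n
    # process cells from highest to lowest so upstream counts are final
    for i in sorted(range(n), key=lambda j: -elev[j]):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n and elev[j] < elev[i]]
        if nbrs:
            j = min(nbrs, key=lambda j: elev[j])  # steepest descent
            acc[j] += acc[i]
    return acc

random.seed(8)
dem = [50.0, 48.0, 47.0, 45.0, 46.0, 44.0, 41.0, 42.0]  # toy elevation profile
runs = [flow_accumulation([z + random.gauss(0, 1.0) for z in dem])
        for _ in range(200)]  # 200 equally plausible DEM realisations
vals = [r[6] for r in runs]   # accumulation at the profile's low point
print(min(vals) != max(vals))  # DEM noise makes the output uncertain
```

Summary statistics over `runs` (per-cell mean and standard deviation) play the role of the "most likely" output and its confidence bounds in the paper.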
Eigenspace perturbations for structural uncertainty estimation of turbulence closure models
Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca
2017-11-01
With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable resolution approaches would be RANS models in two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolution. Thence, using benchmark flows, along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).
Uncertainty in geocenter estimates in the context of ITRF2014
Riddell, Anna R.; King, Matt A.; Watson, Christopher S.; Sun, Yu; Riva, Riccardo E. M.; Rietbroek, Roelof
2017-05-01
Uncertainty in the geocenter position and its subsequent motion affects positioning estimates on the surface of the Earth and downstream products such as site velocities, particularly the vertical component. The current version of the International Terrestrial Reference Frame, ITRF2014, derives its origin as the long-term averaged center of mass as sensed by satellite laser ranging (SLR), and by definition, it adopts only linear motion of the origin with uncertainty determined using a white noise process. We compare weekly SLR translations relative to the ITRF2014 origin, with network translations estimated from station displacements from surface mass transport models. We find that the proportion of variance explained in SLR translations by the model-derived translations is on average less than 10%. Time-correlated noise and nonlinear rates, particularly evident in the Y and Z components of the SLR translations with respect to the ITRF2014 origin, are not fully replicated by the model-derived translations. This suggests that translation-related uncertainties are underestimated when a white noise model is adopted and that substantial systematic errors remain in the data defining the ITRF origin. When using a white noise model, we find uncertainties in the rate of SLR X, Y, and Z translations of ±0.03, ±0.03, and ±0.06, respectively, increasing to ±0.13, ±0.17, and ±0.33 (mm/yr, 1 sigma) when a power law and white noise model is adopted.
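The effect of adopting a white-noise model for time-correlated residuals can be illustrated by simulation: the empirical scatter of fitted rates under AR(1) noise exceeds the white-noise analytic standard error even when the marginal variance is matched. All settings below are illustrative, not the SLR translation series.

```python
# Hedged sketch: time-correlated (AR(1)) noise inflates the true uncertainty
# of a fitted rate relative to the white-noise formula. Settings illustrative.
import math, random

def fit_rate(ts, ys):
    """Ordinary least-squares rate (slope)."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    return sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
           sum((t - mt) ** 2 for t in ts)

random.seed(4)
n, phi, sig = 200, 0.9, 1.0            # strong temporal correlation
ts = [i / 52.0 for i in range(n)]      # ~4 years of weekly epochs
rates = []
for _ in range(400):
    x, ys = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0, sig)  # AR(1) noise realisation
        ys.append(x)
    rates.append(fit_rate(ts, ys))
emp = math.sqrt(sum(r * r for r in rates) / len(rates))  # empirical rate scatter

marginal_sd = sig / math.sqrt(1 - phi ** 2)  # match the series' marginal variance
mt = sum(ts) / n
white = marginal_sd / math.sqrt(sum((t - mt) ** 2 for t in ts))
print(emp > white)  # the white-noise formula understates the rate uncertainty
```

The same mechanism explains why the quoted translation-rate uncertainties grow severalfold when a power-law plus white-noise model replaces the pure white-noise assumption.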
A Study on Crack Initiation Test Condition by Uncertainty Evaluation of Weibull Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Park, Jae Phil; Bahn, Chi Bum [Pusan National University, Busan (Korea, Republic of)
2016-05-15
The goal of this work is to suggest proper experimental conditions for experimenters who want to develop a probabilistic SCC initiation model by cracking tests. Stress corrosion cracking (SCC) is one of the main materials-related issues in operating nuclear reactors. From the test results, experimenters can estimate the parameters of a Weibull distribution by Maximum Likelihood Estimation (MLE) or Median Rank Regression (MRR), the two widely used estimation methods considered here. However, in order to obtain sufficient accuracy of the estimated Weibull model, it is hard for experimenters to determine the proper number of test specimens and censoring intervals. In this work, a comparison of MLE and MRR is performed by Monte Carlo simulation to quantify the effects of the total number of specimens, test duration, censoring interval, and shape parameter of the assumed true Weibull distribution. The uncertainties of the MRR and MLE estimators were quantified under a variety of experimental conditions. The following conclusions could be informative for experimenters: 1) Over the whole range of the simulation study, estimated scale parameters were more reliable than estimated shape parameters, especially at high β_true. 2) The shape parameter is likely to be overestimated when the number of specimens is less than 25. For scale parameter estimation, the MLE estimators show smaller bias than the MRR estimators.
Estimating abundance in the presence of species uncertainty
Chambert, Thierry A.; Hossack, Blake R.; Fishback, LeeAnn; Davenport, Jon M.
2016-01-01
1. N-mixture models have become a popular method for estimating abundance of free-ranging animals that are not marked or identified individually. These models have been used on count data for single species that can be identified with certainty. However, co-occurring species often look similar during one or more life stages, making it difficult to assign species for all recorded captures. This uncertainty creates problems for estimating species-specific abundance and it can often limit the life stages to which we can make inference. 2. We present a new extension of N-mixture models that accounts for species uncertainty. In addition to estimating site-specific abundances and detection probabilities, this model allows estimating the probability of correct assignment of species identity. We implement this hierarchical model in a Bayesian framework and provide all code for running the model in BUGS-language programs. 3. We present an application of the model on count data from two sympatric freshwater fishes, the brook stickleback (Culaea inconstans) and the ninespine stickleback (Pungitius pungitius), and illustrate implementation of covariate effects (habitat characteristics). In addition, we used a simulation study to validate the model and illustrate potential sample size issues. We also compared, for both real and simulated data, estimates provided by our model to those obtained by a simple N-mixture model when captures of unknown species identification were discarded. In the latter case, abundance estimates appeared highly biased and very imprecise, while our new model provided unbiased estimates with higher precision. 4. This extension of the N-mixture model should be useful for a wide variety of studies and taxa, as species uncertainty is a common issue. It should notably help improve investigation of abundance and vital rate characteristics of organisms’ early life stages, which are sometimes more difficult to identify than adults.
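The basic N-mixture likelihood that this species-uncertainty extension builds on sums the latent abundance N out of a Binomial-Poisson mixture. The sketch below evaluates it for one site by brute force; the species-assignment layer and the Bayesian machinery are not reproduced, and the counts and crude grid search are illustrative.

```python
# Hedged sketch of the basic single-species N-mixture likelihood: repeated
# counts y at a site are Binomial(N, p) with N ~ Poisson(lam), N summed out.
import math

def nmix_loglik(counts, lam, p, n_max=100):
    """Log-likelihood of one site's repeated counts under an N-mixture model."""
    total = 0.0
    for N in range(max(counts), n_max + 1):
        pois = math.exp(-lam) * lam ** N / math.factorial(N)
        binom = 1.0
        for y in counts:
            binom *= math.comb(N, y) * p ** y * (1 - p) ** (N - y)
        total += pois * binom
    return math.log(total)

counts = [12, 15, 11]  # three survey visits at one site (illustrative data)
# crude grid search for the maximum-likelihood (lam, p):
best = max(((nmix_loglik(counts, lam, p), lam, p)
            for lam in range(5, 60, 2)
            for p in [0.1 * k for k in range(1, 10)]))
print(best[1], round(best[2], 1))
```

The product of the fitted abundance and detection parameters tracks the mean count, which is the weak-identifiability ridge that makes extensions such as the species-uncertainty layer delicate to fit.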
Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods
Directory of Open Access Journals (Sweden)
aboalhasan fathabadi
2017-02-01
Full Text Available Introduction: In order to implement watershed practices to decrease soil erosion effects it is necessary to estimate the sediment output of the watershed. The sediment rating curve is the most conventional tool used to estimate sediment. Owing to sampling errors and short records, there are uncertainties in estimating sediment using sediment rating curves. In this research, bootstrap and Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads by using sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km². In this study the uncertainty in suspended sediment rating curves was estimated at four stations, including Motorkhane, Miyane Tonel Shomare 7, Stor and Glinak, constructed on the Ayghdamosh, Ghrangho, GhezelOzan and Shahrod rivers, respectively. Data were randomly divided into a training data set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest correlation coefficient (R²). In the GLUE methodology, different parameter sets were sampled randomly from the prior probability distribution. For each station, using the sampled parameter sets and the selected suspended sediment rating curve equation, suspended sediment concentration values were estimated several times (100000 to 400000 times). With respect to the likelihood function and a certain subjective threshold, parameter sets were divided into behavioural and non-behavioural parameter sets. Finally, using the behavioural parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. In the bootstrap methodology, observed suspended sediment and discharge vectors were resampled with replacement B (set to
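A minimal GLUE-style loop for a sediment rating curve C = aQ^b can be sketched as: sample parameter sets from a prior range, rank them by a likelihood measure, keep a behavioural subset by a subjective threshold, and read prediction bounds off that ensemble. All numbers below are illustrative, not the Sefidrood stations' data.

```python
# Hedged GLUE-style sketch for a sediment rating curve C = a * Q**b.
# Synthetic data, prior ranges, and the 5% behavioural cut are illustrative.
import random

random.seed(6)
a_true, b_true = 0.5, 1.5
Q = [1.0 + 0.5 * i for i in range(30)]                        # discharges
C = [a_true * q ** b_true * random.lognormvariate(0, 0.2) for q in Q]

def neg_sse(a, b):
    """Informal likelihood measure: negative sum of squared errors."""
    return -sum((c - a * q ** b) ** 2 for q, c in zip(Q, C))

samples = [(random.uniform(0.1, 1.0), random.uniform(1.0, 2.0))
           for _ in range(5000)]                              # prior sampling
scored = sorted(((neg_sse(a, b), a, b) for a, b in samples), reverse=True)
behavioural = scored[:250]  # top 5% kept behavioural (subjective threshold)

q0 = 10.0  # discharge at which to quote the uncertainty
preds = sorted(a * q0 ** b for _, a, b in behavioural)
lo, hi = preds[6], preds[243]  # ~95% GLUE bounds for concentration at Q = 10
print(round(lo, 2), round(hi, 2))
```

The width of these bounds is the parameter-uncertainty band the study reports for each station, with the choice of likelihood measure and threshold acknowledged as subjective.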
Hullman, Jessica; Kay, Matthew; Kim, Yea-Seul; Shrestha, Samana
2017-08-29
People often have erroneous intuitions about the results of uncertain processes, such as scientific experiments. Many uncertainty visualizations assume considerable statistical knowledge, but have been shown to prompt erroneous conclusions even when users possess this knowledge. Active learning approaches have been shown to improve statistical reasoning, but are rarely applied to visualizing uncertainty in scientific reports. We present a controlled study to evaluate the impact of an interactive, graphical uncertainty prediction technique for communicating uncertainty in experiment results. Using our technique, users sketch their prediction of the uncertainty in experimental effects prior to viewing the true sampling distribution from an experiment. We find that having a user graphically predict the possible effects from experiment replications is an effective way to improve one's ability to make predictions about replications of new experiments. Additionally, visualizing uncertainty as a set of discrete outcomes, as opposed to a continuous probability distribution, can improve recall of a sampling distribution from a single experiment. Our work has implications for various applications where it is important to elicit people's estimates of probability distributions and to communicate uncertainty effectively.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation, by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
Measurement Uncertainty Estimation of a Robust Photometer Circuit
Directory of Open Access Journals (Sweden)
Jesús de Vicente
2009-04-01
Full Text Available In this paper the uncertainty of a robust photometer circuit (RPC) was estimated. The RPC was considered as a measurement system, having input quantities that were inexactly known and output quantities that consequently were also inexactly known. Input quantities represent information obtained from calibration certificates, manufacturers' specifications, and tabulated data. Output quantities describe the transfer function of the electrical part of the photodiode. The input quantities were the electronic components of the RPC, the parameters of the photodiode model, and its sensitivity at 670 nm. The output quantities were the coefficients of both the numerator and denominator of the closed-loop transfer function of the RPC. As an example, the gain and phase shift of the RPC versus frequency were evaluated from the transfer function, together with their uncertainties and correlation coefficient. The results confirm the robustness of the photodiode design.
Communicating the uncertainty in estimated greenhouse gas emissions from agriculture.
Milne, Alice E; Glendining, Margaret J; Lark, R Murray; Perryman, Sarah A M; Gordon, Taylor; Whitmore, Andrew P
2015-09-01
In an effort to mitigate anthropogenic effects on the global climate system, industrialised countries are required to quantify and report, for various economic sectors, the annual emissions of greenhouse gases from their several sources and the absorption of the same in different sinks. These estimates are uncertain, and this uncertainty must be communicated effectively if government bodies, research scientists or members of the public are to draw sound conclusions. Our interest is in communicating the uncertainty in estimates of greenhouse gas emissions from agriculture to those who might directly use the results from the inventory. We tested six methods of communication. These were: a verbal scale using the IPCC calibrated phrases such as 'likely' and 'very unlikely'; probabilities that emissions are within a defined range of values; confidence intervals for the expected value; histograms; box plots; and shaded arrays that depict the probability density of the uncertain quantity. In a formal trial we used these methods to communicate uncertainty about four specific inferences about greenhouse gas emissions in the UK. Sixty-four individuals who use results from the greenhouse gas inventory professionally participated in the trial, and we tested by means of a questionnaire how effectively the uncertainty about these inferences was communicated. Our results showed differences in the efficacy of the methods of communication, and interactions with the nature of the target audience. We found that, although the verbal scale was thought to be a good method of communication, it did not convey enough information and was open to misinterpretation. Shaded arrays were similarly criticised for being open to misinterpretation, but proved to give the best impression of uncertainty when participants were asked to interpret results from the greenhouse gas inventory. Box plots were most favoured by our participants, largely because they were particularly favoured by those who worked
Linear minimax estimation for random vectors with parametric uncertainty
Bitar, E
2010-06-01
In this paper, we take a minimax approach to the problem of computing a worst-case linear mean squared error (MSE) estimate of X given Y, where X and Y are jointly distributed random vectors with parametric uncertainty in their distribution. We consider two uncertainty models, PA and PB. Model PA represents X and Y as jointly Gaussian, with a covariance matrix Λ that belongs to the convex hull of a set of m known covariance matrices. Model PB characterizes X and Y as jointly distributed according to a Gaussian mixture model with m known zero-mean components, but unknown component weights. We show: (a) the linear minimax estimator computed under model PA is identical to that computed under model PB when the vertices of the uncertain covariance set in PA are the same as the component covariances in model PB, and (b) the problem of computing the linear minimax estimator under either model reduces to a semidefinite program (SDP). We also consider the dynamic situation where x(t) and y(t) evolve according to a discrete-time LTI state space model driven by white noise, the statistics of which are modeled by PA and PB as before. We derive a recursive linear minimax filter for x(t) given y(t).
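As a toy illustration of the minimax structure described above (not the paper's SDP formulation), consider the scalar case: because the MSE of a fixed linear gain is affine in the covariance, the worst case over the convex hull in PA is attained at a vertex, so a brute-force grid over gains suffices for a sketch. The two covariance "vertices" below are invented.

```python
def mse(k, cov):
    """MSE of the linear estimate x_hat = k*y under joint covariance
    cov = [[var_x, cov_xy], [cov_xy, var_y]] (zero means assumed)."""
    (vx, vxy), (_, vy) = cov
    return vx - 2.0 * k * vxy + k * k * vy

def minimax_gain(covariances, k_grid):
    """Brute-force surrogate for the SDP: for each candidate gain k, the
    worst-case MSE over the convex hull of the covariance set is attained
    at a vertex (MSE is affine in the covariance), so we maximize over
    vertices and minimize over the grid of gains."""
    best = min(k_grid, key=lambda k: max(mse(k, c) for c in covariances))
    return best, max(mse(best, c) for c in covariances)

covs = [((1.0, 0.8), (0.8, 1.0)), ((1.0, 0.2), (0.2, 2.0))]  # two vertices
k_grid = [i / 1000.0 for i in range(-2000, 2001)]
k_star, worst = minimax_gain(covs, k_grid)
```

Here the second covariance dominates the worst case, so the minimax gain sits at that vertex's own MSE minimum.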
Sensitivity of Process Design due to Uncertainties in Property Estimates
DEFF Research Database (Denmark)
Hukkerikar, Amol; Jones, Mark Nicholas; Sarup, Bent
2012-01-01
The objective of this paper is to present a systematic methodology for performing analysis of the sensitivity of process design due to uncertainties in property estimates. The methodology provides the following results: a) a list of properties with critical importance on design; b) acceptable levels of accuracy for different thermo-physical property prediction models; and c) design variables versus properties relationships. The application of the methodology is illustrated through a case study of an extractive distillation process and sensitivity analysis of designs of various unit operations found in chemical processes. Among others, vapour pressure accuracy for azeotropic mixtures is critical and needs to be measured or estimated with a ±0.25% accuracy to satisfy acceptable safety levels in design.
Iso-uncertainty control in an experimental fluoroscopy system.
Siddique, S; Fiume, E; Jaffray, D A
2014-12-01
X-ray fluoroscopy remains an important imaging modality in a number of image-guided procedures due to its real-time nature and excellent spatial detail. However, the radiation dose delivered raises concerns about its use particularly in lengthy treatment procedures (>0.5 h). The authors have previously presented an algorithm that employs feedback of geometric uncertainty to control dose while maintaining a desired targeting uncertainty during fluoroscopic tracking of fiducials. The method was tested using simulations of motion against controlled noise fields. In this paper, the authors embody the previously reported method in a physical prototype and present changes to the controller required to function in a practical setting. The metric for feedback used in this study is based on the trace of the covariance of the state of the system, tr(C). The state is defined here as the 2D location of a fiducial on a plane parallel to the detector. A relationship between this metric and the tube current is first developed empirically. This relationship is extended to create a manifold that incorporates a latent variable representing the estimated background attenuation. The manifold is then used within the controller to dynamically adjust the tube current and maintain a specified targeting uncertainty. To evaluate the performance of the proposed method, an acrylic sphere (1.6 mm in diameter) was tracked at tube currents ranging from 0.5 to 0.9 mA (0.033 s) at a fixed energy of 80 kVp. The images were acquired on a Varian Paxscan 4030A (2048 × 1536 pixels, ∼ 100 cm source-to-axis distance, ∼ 160 cm source-to-detector distance). The sphere was tracked using a particle filter under two background conditions: (1) uniform sheets of acrylic and (2) an acrylic wedge. The measured tr(C) was used in conjunction with a learned manifold to modulate the tube current in order to maintain a specified uncertainty as the sphere traversed regions of varying thickness corresponding to the
Estimation of Model Uncertainties in Closed-loop Systems
DEFF Research Database (Denmark)
Niemann, Hans Henrik; Poulsen, Niels Kjølstad
2008-01-01
This paper describes a method for estimation of parameters or uncertainties in closed-loop systems. The method is based on an application of the dual YJBK (after Youla, Jabr, Bongiorno and Kucera) parameterization of all systems stabilized by a given controller. The dual YJBK transfer function is a measure of the variation in the system seen through the feedback controller. It is shown that it is possible to isolate a certain number of parameters or uncertain blocks in the system exactly. This is obtained by modifying the feedback controller through the YJBK transfer function together with pre...
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
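The model-averaging step described above can be sketched generically: given per-model predictive means, variances, and posterior model probabilities, the combined prediction mixes the means, and the total variance adds the between-model spread of the means to the within-model variances. The numbers below are illustrative, not from the report.

```python
def bma_mean_var(predictions, variances, model_probs):
    """Bayesian model averaging: combine per-model predictive means and
    variances using posterior model probabilities.  Total variance is
    within-model variance plus between-model spread of the means."""
    mean = sum(p * m for p, m in zip(model_probs, predictions))
    var = sum(p * (v + (m - mean) ** 2)
              for p, m, v in zip(model_probs, predictions, variances))
    return mean, var

# three alternative models (values are invented for illustration)
means = [2.0, 2.5, 3.0]
vars_ = [0.1, 0.2, 0.1]
probs = [0.5, 0.3, 0.2]
mean, var = bma_mean_var(means, vars_, probs)
```

Note that the averaged variance exceeds the probability-weighted within-model variance whenever the models disagree, which is exactly the conceptual-model contribution the report emphasizes.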
ON THE ESTIMATION OF RANDOM UNCERTAINTIES OF STAR FORMATION HISTORIES
Energy Technology Data Exchange (ETDEWEB)
Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ, 85734 (United States)
2013-09-20
The standard technique for measurement of random uncertainties of star formation histories (SFHs) is the bootstrap Monte Carlo, in which the color-magnitude diagram (CMD) is repeatedly resampled. The variation in SFHs measured from the resampled CMDs is assumed to represent the random uncertainty in the SFH measured from the original data. However, this technique systematically and significantly underestimates the uncertainties for times in which the measured star formation rate is low or zero, leading to overly (and incorrectly) high confidence in that measurement. This study proposes an alternative technique, the Markov Chain Monte Carlo (MCMC), which samples the probability distribution of the parameters used in the original solution to directly estimate confidence intervals. While the most commonly used MCMC algorithms are incapable of adequately sampling a probability distribution that can involve thousands of highly correlated dimensions, the Hybrid Monte Carlo algorithm is shown to be extremely effective and efficient for this particular task. Several implementation details, such as the handling of implicit priors created by parameterization of the SFH, are discussed in detail.
Uncertainty Model for Total Solar Irradiance Estimation on Australian Rooftops
Al-Saadi, Hassan; Zivanovic, Rastko; Al-Sarawi, Said
2017-11-01
Installations of solar panels on Australian rooftops have been on the rise for the last few years, especially in urban areas. This motivates academic researchers, distribution network operators and engineers to accurately address the level of uncertainty resulting from grid-connected solar panels. The main source of uncertainty is the intermittent nature of radiation; therefore, this paper presents a new model to estimate the total radiation incident on a tilted solar panel. The model is driven by the clearness index, which is described by a probability distribution, with special attention paid to Australia through the use of a best-fit correlation for the diffuse fraction. The validity of the model is assessed with four goodness-of-fit techniques. In addition, the Quasi Monte Carlo and sparse grid methods are used as sampling and uncertainty computation tools, respectively. High-resolution solar irradiance data for the city of Adelaide were used for this assessment, with the outcome indicating satisfactory agreement between the actual data variation and the model.
Experimental Realization of Popper's Experiment: Violation of Uncertainty Principle?
Kim, Yoon-Ho; Yu, Rong; Shih, Yanhua
An entangled photon pair, photons 1 and 2, is emitted in opposite directions along the positive and negative x-axis. A narrow slit is placed in the path of photon 1, which provides precise knowledge about its position along the y-axis, and because of the quantum entanglement this in turn provides precise knowledge of the position y of its twin, photon 2. Does photon 2 experience a greater uncertainty in its momentum, i.e., a greater Δpy, due to the precise knowledge of its position y? This is the historical thought experiment of Sir Karl Popper, which aimed to undermine the Copenhagen interpretation in favor of a realistic viewpoint of quantum mechanics. This paper reports an experimental realization of Popper's experiment. One may not agree with Popper's position on quantum mechanics; however, it calls for a correct understanding and interpretation of the experimental results.
Intolerance of Uncertainty: A Temporary Experimental Induction Procedure
Mosca, Oriana; Lauriola, Marco; Carleton, R. Nicholas
2016-01-01
Background and Objectives Intolerance of uncertainty (IU) is a trans-diagnostic construct involved in anxiety and related disorders. Previous research has focused on cross-sectional reporting, on manipulating attitudes toward objective and impersonal events, or on treatments designed to reduce IU in clinical populations. The current paper presents an experimental procedure for laboratory manipulations of IU and tests mediation hypotheses following the Intolerance of Uncertainty Model. Methods On pre-test, undergraduate volunteers (Study 1: n = 43, 68% women; Study 2: n = 169, 83.8% women) were asked to provide an idiosyncratic future negative life event. State-IU, Worry, and Positive and Negative Affect were assessed after a standardized procedure was used to identify the event's potential negative consequences. The same variables were assessed on post-test, after participants were asked to read through increasing or decreasing IU statements. Results Temporary changes in IU were consistently reproduced in both studies. Participants receiving increasing IU instructions reported greater state-IU, Worry and Negative Affect than those receiving decreasing IU instructions. However, this latter condition was not different from a control condition (Study 2). Both studies revealed significant indirect effects of IU induction instructions on Worry and Negative Affect through state-IU. Limitations Both studies used samples of undergraduate psychology students, younger than the average population and predominantly female. The experimental manipulation and outcome measures belong to the same semantic domain, uncertainty, potentially limiting generalizability. Conclusions Results supported the feasibility and efficacy of the proposed IU manipulation for non-clinical samples. Findings parallel clinical research showing that state-IU preceded Worry and Negative Affect states. PMID:27254099
Energy Technology Data Exchange (ETDEWEB)
Donald M. McEligot; Hugh M. McIlroy, Jr.; Ryan C. Johnson
2007-11-01
The purpose of the fluid dynamics experiments in the MIR (Matched-Index-of-Refraction) flow system at Idaho National Laboratory (INL) is to develop benchmark databases for the assessment of Computational Fluid Dynamics (CFD) solutions of the momentum equations, scalar mixing, and turbulence models for typical Very High Temperature Reactor (VHTR) plenum geometries in the limiting case of negligible buoyancy and constant fluid properties. The experiments use optical techniques, primarily particle image velocimetry (PIV), in the INL MIR flow system. The benefit of the MIR technique is that it permits optical measurements of flow characteristics in passages and around objects without locating a disturbing transducer in the flow field and without distortion of the optical paths. The objective of the present report is to develop an understanding of the magnitudes of experimental uncertainties in the results to be obtained in such experiments. Unheated MIR experiments are a first step when the geometry is complicated; one does not want to use a computational technique that will not even handle constant properties properly. This report addresses the general background, requirements for benchmark databases, estimation of experimental uncertainties in mean velocities and turbulence quantities, the MIR experiment, PIV uncertainties, positioning uncertainties, and other contributing measurement uncertainties.
Using interpolation to estimate system uncertainty in gene expression experiments.
Directory of Open Access Journals (Sweden)
Lee J Falin
Full Text Available The widespread use of high-throughput experimental assays designed to measure the entire complement of a cell's genes or gene products has led to vast stores of data that are extremely plentiful in terms of the number of items they can measure in a single sample, yet often sparse in the number of samples per experiment due to their high cost. This often leads to datasets where the number of treatment levels or time points sampled is limited, or where there are very small numbers of technical and/or biological replicates. Here we introduce a novel algorithm to quantify the uncertainty in the unmeasured intervals between biological measurements taken across a set of quantitative treatments. The algorithm provides a probabilistic distribution of possible gene expression values within unmeasured intervals, based on a plausible biological constraint. We show how quantification of this uncertainty can be used to guide researchers in further data collection by identifying which samples would likely add the most information to the system under study. Although the context for developing the algorithm was gene expression measurements taken over a time series, the approach can be readily applied to any set of quantitative systems biology measurements taken following quantitative (i.e., non-categorical) treatments. In principle, the method could also be applied to combinations of treatments, in which case it could greatly simplify the task of exploring the large combinatorial space of future possible measurements.
Uncertainty estimates for the Bayes Inference Engine, (BIE)
Energy Technology Data Exchange (ETDEWEB)
Beery, Thomas A [Los Alamos National Laboratory
2009-01-01
In the fall 2007 meeting of the BIE users group, two approaches to making uncertainty estimates were presented. Ken Hanson asserted that if the BFGS optimizer was used, the inverse Hessian matrix was the same as the covariance matrix representing parameter uncertainties. John Pang presented preliminary results of a Monte Carlo method called Randomized Maximum Likelihood (RML). The BFGS/Hessian matrix approach may be applied in the region of the 'ideal model': approximately 250 parameters describing the object density patches are varied to match an image of 1,000,000 pixels. I cast this in terms of least squares analysis, as it is much better understood. This is not as large a conceptual jump as some suppose, because many of the functional blocks in the BIE are taken directly from existing least squares programs. If a Gaussian (normal) probability density function is assumed for both the observation and parameter errors, the Bayesian and least squares results should be identical.
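Hanson's assertion, that under a Gaussian assumption the inverse Hessian of the objective at the optimum gives the parameter covariance, can be checked on a one-parameter least squares toy. The data here are invented, not BIE output; for estimating a mean from n points with known noise sigma, the inverse Hessian should recover the familiar sigma²/n.

```python
def gaussian_nll_hessian(data, sigma, mu, h=1e-4):
    """Numerically evaluate the (here 1x1) Hessian of the negative
    log-likelihood at mu via a central second difference; its inverse
    is the parameter variance under the Gaussian approximation."""
    def nll(m):
        # negative log-likelihood up to an additive constant
        return sum((x - m) ** 2 for x in data) / (2.0 * sigma ** 2)
    return (nll(mu + h) - 2.0 * nll(mu) + nll(mu - h)) / h ** 2

data = [1.1, 0.9, 1.0, 1.2, 0.8]   # illustrative observations
sigma = 0.1                         # assumed known observation noise
mu_hat = sum(data) / len(data)      # least squares / ML estimate of the mean
var_mu = 1.0 / gaussian_nll_hessian(data, sigma, mu_hat)
# analytic check: the variance of the mean is sigma**2 / n
```

For a quadratic objective the central difference is essentially exact, so the numerical inverse Hessian matches the analytic sigma²/n.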
Parsekian, A D; Dlubac, K; Grunewald, E; Butler, J J; Knight, R; Walsh, D O
2015-01-01
Characterization of hydraulic conductivity (K) in aquifers is critical for evaluation, management, and remediation of groundwater resources. While estimates of K have been traditionally obtained using hydraulic tests over discrete intervals in wells, geophysical measurements are emerging as an alternative way to estimate this parameter. Nuclear magnetic resonance (NMR) logging, a technology once largely applied to characterization of deep consolidated rock petroleum reservoirs, is beginning to see use in near-surface unconsolidated aquifers. Using a well-known rock physics relationship, the Schlumberger-Doll Research (SDR) equation, K and porosity can be estimated from NMR water content and relaxation time. Calibration of SDR parameters is necessary for this transformation because NMR relaxation properties are, in part, a function of magnetic mineralization and pore space geometry, which are locally variable quantities. Here, we present a statistically based method for calibrating SDR parameters that establishes a range for the estimated parameters and simultaneously estimates the uncertainty of the resulting K values. We used co-located logging NMR and direct K measurements in an unconsolidated fluvial aquifer in Lawrence, Kansas, USA to demonstrate that K can be estimated using logging NMR to a similar level of uncertainty as with traditional direct hydraulic measurements in unconsolidated sediments under field conditions. Results of this study provide a benchmark for future calibrations of NMR to obtain K in unconsolidated sediments and suggest a method for evaluating uncertainty in both K and SDR parameter values. © 2014, National Ground Water Association.
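A minimal sketch of the SDR-style transform discussed above. The constants b, m, and n below are illustrative placeholders, not the study's calibrated values; the whole point of the study is that these parameters must be calibrated site by site, which is the step whose uncertainty it quantifies.

```python
def sdr_hydraulic_conductivity(phi, t2ml, b=4.6e-3, m=4.0, n=2.0):
    """SDR-style transform from NMR-derived porosity phi and mean-log
    relaxation time t2ml to hydraulic conductivity K = b * phi**m * t2ml**n.
    b, m, n are placeholder values; in practice they are locally calibrated."""
    return b * phi ** m * t2ml ** n

# illustrative inputs: 30% porosity, 0.2 s mean-log relaxation time
K = sdr_hydraulic_conductivity(phi=0.30, t2ml=0.2)
```

The power-law form makes the calibration sensitivity obvious: with n = 2, doubling the relaxation time quadruples the predicted K, so uncertainty in the exponents propagates strongly into K.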
Uncertainty Estimation in SiGe HBT Small-Signal Modeling
DEFF Research Database (Denmark)
Masood, Syed M.; Johansen, Tom Keinicke; Vidkjær, Jens
2005-01-01
An uncertainty estimation and sensitivity analysis are performed on multi-step de-embedding for SiGe HBT small-signal modeling. The uncertainty estimation, in combination with an uncertainty model for the deviation in measured S-parameters, quantifies the possible error in the de-embedded two-port param...
Chaparro Molano, G.; Restrepo Gaitán, O. A.; Cuervo Marulanda, J. C.; Torres Arzayus, S. A.
2018-01-01
Obtaining individual estimates for uncertainties in redshift-independent galaxy distance measurements can be challenging, as for each galaxy there can be many distance estimates with non-Gaussian distributions, some of which may not even have a reported uncertainty. We seek to model uncertainties using a bootstrap sampling of measurements per galaxy per distance estimation method. We then create a predictive Bayesian model for estimating galaxy distance uncertainties that is better than simply using a weighted standard deviation. This can be a first step toward predicting distance uncertainties for future catalog-wide analysis.
Uncertainties associated with parameter estimation in atmospheric infrasound arrays.
Szuberla, Curt A L; Olson, John V
2004-01-01
This study describes a method for determining the statistical confidence in estimates of direction-of-arrival and trace velocity stemming from signals present in atmospheric infrasound data. It is assumed that the signal source is far enough removed from the infrasound sensor array that a plane-wave approximation holds, and that multipath and multiple source effects are not present. Propagation path and medium inhomogeneities are assumed not to be known at the time of signal detection, but the ensemble of time delays of signal arrivals between array sensor pairs is estimable and corrupted by uncorrelated Gaussian noise. The method results in a set of practical uncertainties that lend themselves to a geometric interpretation. Although quite general, this method is intended for use by analysts interpreting data from atmospheric acoustic arrays, or those interested in designing and deploying them. The method is applied to infrasound arrays typical of those deployed as a part of the International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty Organization.
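Under the plane-wave assumption described above, the horizontal slowness vector (and hence the direction-of-arrival and trace velocity) follows from the ensemble of pairwise time delays by linear least squares: each sensor-pair baseline (dx, dy) and measured delay tau satisfy dx·sx + dy·sy = tau. A minimal sketch with an invented four-baseline array and noise-free delays:

```python
import math

def slowness_from_delays(pairs, delays):
    """Least-squares plane-wave fit: solve the 2x2 normal equations for
    the horizontal slowness (sx, sy) from pairwise baselines and delays."""
    a11 = sum(dx * dx for dx, dy in pairs)
    a12 = sum(dx * dy for dx, dy in pairs)
    a22 = sum(dy * dy for dx, dy in pairs)
    b1 = sum(dx * t for (dx, dy), t in zip(pairs, delays))
    b2 = sum(dy * t for (dx, dy), t in zip(pairs, delays))
    det = a11 * a22 - a12 * a12
    sx = (a22 * b1 - a12 * b2) / det
    sy = (a11 * b2 - a12 * b1) / det
    return sx, sy

# synthetic array: baselines in km, wave arriving along +x at 0.34 km/s
true_s = (1.0 / 0.34, 0.0)                     # slowness vector (s/km)
pairs = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 1.0)]
delays = [dx * true_s[0] + dy * true_s[1] for dx, dy in pairs]
sx, sy = slowness_from_delays(pairs, delays)
trace_velocity = 1.0 / math.hypot(sx, sy)
```

In the study the delays are corrupted by uncorrelated Gaussian noise, so the same normal equations also yield the covariance of (sx, sy), which is what gives the geometric confidence regions.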
Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests
Rathsam, Jonathan; Christian, Andrew
2016-01-01
Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method from frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap (a frequentist method), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
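Of the techniques compared above, the Nonparametric Bootstrap is the easiest to sketch: resample the subject data with replacement many times, recompute the group-average statistic each time, and take percentiles of the resampled statistics. The threshold values below are invented for illustration; the actual datasets are NASA's.

```python
import random
import statistics

def bootstrap_ci(values, stat=statistics.mean, n_boot=5000, alpha=0.05, seed=7):
    """Nonparametric percentile-bootstrap confidence interval for a
    group-average statistic."""
    rng = random.Random(seed)
    n = len(values)
    reps = sorted(stat([values[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * (n_boot - 1))]
    hi = reps[int((1 - alpha / 2) * (n_boot - 1))]
    return lo, hi

# illustrative detection thresholds (dB) from a small group test
thresholds = [41.2, 43.5, 39.8, 42.1, 40.6, 44.0, 41.9, 42.7]
lo, hi = bootstrap_ci(thresholds)
```

This is also why the abstract notes the Bootstrap "takes the longest to calculate": the statistic is recomputed thousands of times rather than once.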
Uncertainty of mass discharge estimates from contaminated sites using a fully Bayesian framework
DEFF Research Database (Denmark)
Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John
2011-01-01
Mass discharge estimates are increasingly being used in the management of contaminated sites, and uncertainties related to such estimates are therefore of great practical importance. We present a rigorous approach for quantifying the uncertainty in the mass discharge across a multilevel control plane. The method accounts for: (1) conceptual model uncertainty through Bayesian model averaging, (2) heterogeneity through Bayesian geostatistics with an uncertain geostatistical model, and (3) measurement uncertainty. An ensemble of unconditional steady-state plume realizations is generated through...
GLUE Based Marine X-Band Weather Radar Data Calibration and Uncertainty Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Ellerbæk; Beven, Keith; Thorndahl, Søren Liedtke
2015-01-01
The Generalized Likelihood Uncertainty Estimation methodology (GLUE) is investigated for radar rainfall calibration and uncertainty assessment. The method is used to calibrate radar data collected by a Local Area Weather Radar (LAWR). In contrast to other LAWR data calibrations, the method combines calibration with uncertainty estimation. Instead of searching for a single set of calibration parameters, the method uses the observations to construct distributions of the calibration parameters. These parameter sets provide valuable knowledge of parameter sensitivity and the uncertainty. Two approaches ... improves the performance significantly. It is found that even if the dynamic adjustment method is used, the uncertainty of rainfall estimates can still be significant...
Uncertainty estimation and reconstruction of historical streamflow records
Kuentz, A.; Mathevet, T.; Perret, C.; Andréassian, V.
2012-04-01
Long historical series of streamflow are a precious source of information in the context of hydrological studies, such as the search for trends or breaks due to climate variability or anthropogenic influences. For this kind of study, it can be very important to go back as far as possible in the past, in order to highlight the information content of historical observations. In our research we concentrate on the Durance watershed (14000 km2) in order to understand last century's (1900-2010) hydrological variability due to climate changes and/or anthropogenic influences. This watershed, situated in the Alps, is characterized by variable hydrological processes (from snowy to Mediterranean regimes) and a wide range of anthropogenic influences (hydropower generation, irrigation, industries, drinking water, etc.). We are convinced that this research is necessary before any climate and hydrological projection. Documentary research, led in collaboration with a historian, located about ten long streamflow series from the beginning of the 20th century on the Durance watershed. The analysis of these series is necessary to better understand the natural hydrological behavior of the watershed before the development of most of the anthropogenic influences. If the usefulness of such long streamflow series is obvious, they have some limitations, one of them being their heterogeneity, which can have many origins: a shift of the gauging station, changes in the anthropogenic influences, or an evolution in the methods used to build the series. Before their interpretation in terms of climate or land use changes, uncertainty estimation of historical streamflow records is therefore very important to assess data quality and homogeneity over time. This paper focuses on the estimation of the uncertainty in historical streamflow records due to the evolution of their construction methods. Since the beginning of the 20th century, we have listed three main methods of construction of daily
DEFF Research Database (Denmark)
Luczak, Marcin; Peeters, Bart; Kahsin, Maciej
2014-01-01
Aerospace and wind energy structures make extensive use of components made of composite materials. Since these structures are subjected to dynamic environments with time-varying loading conditions, it is important to model their dynamic behavior and validate these models by means of vibration...... for uncertainty evaluation in experimentally estimated models. The investigated structures are plates, fuselage panels and helicopter main rotor blades, as they represent different complexity levels ranging from coupon through sub-component up to fully assembled structures made of composite materials. To evaluate...
Evaluating Prognostics Performance for Algorithms Incorporating Uncertainty Estimates
National Aeronautics and Space Administration — Uncertainty Representation and Management (URM) are an integral part of prognostic system development. As capabilities of prediction algorithms evolve, research...
Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.
2017-08-01
A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecasts and observations. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and the Brue. Retrospective forecasts were made and their uncertainties estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from being simple and well understood in concept, this method has the advantage of being relatively easy to implement.
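The core of such a kNN error-resampling post-processor can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the Euclidean distance, the interpolated quantiles, and the toy history below are all assumptions.

```python
import math

def knn_uncertainty_interval(query, history, k=5, quantiles=(0.05, 0.95)):
    """Residual-uncertainty interval for a deterministic forecast.

    history: (predictor_vector, error) pairs, error = observation - forecast.
    Returns the requested quantiles of the k nearest neighbours' errors.
    """
    def dist(a, b):  # Euclidean distance; predictors assumed pre-scaled
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    neighbours = sorted(history, key=lambda h: dist(query, h[0]))[:k]
    errors = sorted(e for _, e in neighbours)

    def quantile(p):  # linear interpolation between order statistics
        idx = p * (len(errors) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(errors) - 1)
        frac = idx - lo
        return errors[lo] * (1 - frac) + errors[hi] * frac

    return tuple(quantile(p) for p in quantiles)

# Hypothetical history: one predictor, alternating-sign errors that grow
# with the predictor value.
hist = [((float(i),), (-1) ** i * 0.1 * i) for i in range(1, 21)]
low, high = knn_uncertainty_interval((10.0,), hist, k=7)
```

In practice the predictors (e.g. forecast value, lead time, season) would be normalised before the distance computation, which is what the pre-scaling comment assumes.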
Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.
2016-12-01
Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
Directory of Open Access Journals (Sweden)
Danuta Owczarek
2015-08-01
Full Text Available The paper presents a method for estimating the uncertainty of optical coordinate measurement based on the use of information about the geometry and size of the measured object, as well as information about the measurement system, i.e. the maximum permissible error (MPE) of the machine, the selection of a sensor, the required measurement accuracy, the number of operators, the measurement strategy and the external conditions, contained in the developed uncertainty database. The uncertainty is estimated using the uncertainties of measurements of basic geometry elements determined by methods available in the Laboratory of Coordinate Metrology at Cracow University of Technology (LCM CUT) (the multi-position method, the comparative method, and a method developed in the LCM CUT dedicated to non-contact measurements), which are then used to determine the uncertainty of a given measured object. The research presented in this paper is aimed at developing a complete database containing all information needed to estimate the measurement uncertainty of various objects, even of very complex geometry, based on previously performed measurements.
Estimation of Uncertainties in the Global Distance Test (GDT_TS) for CASP Models.
Li, Wenlin; Schaeffer, R Dustin; Otwinowski, Zbyszek; Grishin, Nick V
2016-01-01
The Critical Assessment of techniques for protein Structure Prediction (or CASP) is a community-wide blind test experiment to reveal the best accomplishments of structure modeling. Assessors have been using the Global Distance Test (GDT_TS) measure to quantify prediction performance since CASP3 in 1998. However, identifying significant score differences between close models is difficult because of the lack of uncertainty estimates for this measure. Here, we utilized the atomic fluctuations caused by structural flexibility to estimate the uncertainty of GDT_TS scores. Structures determined by nuclear magnetic resonance are deposited as ensembles of alternative conformers that reflect the structural flexibility, whereas standard X-ray refinement produces the static structure averaged over time and space for the dynamic ensembles. To recapitulate the structurally heterogeneous ensemble in the crystal lattice, we performed time-averaged refinement of X-ray datasets to generate structural ensembles for our GDT_TS uncertainty analysis. Using those generated ensembles, our study demonstrates that the time-averaged refinements produced structure ensembles in better agreement with the experimental datasets than the averaged X-ray structures with B-factors. The uncertainty of the GDT_TS scores, quantified by their standard deviations (SDs), increases for scores lower than 50 and 70, with maximum SDs of 0.3 and 1.23 for X-ray and NMR structures, respectively. We also applied our procedure to the high-accuracy version of the GDT-based score and produced similar results with slightly higher SDs. To facilitate score comparisons by the community, we developed a user-friendly web server that produces structure ensembles for NMR and X-ray structures and is accessible at http://prodata.swmed.edu/SEnCS. Our work helps to identify the significance of GDT_TS score differences, as well as to provide structure ensembles for estimating SDs of any scores.
Bias and robustness of uncertainty components estimates in transient climate projections
Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections, along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty, and internal variability. Quantifying these different uncertainty sources raises distinct problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members classically available for most modeling chains. In this case, the climate response of a given chain and the effect of its internal variability may be difficult, if not impossible, to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely to lack robustness. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based only on the data available for the considered projection lead time, and a time-series-based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two sources only: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias
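The single-time variance partition discussed above can be illustrated with a minimal sketch: model uncertainty is the variance of the chain means, debiased by subtracting the internal-variability contribution divided by the number of members. The function name and the toy ensemble are illustrative, not from the paper.

```python
import statistics

def anova_uncertainty(chains):
    """Single-time ANOVA split of projection spread into model uncertainty
    and internal variability.

    chains: list of lists; chains[m] holds the ensemble members of
    modelling chain m. Returns (model_var, internal_var). The naive
    variance of chain means overestimates model uncertainty by
    internal_var / n_members, which is subtracted out here.
    """
    internal_var = statistics.mean(statistics.variance(c) for c in chains)
    means = [statistics.mean(c) for c in chains]
    n_members = min(len(c) for c in chains)  # assumes a roughly balanced design
    model_var = max(statistics.variance(means) - internal_var / n_members, 0.0)
    return model_var, internal_var

# Illustrative ensemble: 3 modelling chains x 4 members each.
chains = [[1.0, 1.2, 0.9, 1.1], [2.0, 2.1, 1.9, 2.0], [1.5, 1.4, 1.6, 1.5]]
model_var, internal_var = anova_uncertainty(chains)
```

With very few members per chain the correction term matters; with a single member per chain, model uncertainty and internal variability cannot be separated at all, which is the difficulty the abstract points to.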
ANN Approach for State Estimation of Hybrid Systems and Its Experimental Validation
Directory of Open Access Journals (Sweden)
Shijoh Vellayikot
2015-01-01
Full Text Available A novel artificial neural network based state estimator has been proposed to ensure robustness in the state estimation of autonomous switching hybrid systems under various uncertainties. Taking an autonomous switching three-tank system as the benchmark hybrid model, working under various additive and multiplicative uncertainties such as process noise, measurement error, process-model parameter variation, initial state mismatch, and hand-valve faults, a real-time performance evaluation was carried out by comparing it with other state estimators such as the extended Kalman filter and the unscented Kalman filter. The experimental results reported with the proposed approach show considerable improvement in robustness of performance under the considered uncertainties.
Estimation of flow accumulation uncertainty by Monte Carlo stochastic simulations
Višnjevac Nenad; Cvijetinović Željko; Bajat Branislav; Radić Boris; Ristić Ratko; Milčanović Vukašin
2013-01-01
Very often, the outputs provided by GIS functions and analyses are assumed to be exact results. However, they are affected by a certain degree of uncertainty, which may influence the decisions based on those results. It is very complex, and almost impossible, to calculate that uncertainty using classical mathematical models, because of the very complex algorithms used in GIS analyses. In this paper we discuss an alternative method, i.e. the use of stochastic Monte Carlo simul...
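The Monte Carlo idea for a generic raster analysis can be sketched as follows, under the assumption of independent Gaussian cell errors. The toy `max_row_slope` function merely stands in for a real flow-accumulation algorithm; all names and values are illustrative.

```python
import random
import statistics

def monte_carlo_uncertainty(grid, gis_function, sigma=0.5, n_runs=200, seed=42):
    """Propagate input raster uncertainty through a GIS operation by
    Monte Carlo simulation: perturb every cell with Gaussian noise,
    rerun the analysis, and summarise the spread of the scalar output."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_runs):
        noisy = [[z + rng.gauss(0.0, sigma) for z in row] for row in grid]
        outputs.append(gis_function(noisy))
    return statistics.mean(outputs), statistics.stdev(outputs)

# Toy stand-in for a real GIS analysis (e.g. flow accumulation):
# the maximum elevation difference between neighbouring cells in a row.
def max_row_slope(grid):
    return max(abs(b - a) for row in grid for a, b in zip(row, row[1:]))

dem = [[10.0, 10.5, 11.0], [10.2, 10.8, 11.5], [10.1, 10.6, 11.2]]
mean_out, sd_out = monte_carlo_uncertainty(dem, max_row_slope)
```

The output standard deviation `sd_out` is the Monte Carlo estimate of the uncertainty that a closed-form error model could not easily provide.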
Lee, Sooyeun; Choi, Hyeyoung; Kim, Eunmi; Choi, Hwakyung; Chung, Heesun; Chung, Kyu Hyuck
2010-05-01
The measurement uncertainty (MU) of methamphetamine (MA) and amphetamine (AP) was estimated in an authentic urine sample with relatively low concentrations of MA and AP using the bottom-up approach. A cause-and-effect diagram was deduced; the amount of MA or AP in the sample, the volume of the sample, method precision, and sample effect were considered as uncertainty sources. The concentrations of MA and AP in the urine sample with their expanded uncertainties were 340.5 +/- 33.2 ng/mL and 113.4 +/- 15.4 ng/mL, respectively; that is, the expanded uncertainties amounted to 9.7% and 13.6% of the respective concentrations. The largest uncertainty originated from the sample effect for MA and from method precision for AP, whereas the uncertainty of the sample volume was minimal in both. The MU needs to be determined during the method validation process to assess test reliability. Moreover, the identification of the largest and/or smallest uncertainty sources can help improve experimental protocols.
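The bottom-up combination behind such a budget can be sketched as follows: component standard uncertainties are combined in quadrature and multiplied by a coverage factor. The four component values below are invented placeholders, not the paper's actual budget.

```python
import math

def expanded_uncertainty(components, k=2.0):
    """GUM-style bottom-up combination: standard uncertainty components
    (same units as the result) are combined in quadrature, then
    multiplied by the coverage factor k (k = 2 for ~95 % coverage)."""
    return k * math.sqrt(sum(u ** 2 for u in components))

# Illustrative budget for a 340.5 ng/mL result: amount of analyte, sample
# volume, method precision, sample effect (values invented for this sketch).
U = expanded_uncertainty([12.0, 2.0, 8.0, 9.0])
rel = 100.0 * U / 340.5  # expanded uncertainty as a percentage of the result
```

Expressing `U` relative to the measured concentration is what yields figures like the 9.7% and 13.6% quoted in the abstract.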
Lahiri, B. B.; Ranoo, Surojit; Philip, John
2017-11-01
Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology, in which the alternating-magnetic-field-induced heating of a magnetic fluid is utilized for ablating cancerous cells or making them more susceptible to conventional treatments. The heating efficiency in MFH is quantified in terms of the specific absorption rate (SAR), defined as the heating power generated per unit mass. In the majority of experimental studies, SAR is evaluated from temperature rise curves obtained under non-adiabatic experimental conditions, which is prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR values. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample, and spatial variation in the temperature profile within the sample. Using first-order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with results obtained from the computationally intense slope-corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and the slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area-to-volume ratio and the coil temperature. It is also observed that peripheral heating decreases in the presence of a non-magnetic insulating shielding. The delayed heating is found to contribute up to ~25% uncertainty in SAR values. As the SAR values are very sensitive to the initial slope determination method, explicit mention of the range of the linear regression analysis is appropriate to reproduce the results. The effect of sample volume-to-area ratio on the linear heat loss rate is systematically studied and the
Energy Technology Data Exchange (ETDEWEB)
Kristof, Marian [AiNS, Na hlinach 51, 917 01 Trnava (Slovakia); Department of Nuclear Physics and Technology, Slovak University of Technology, Ilkovicova 3, 812 19 Bratislava (Slovakia)], E-mail: marian.kristof@ains.sk; Kliment, Tomas [VUJE a.s., Okruzna 5, 918 64 Trnava (Slovakia); Petruzzi, Alessandro [Nuclear Research Group of San Piero a Grado, University of Pisa (Italy); Lipka, Jozef [Department of Nuclear Physics and Technology, Slovak University of Technology, Ilkovicova 3, 812 19 Bratislava (Slovakia)
2009-11-15
Licensing calculations in a majority of countries worldwide still rely on the application of a combined approach: a best estimate computer code without evaluation of the code model uncertainty, together with conservative assumptions on initial and boundary conditions, on the availability of systems and components, and additional conservative assumptions. However, the best estimate plus uncertainty (BEPU) approach, representing the state of the art in the area of safety analysis, has a clear potential to replace the currently used combined approach. There are several applications of the BEPU approach to licensing calculations, but some questions remain under discussion, notably from the regulatory point of view. In order to find a proper answer to these questions, and to support the BEPU approach in becoming a standard approach for licensing calculations, a broad comparison of both approaches for various transients is necessary. The results of one such comparison, on the example of the VVER-440/213 NPP pressurizer surge line break event, are described in this paper. A Kv-scaled simulation based on the PH4-SLB experiment from the PMK-2 integral test facility, applying its volume and power scaling factors, is performed for the qualitative assessment of the RELAP5 computer code calculation using the VVER-440/213 plant model. Existing hardware differences are identified and explained. The CIAU method is adopted for performing the uncertainty evaluation. Results using the combined and BEPU approaches are in agreement with the experimental values from the PMK-2 facility. Only a minimal difference between the combined and BEPU approaches has been observed in the evaluation of the safety margins for the peak cladding temperature. Benefits of the CIAU uncertainty method are highlighted.
Influence of parameter estimation uncertainty in Kriging: Part 1 - Theoretical Development
Todini, E.
2001-01-01
This paper deals with a theoretical approach to assessing the effects of parameter estimation uncertainty both on Kriging estimates and on their estimated error variance. Although a comprehensive treatment of parameter estimation uncertainty is covered by full Bayesian Kriging at the cost of extensive numerical integration, the proposed approach has a wide field of application, given its relative simplicity. The approach is based upon a truncated Taylor expansion approximation and, within the...
Estimation of Uncertainty in Risk Assessment of Hydrogen Applications
DEFF Research Database (Denmark)
Markert, Frank; Krymsky, V.; Kozine, Igor
2011-01-01
Hydrogen technologies such as hydrogen fuelled vehicles and refuelling stations are being tested in practice in a number of projects (e.g. HyFleet-Cute and Whistler project) giving valuable information on the reliability and maintenance requirements. In order to establish refuelling stations...... and extrapolations to be made. Therefore, the QRA results will contain varying degrees of uncertainty as some components are well established while others are not. The paper describes a methodology to evaluate the degree of uncertainty in data for hydrogen applications based on the bias concept of the total...... probability and the NUSAP concept to quantify uncertainties of new not fully qualified hydrogen technologies and implications to risk management....
Directory of Open Access Journals (Sweden)
Jae Phil Park
2016-12-01
Full Text Available It is extremely difficult to predict the initiation time of cracking due to a large time spread in most cracking experiments. Thus, probabilistic models, such as the Weibull distribution, are usually employed to model the initiation time of cracking. Therefore, the parameters of the Weibull distribution are estimated from data collected from a cracking test. However, although the development of a reliable cracking model under ideal experimental conditions (e.g., a large number of specimens and narrow censoring intervals could be achieved in principle, it is not straightforward to quantitatively assess the effects of the ideal experimental conditions on model estimation uncertainty. The present study investigated the effects of key experimental conditions, including the time-dependent effect of the censoring interval length, on the estimation uncertainties of the Weibull parameters through Monte Carlo simulations. The simulation results provided quantified estimation uncertainties of Weibull parameters in various cracking test conditions. Hence, it is expected that the results of this study can offer some insight for experimenters developing a probabilistic crack initiation model by performing experiments.
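The kind of simulation study described above can be sketched as follows. This is a hedged illustration, not the authors' exact procedure: the Weibull-probability-plot estimator, the inspection-interval censoring scheme, and all parameter values are assumptions.

```python
import math
import random
import statistics

def fit_weibull_plot(times):
    """Estimate Weibull (shape, scale) by least squares on the Weibull
    probability plot, using median-rank plotting positions."""
    n = len(times)
    xs = [math.log(t) for t in sorted(times)]
    ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))
          for i in range(1, n + 1)]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    shape = slope
    scale = math.exp(mx - my / slope)
    return shape, scale

def shape_estimation_sd(n_specimens, interval, shape=2.0, scale=100.0,
                        n_sim=300, seed=1):
    """Monte Carlo spread of the estimated shape parameter when crack
    initiation is only observed at the end of inspection intervals."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_sim):
        raw = [rng.weibullvariate(scale, shape) for _ in range(n_specimens)]
        # Interval censoring: record the inspection time at which each
        # crack is first seen.
        seen = [interval * math.ceil(t / interval) for t in raw]
        estimates.append(fit_weibull_plot(seen)[0])
    return statistics.stdev(estimates)

# Compare a fine vs a coarse inspection interval for 30 specimens
# (coarser intervals would typically inflate the estimation uncertainty).
sd_fine = shape_estimation_sd(30, interval=5.0)
sd_coarse = shape_estimation_sd(30, interval=50.0)
```

Repeating such simulations over a grid of specimen counts and interval lengths is the kind of sensitivity map the study reports.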
Park, Jae Phil; Park, Chanseok; Cho, Jongweon; Bahn, Chi Bum
2016-12-23
Energy Technology Data Exchange (ETDEWEB)
Habte, A.; Sengupta, M.; Reda, I.
2015-03-01
Radiometric data with known and traceable uncertainty are essential for climate change studies, to better understand cloud-radiation interactions and the Earth's radiation budget. Further, adopting a known and traceable method of estimating uncertainty with respect to SI ensures that the uncertainties quoted for radiometric measurements can be compared on the basis of documented methods of derivation. Statements about the overall measurement uncertainty can therefore only be made on an individual basis, taking all relevant factors into account. This poster provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements from radiometers. The approach follows the Guide to the Expression of Uncertainty in Measurement (GUM).
Evaluation of uncertainty in field soil moisture estimations by cosmic-ray neutron sensing
Scheiffele, Lena Maria; Baroni, Gabriele; Schrön, Martin; Ingwersen, Joachim; Oswald, Sascha E.
2017-04-01
Cosmic-ray neutron sensing (CRNS) has developed into a valuable, indirect and non-invasive method to estimate soil moisture at the field scale. It provides continuous temporal data (hours to days), relatively large depth (10-70 cm), and intermediate-spatial-scale measurements (hundreds of meters), thereby overcoming some of the limitations of point measurements (e.g., TDR/FDR) and of remote sensing products. All these characteristics make CRNS a favorable approach for soil moisture estimation, especially for applications in cropped fields and agricultural water management. Various studies compare CRNS measurements to soil sensor networks and show good agreement. However, CRNS is sensitive to further characteristics of the land surface, e.g. additional hydrogen pools, soil bulk density, and biomass. Prior to calibration, standard atmospheric corrections account for the effects of air pressure, humidity and variations in incoming neutrons. In addition, the standard calibration approach was further extended to account for hydrogen in lattice water and soil organic material. Some corrections were also proposed to account for water in biomass. Moreover, the sensitivity of the probe was found to decrease with distance, and a weighting procedure for the calibration datasets was introduced to account for the sensor's radial sensitivity. On the one hand, all the mentioned corrections have been shown to improve the accuracy of the estimated soil moisture values. On the other hand, they require substantial additional monitoring effort and could inherently contribute to the overall uncertainty of the CRNS product. In this study we aim (i) to quantify the uncertainty in the field soil moisture estimated by CRNS and (ii) to understand the role of the different sources of uncertainty. To this end, two experimental sites in Germany were equipped with a CRNS probe and compared to values from a soil moisture network. The agricultural fields were cropped with winter
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
Czech Academy of Sciences Publication Activity Database
Yu, X.; Lamačová, Anna; Duffy, Ch.; Krám, P.; Hruška, Jakub
2016-01-01
Roč. 90, part B (2016), s. 90-101 ISSN 0098-3004 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:67179843 Keywords: Uncertainty * Evapotranspiration * Forest management * PIHM * Biome-BGC Subject RIV: EH - Ecology, Behaviour Impact factor: 2.533, year: 2016
Uncertainty related to Environmental Data and Estimated Extreme Events
DEFF Research Database (Denmark)
Burcharth, H. F.
The design loads on rubble mound breakwaters are almost entirely determined by the environmental conditions, i.e. sea state, water levels, sea bed characteristics, etc. It is the objective of sub-group B to identify the most important environmental parameters and evaluate the related uncertaintie...
Dalla Chiara, Maria Luisa
2010-09-01
In contemporary science uncertainty is often represented as an intrinsic feature of natural and of human phenomena. As an example we need only think of two important conceptual revolutions that occurred in physics and logic during the first half of the twentieth century: (1) the discovery of Heisenberg's uncertainty principle in quantum mechanics; (2) the emergence of many-valued logical reasoning, which gave rise to so-called 'fuzzy thinking'. I discuss the possibility of applying the notions of uncertainty, developed in the framework of quantum mechanics, quantum information and fuzzy logics, to some problems of political and social sciences.
Directory of Open Access Journals (Sweden)
Gunter Spöck
2015-05-01
Full Text Available Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed into an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data are transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
Improving uncertainty estimates: Inter-annual variability in Ireland
Pullinger, D.; Zhang, M.; Hill, N.; Crutchley, T.
2017-11-01
This paper addresses the uncertainty associated with inter-annual variability (IAV) used within wind resource assessments for Ireland, in order to more accurately represent the uncertainties within wind resource and energy yield assessments. The study was undertaken using a total of 16 ground stations (Met Éireann) and corresponding reanalysis datasets, providing an update to previous work on this topic undertaken nearly 20 years ago. The results demonstrate that the previously reported 5.4% inter-annual variability of wind speed remains appropriate, and guidance is given on how to provide a robust assessment of IAV using available sources of data, including ground stations, MERRA-2 and ERA-Interim.
Lyn, Jennifer A; Ramsey, Michael H; Damant, Andrew P; Wood, Roger
2007-12-01
Measurement uncertainty is a vital issue within analytical science. There are strong arguments that primary sampling should be considered the first, and perhaps the most influential, step in the measurement process. Increasingly, analytical laboratories are required to report measurement results to clients together with estimates of the uncertainty. Furthermore, these estimates can be used when pursuing regulation enforcement to decide whether a measured analyte concentration is above a threshold value. Given its recognised importance in analytical measurement, the question arises of what is the most appropriate method to estimate the measurement uncertainty. Two broad methods for uncertainty estimation are identified: the modelling method and the empirical method. In modelling, the estimation of uncertainty involves the identification, quantification and summation (as variances) of each potential source of uncertainty. This approach has been applied to purely analytical systems, but becomes increasingly problematic in identifying all such sources when it is applied to primary sampling. Applications of this methodology to sampling often utilise long-established theoretical models of sampling and adopt the assumption that a 'correct' sampling protocol will ensure a representative sample. The empirical approach to uncertainty estimation involves replicated measurements from inter-organisational trials and/or internal method validation and quality control. A simpler method involves duplicating sampling and analysis, by one organisation, for a small proportion of the total number of samples. This has proven to be a suitable alternative to these often expensive and time-consuming trials, in routine surveillance and one-off surveys, especially where heterogeneity is the main source of uncertainty. A case study of aflatoxins in pistachio nuts is used to broadly demonstrate the strengths and weaknesses of the two methods of uncertainty estimation. The estimate
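The duplicate method mentioned above admits a very small sketch: since each difference between duplicate results carries two independent measurement errors, the standard uncertainty follows from half the mean squared difference. The paired results below are invented for illustration.

```python
import math

def duplicate_method_uncertainty(pairs, k=2.0):
    """Empirical ('duplicate method') estimate of measurement uncertainty.

    pairs: duplicate (sampling + analysis) results for a subset of samples.
    The standard uncertainty is s = sqrt(mean(d^2) / 2), because each
    paired difference d contains two independent error contributions.
    Returns (standard uncertainty, expanded uncertainty k*s).
    """
    diffs = [a - b for a, b in pairs]
    s = math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
    return s, k * s

# Hypothetical duplicate results (e.g. aflatoxin, ug/kg) on 5 samples.
pairs = [(4.1, 3.9), (5.0, 5.4), (2.2, 2.0), (7.8, 7.1), (3.3, 3.5)]
s, U = duplicate_method_uncertainty(pairs)
```

Duplicating only a small proportion of routine samples in this way is what makes the approach cheap compared with a full inter-organisational trial.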
Ariza, Adriana Alexandra Aparicio; Ayala Blanco, Elizabeth; García Sánchez, Luis Eduardo; García Sánchez, Carlos Eduardo
2015-06-01
Natural gas is a mixture that contains hydrocarbons and other compounds, such as CO2 and N2. Natural gas composition is commonly measured by gas chromatography, and this measurement is important for the calculation of some thermodynamic properties that determine its commercial value. The estimation of uncertainty in chromatographic measurement is essential for an adequate presentation of the results and a necessary tool for supporting decision making. Various approaches have been proposed for the uncertainty estimation in chromatographic measurement. The present work is an evaluation of three approaches of uncertainty estimation, where two of them (guide to the expression of uncertainty in measurement method and prediction method) were compared with the Monte Carlo method, which has a wider scope of application. The aforementioned methods for uncertainty estimation were applied to gas chromatography assays of three different samples of natural gas. The results indicated that the prediction method and the guide to the expression of uncertainty in measurement method (in the simple version used) are not adequate to calculate the uncertainty in chromatography measurement, because uncertainty estimations obtained by those approaches are in general lower than those given by the Monte Carlo method. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
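The gap the abstract reports between first-order (GUM) propagation and Monte Carlo propagation is easy to reproduce on a toy composition-style measurand. The sketch below is illustrative only: the peak areas, their uncertainties, and the normalization model are assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy measurand: a mole fraction computed from chromatographic peak areas,
# x = A_0 / (A_0 + A_1 + A_2). Areas and uncertainties are hypothetical.
areas = np.array([880.0, 70.0, 50.0])   # peak areas (arbitrary units)
u_areas = np.array([8.0, 2.0, 2.0])     # standard uncertainties of the areas

def fraction(a):
    return a[0] / a.sum()

# GUM route: first-order law of propagation, u^2 = sum_i (df/da_i)^2 u_i^2,
# with the sensitivities obtained by finite differences
eps = 1e-3
grad = np.array([(fraction(areas + eps * np.eye(3)[i]) - fraction(areas)) / eps
                 for i in range(3)])
u_gum = np.sqrt(np.sum((grad * u_areas) ** 2))

# Monte Carlo route: sample the inputs and push them through the model
samples = rng.normal(areas, u_areas, size=(100_000, 3))
x_mc = samples[:, 0] / samples.sum(axis=1)
u_mc = x_mc.std(ddof=1)

print(f"u_GUM = {u_gum:.5f}, u_MC = {u_mc:.5f}")
```

For this nearly linear measurand the two routes agree closely; the study's point is that for more strongly nonlinear chromatographic data reduction, a simple GUM treatment can understate the Monte Carlo result.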
Estimation and uncertainty analysis of dose response in an inter-laboratory experiment
Toman, Blaza; Rösslein, Matthias; Elliott, John T.; Petersen, Elijah J.
2016-02-01
An inter-laboratory experiment for the evaluation of toxic effects of NH2-polystyrene nanoparticles on living human cancer cells was performed with five participating laboratories. Previously published results from nanocytotoxicity assays are often contradictory, mostly due to challenges related to producing a reliable cytotoxicity assay protocol for use with nanomaterials. Specific challenges include reproducibly preparing nanoparticle dispersions, biological variability from testing living cell lines, and the potential for nano-related interference effects. In this experiment, such challenges were addressed by developing a detailed experimental protocol and using a specially designed 96-well plate layout which incorporated a range of control measurements to assess multiple factors such as nanomaterial interference, pipetting accuracy, cell seeding density, and instrument performance. Detailed data analysis of these control measurements showed that good control of the experiments was attained by all participants in most cases. The main measurement objective of the study was the estimation of a dose response relationship between concentration of the nanoparticles and metabolic activity of the living cells, under several experimental conditions. The dose curve estimation was achieved by embedding a three-parameter logistic curve in a three-level Bayesian hierarchical model, accounting for uncertainty due to all known experimental conditions as well as between-laboratory variability in a top-down manner. Computation was performed using Markov Chain Monte Carlo methods. The fit of the model was evaluated using Bayesian posterior predictive probabilities and found to be satisfactory.
DEFF Research Database (Denmark)
Müller, Pavel; Hiller, Jochen; Dai, Y.
2014-01-01
This paper presents the application of the substitution method for the estimation of measurement uncertainties using calibrated workpieces in X-ray computed tomography (CT) metrology. We have shown that this well-accepted method for uncertainty estimation using tactile coordinate measuring machines can be applied to dimensional CT measurements. The method is based on repeated measurements carried out on a calibrated master piece. The master piece is a component of a dose engine from an insulin pen. Measurement uncertainties estimated from the repeated measurements of the master piece were...
Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.
Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja
2015-06-01
Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated measurement uncertainty of the concentration of glucose according to CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach. We identified the sources of uncertainty and made an uncertainty budget and assessed the measurement functions. We determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using certified reference material of glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up approach and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
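The bottom-up combination step described above can be sketched as a small uncertainty budget: independent standard uncertainties are combined in quadrature and then expanded with a coverage factor k = 2, as in GUM/EP29-A practice. The component names and values here are hypothetical, not those of the cited study.

```python
import math

# Hypothetical bottom-up uncertainty budget for a glucose measurement
# (component values are illustrative, not from the cited study).
components = {
    "repeatability":    0.05,  # mmol/L
    "calibrator_value": 0.04,
    "calibrator_fit":   0.03,
    "sample_volume":    0.02,
}

# Combine independent standard uncertainties in quadrature (root-sum-square)
u_combined = math.sqrt(sum(u**2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (~95 % coverage)
k = 2
U = k * u_combined
print(f"u_c = {u_combined:.3f} mmol/L, U (k=2) = {U:.3f} mmol/L")
```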
Simultaneous Experimentation as a Learning Strategy: Business Model Development Under Uncertainty
Andries, Petra; Debackere, Koenraad; van Looy, Bart
2013-01-01
Ventures operating under uncertainty face challenges defining a sustainable value proposition. Six longitudinal case studies reveal two approaches to business model development: focused commitment and simultaneous experimentation. While focused commitment positively affects initial growth, this
Ali, E S M; Spencer, B; McEwen, M R; Rogers, D W O
2015-02-21
In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy, i.e. 100 keV (orthovoltage) to 25 MeV, using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ∼0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative 'envelope of uncertainty' of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol. 44 R1-22).
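The "optimum energy-independent scaling factor" idea can be illustrated with a least-squares sketch. Note the simplification: here the factor is fitted linearly between calculated and measured values, whereas in the study the factor scales the cross section inside the attenuation exponent; all numbers are invented for illustration.

```python
import numpy as np

# Illustrative measured vs. calculated transmission-like values
# (made-up numbers, not the datasets from the study).
measured = np.array([0.412, 0.301, 0.225, 0.168, 0.124])
calculated = np.array([0.410, 0.299, 0.226, 0.170, 0.123])

# A single energy-independent scale s applied to the calculated values;
# the optimum s minimizes sum((s * calc - meas)^2), which gives in closed form:
s_opt = np.sum(measured * calculated) / np.sum(calculated**2)

# |s_opt - 1| is then read as the fractional discrepancy between data and model.
print(f"optimal scale = {s_opt:.5f}, deduced discrepancy = {abs(s_opt - 1) * 100:.3f} %")
```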
Leaf area index uncertainty estimates for model-data fusion applications
Andrew D. Richardson; D. Bryan Dail; D.Y. Hollinger
2011-01-01
Estimates of data uncertainties are required to integrate different observational data streams as model constraints using model-data fusion. We describe an approach with which random and systematic uncertainties in optical measurements of leaf area index (LAI) can be quantified. We use data from a measurement campaign at the spruce-dominated Howland Forest AmeriFlux...
A Stochastic Method for Estimating the Effect of Isotopic Uncertainties in Spent Nuclear Fuel
Energy Technology Data Exchange (ETDEWEB)
DeHart, M.D.
2001-08-24
This report describes a novel approach developed at the Oak Ridge National Laboratory (ORNL) for the estimation of the uncertainty in the prediction of the neutron multiplication factor for spent nuclear fuel. This technique focuses on burnup credit, where credit is taken in criticality safety analysis for the reduced reactivity of fuel irradiated in and discharged from a reactor. Validation methods for burnup credit have attempted to separate the uncertainty associated with isotopic prediction methods from that of criticality eigenvalue calculations. Biases and uncertainties obtained in each step are combined additively. This approach, while conservative, can be excessive because of the physical assumptions employed. This report describes a statistical approach based on Monte Carlo sampling to directly estimate the total uncertainty in eigenvalue calculations resulting from uncertainties in isotopic predictions. The results can also be used to demonstrate the relative conservatism and statistical confidence associated with the method of additively combining uncertainties. This report does not make definitive conclusions on the magnitude of biases and uncertainties associated with isotopic predictions in a burnup credit analysis. These terms will vary depending on system design and the set of isotopic measurements used as a basis for estimating isotopic variances. Instead, the report describes a method that can be applied with a given design and set of isotopic data for estimating design-specific biases and uncertainties.
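The contrast between additively combining component uncertainties and estimating the total by Monte Carlo sampling can be shown with a toy linear-sensitivity surrogate for the eigenvalue. All sensitivity coefficients and uncertainties below are invented for illustration, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate: k_eff responds linearly to relative perturbations of a few
# nuclide concentrations through sensitivity coefficients (all values hypothetical).
sens  = np.array([-0.015, -0.008, 0.012, 0.020])  # dk per unit relative isotopic change
u_iso = np.array([0.03, 0.05, 0.02, 0.04])        # relative isotopic uncertainties (1 sigma)
u_code = 0.005                                    # criticality-calculation uncertainty

# Additive (bounding) combination: worst-case sum of component effects plus code term
u_additive = np.sum(np.abs(sens) * u_iso) + u_code

# Statistical approach: sample isotopic errors jointly, propagate directly,
# then combine with the code term in quadrature
dk = (rng.normal(0.0, u_iso, size=(200_000, 4)) * sens).sum(axis=1)
u_mc = np.sqrt(dk.std(ddof=1)**2 + u_code**2)

print(f"additive combination: {u_additive:.5f}, Monte Carlo: {u_mc:.5f}")
```

The sampled total is smaller than the additive bound, which is the sense in which the additive approach is conservative.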
Inaccuracy and Uncertainty in Estimates of College Student Suicide Rates.
Schwartz, Allan J.
1980-01-01
Inaccurate sample data and uncertain estimates are defined as obstacles to assessing the suicide rate among college students. A standardization of research and reporting services is recommended. (JMF)
Directory of Open Access Journals (Sweden)
Maria Isabel Neria-González
2015-04-01
Full Text Available The main goal of this work is to present an alternative design of a class of nonlinear controller for tracking trajectories in a class of continuous bioreactor. It is assumed that the reaction rate of the controlled variable is unknown; therefore, an uncertainty estimator is proposed to infer this important term, and the observer is coupled with a class of nonlinear feedback. The considered controller contains a class of continuous sigmoid feedback in order to provide a smooth closed-loop response of the considered bioreactor. A kinetic model of a sulfate-reducing system is experimentally corroborated and employed as a benchmark for further modeling and simulation of the continuous operation. A linear PI controller, a class of sliding-mode controller and the proposed one are compared, and it is shown that the proposed controller yields the best performance. The closed-loop behavior of the process is analyzed via numerical experiments.
Employing Sensitivity Derivatives to Estimate Uncertainty Propagation in CFD
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III
2004-01-01
Two methods that exploit the availability of sensitivity derivatives are successfully employed to predict uncertainty propagation through Computational Fluid Dynamics (CFD) code for an inviscid airfoil problem. An approximate statistical second-moment method and a Sensitivity Derivative Enhanced Monte Carlo (SDEMC) method are successfully demonstrated on a two-dimensional problem. First- and second-order sensitivity derivatives of code output with respect to code input are obtained through an efficient incremental iterative approach. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables); these sensitivity derivatives enable one to formulate first- and second-order Taylor Series approximations for the mean and variance of CFD output quantities. Additionally, incorporation of the first-order sensitivity derivatives into the data reduction phase of a conventional Monte Carlo (MC) simulation allows for improved accuracy in determining the first moment of the CFD output. Both methods are compared to results generated using a conventional MC method. The methods that exploit the availability of sensitivity derivatives are found to be valid when considering small deviations from input mean values.
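The two moment approximations named above (first- and second-order Taylor series built from sensitivity derivatives) can be checked against conventional Monte Carlo on a toy "code output". The quadratic function and input statistics below are illustrative stand-ins, not the airfoil problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the CFD output: a quadratic function whose first- and
# second-order sensitivity derivatives at the input mean are known exactly.
mu, sigma = 1.0, 0.05            # input mean and standard deviation

def f(x):
    return x**2 + 0.5 * x        # "code output"

df  = 2.0 * mu + 0.5             # first-order sensitivity derivative at mu
d2f = 2.0                        # second-order sensitivity derivative

# Taylor-series moment approximations:
#   E[f]   ~ f(mu) + 0.5 * f''(mu) * sigma^2   (second order)
#   Var[f] ~ f'(mu)^2 * sigma^2                (first order)
mean_taylor = f(mu) + 0.5 * d2f * sigma**2
var_taylor = df**2 * sigma**2

# Conventional Monte Carlo for comparison
x = rng.normal(mu, sigma, 200_000)
mean_mc, var_mc = f(x).mean(), f(x).var(ddof=1)

print(f"Taylor: mean={mean_taylor:.4f}, var={var_taylor:.6f}; "
      f"MC: mean={mean_mc:.4f}, var={var_mc:.6f}")
```

For small input deviations the Taylor moments track the Monte Carlo results well, which is the regime in which the paper finds the derivative-based methods valid.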
Estimating Uncertainties in the Multi-Instrument SBUV Profile Ozone Merged Data Set
Frith, Stacey; Stolarski, Richard
2015-01-01
The MOD data set is uniquely qualified for use in long-term ozone analysis because of its long record, high spatial coverage, and consistent instrument design and algorithm. The estimated MOD uncertainty term significantly increases the uncertainty over the statistical error alone. Trends in the post-2000 period are generally positive in the upper stratosphere, but only significant at 1-1.6 hPa. Remaining uncertainties not yet included in the Monte Carlo model are: smoothing error (~1 from 10 to 1 hPa); relative calibration uncertainty between N11 and N17; and seasonal cycle differences between SBUV records.
Estimation of Data Uncertainty Adjustment Parameters for Multivariate Earth Rotation Series
Sung, Li-yu; Steppe, J. Alan
1994-01-01
We have developed a maximum likelihood method to estimate a set of data uncertainty adjustment parameters, including scaling factors and additive variances and covariances, for multivariate Earth rotation series.
Effects of uncertainty in model predictions of individual tree volume on large area volume estimates
Ronald E. McRoberts; James A. Westfall
2014-01-01
Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...
DEFF Research Database (Denmark)
Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2011-01-01
the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width...
Ćelap, Ivana; Vukasović, Ines; Juričić, Gordana; Šimundić, Ana-Maria
2017-10-15
The International vocabulary of metrology - Basic and general concepts and associated terms (VIM3, 2.26 measurement uncertainty, JCGM 200:2012) defines uncertainty of measurement as a non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information obtained from performing the measurement. The Clinical and Laboratory Standards Institute (CLSI) has published a very detailed guideline with a description of sources contributing to measurement uncertainty as well as different approaches for the calculation (Expression of measurement uncertainty in laboratory medicine; Approved Guideline, CLSI C51-A 2012). Many other national and international recommendations and original scientific papers about measurement uncertainty estimation have been published. In Croatia, the estimation of measurement uncertainty is obligatory for accredited medical laboratories. However, since national recommendations are currently not available, each of these laboratories uses a different approach in measurement uncertainty estimation. The main purpose of this document is to describe the minimal requirements for measurement uncertainty estimation. In this way, it will contribute to the harmonization of measurement uncertainty estimation, evaluation and reporting across laboratories in Croatia. This recommendation is issued by the joint Working group for uncertainty of measurement of the Croatian Society for Medical Biochemistry and Laboratory Medicine and the Croatian Chamber of Medical Biochemists. The document is based mainly on the recommendations of the Australasian Association of Clinical Biochemists (AACB) Uncertainty of Measurement Working Group and is intended for all medical biochemistry laboratories in Croatia.
Estimating the uncertainty of the liquid mass flow using the orifice plate
Directory of Open Access Journals (Sweden)
Golijanek-Jędrzejczyk Anna
2017-01-01
Full Text Available The article presents estimation of the measurement uncertainty of a liquid mass flow using the orifice plate. This subject is essential because flow meters of this type are widely used, so both the quantitative results and their quality are important. To achieve this goal, the authors of the paper propose to use the theory of uncertainty. The article shows the analysis of the measurement uncertainty using two methods: one based on the “Guide to the expression of uncertainty in measurement” (GUM) of the International Organization for Standardization, with the use of the law of propagation of uncertainty, and the second one using the Monte Carlo numerical method. The paper presents a comparative analysis of the results obtained with both of these methods.
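A minimal sketch of the two propagation routes for an ISO 5167-style orifice equation, qm = C/√(1−β⁴) · ε · (π/4)d² · √(2Δpρ). All input values and uncertainties are illustrative assumptions, not those of the article.

```python
import math
import numpy as np

rng = np.random.default_rng(7)

# ISO 5167-style orifice-plate mass flow; input values and uncertainties
# below are illustrative assumptions, not data from the cited paper.
C,    u_C  = 0.606, 0.003    # discharge coefficient
eps_, u_e  = 0.998, 0.001    # expansibility factor (close to 1 for liquids)
d,    u_d  = 0.050, 1.0e-4   # orifice diameter [m]
beta       = 0.5             # diameter ratio d/D (treated as exact here)
dp,   u_dp = 2.5e4, 2.0e2    # differential pressure [Pa]
rho,  u_r  = 998.0, 1.0      # liquid density [kg/m^3]

def qm(C_, e_, d_, dp_, rho_):
    """qm = C / sqrt(1 - beta^4) * eps * pi/4 * d^2 * sqrt(2 * dp * rho)."""
    return C_ / np.sqrt(1.0 - beta**4) * e_ * np.pi / 4.0 * d_**2 * np.sqrt(2.0 * dp_ * rho_)

q0 = qm(C, eps_, d, dp, rho)

# GUM law of propagation for this product form (relative sensitivities)
rel = math.sqrt((u_C / C)**2 + (u_e / eps_)**2 + (2 * u_d / d)**2
                + (u_dp / (2 * dp))**2 + (u_r / (2 * rho))**2)
u_gum = rel * q0

# Monte Carlo propagation for comparison
n = 100_000
qs = qm(rng.normal(C, u_C, n), rng.normal(eps_, u_e, n), rng.normal(d, u_d, n),
        rng.normal(dp, u_dp, n), rng.normal(rho, u_r, n))
u_mc = qs.std(ddof=1)

print(f"qm = {q0:.3f} kg/s, u_GUM = {u_gum:.4f}, u_MC = {u_mc:.4f}")
```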
Energy Technology Data Exchange (ETDEWEB)
Reda, I.
2011-07-01
The uncertainty of measuring solar irradiance is fundamentally important for solar energy and atmospheric science applications. Without an uncertainty statement, the quality of a result, model, or testing method cannot be quantified, the chain of traceability is broken, and confidence cannot be maintained in the measurement. Measurement results are incomplete and meaningless without a statement of the estimated uncertainty with traceability to the International System of Units (SI) or to another internationally recognized standard. This report explains how to use International Guidelines of Uncertainty in Measurement (GUM) to calculate such uncertainty. The report also shows that without appropriate corrections to solar measuring instruments (solar radiometers), the uncertainty of measuring shortwave solar irradiance can exceed 4% using present state-of-the-art pyranometers and 2.7% using present state-of-the-art pyrheliometers. Finally, the report demonstrates that by applying the appropriate corrections, uncertainties may be reduced by at least 50%. The uncertainties, with or without the appropriate corrections might not be compatible with the needs of solar energy and atmospheric science applications; yet, this report may shed some light on the sources of uncertainties and the means to reduce overall uncertainty in measuring solar irradiance.
On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo
Icardi, Matteo
2016-02-08
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [2015. https://bitbucket.org/micardi/porescalemc.] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The
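The telescoping structure of a multilevel Monte Carlo estimator can be sketched with a synthetic quantity of interest: the coarse level supplies many cheap, biased samples, and coupled level differences correct the bias with rapidly shrinking variance, so only a few expensive fine-level samples are needed. The bias and variance decay model below is invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

TRUE = 1.0  # "exact" effective parameter in this synthetic setting

def coarse_samples(n):
    """Level 0: cheap but biased and noisy samples of the quantity of interest."""
    return TRUE + 0.3 + rng.normal(0.0, 0.2, n)

def correction_samples(level, n):
    """Coupled difference Q_l - Q_(l-1): its mean removes part of the coarse
    bias and its variance shrinks with level (synthetic decay model)."""
    bias_f, bias_c = 0.3 * 2.0**-level, 0.3 * 2.0**-(level - 1)
    return (bias_f - bias_c) + rng.normal(0.0, 0.05 * 2.0**-level, n)

# Telescoping MLMC estimator: many cheap coarse samples, few expensive fine ones
n_l = [100_000, 20_000, 5_000, 1_000]
est = coarse_samples(n_l[0]).mean()
for level in range(1, len(n_l)):
    est += correction_samples(level, n_l[level]).mean()

print(f"MLMC estimate = {est:.4f} (remaining finest-level bias = 0.3 * 2**-3 = 0.0375)")
```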
Variations of China's emission estimates: response to uncertainties in energy statistics
Hong, Chaopeng; Zhang, Qiang; He, Kebin; Guan, Dabo; Li, Meng; Liu, Fei; Zheng, Bo
2017-01-01
The accuracy of China's energy statistics is of great concern because it contributes greatly to the uncertainties in estimates of global emissions. This study attempts to improve the understanding of uncertainties in China's energy statistics and evaluate their impacts on China's emissions during the period of 1990-2013. We employed the Multi-resolution Emission Inventory for China (MEIC) model to calculate China's emissions based on different official data sets of energy statistics using the same emission factors. We found that the apparent uncertainties (maximum discrepancy) in China's energy consumption increased from 2004 to 2012, reaching a maximum of 646 Mtce (million tons of coal equivalent) in 2011 and that coal dominated these uncertainties. The discrepancies between the national and provincial energy statistics were reduced after the three economic censuses conducted during this period, and converging uncertainties were found in 2013. The emissions calculated from the provincial energy statistics are generally higher than those calculated from the national energy statistics, and the apparent uncertainty ratio (the ratio of the maximum discrepancy to the mean value) owing to energy uncertainties in 2012 took values of 30.0, 16.4, 7.7, 9.2 and 15.6 %, for SO2, NOx, VOC, PM2.5 and CO2 emissions, respectively. SO2 emissions are most sensitive to energy uncertainties because of the high contributions from industrial coal combustion. The calculated emission trends are also greatly affected by energy uncertainties - from 1996 to 2012, CO2 and NOx emissions, respectively, increased by 191 and 197 % according to the provincial energy statistics but by only 145 and 139 % as determined from the original national energy statistics. The energy-induced emission uncertainties for some species such as SO2 and NOx are comparable to total uncertainties of emissions as estimated by previous studies, indicating variations in energy consumption could be an important source of
Varvia, Petri; Rautiainen, Miina; Seppänen, Aku
2017-04-01
Hyperspectral remote sensing data carry information on the leaf area index (LAI) of forests, and thus in principle, LAI can be estimated based on the data by inverting a forest reflectance model. However, LAI is usually not the only unknown in a reflectance model; especially, the leaf spectral albedo and understory reflectance are also not known. If the uncertainties of these parameters are not accounted for, the inversion of a forest reflectance model can lead to biased estimates for LAI. In this paper, we study the effects of reflectance model uncertainties on LAI estimates, and further, investigate whether the LAI estimates could recover from these uncertainties with the aid of Bayesian inference. In the proposed approach, the unknown leaf albedo and understory reflectance are estimated simultaneously with LAI from hyperspectral remote sensing data. The feasibility of the approach is tested with numerical simulation studies. The results show that in the presence of unknown parameters, the Bayesian LAI estimates which account for the model uncertainties outperform the conventional estimates that are based on biased model parameters. Moreover, the results demonstrate that the Bayesian inference can also provide feasible measures for the uncertainty of the estimated LAI.
Directory of Open Access Journals (Sweden)
Eleanor S Devenish Nelson
Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
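The resampling idea described above (inferring uncertainty for point-estimated vital rates, then reading off confidence intervals for population growth) can be sketched with a two-stage projection matrix. The matrix structure, rates, and sample sizes below are hypothetical, not the red fox data of the study; the run also illustrates the abstract's 1/√n point that quadrupling sampling effort roughly halves the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(5)

def growth_rate(s_j, s_a, f):
    """Leading eigenvalue of a 2-stage projection matrix [[0, f], [s_j, s_a]]."""
    A = np.array([[0.0, f], [s_j, s_a]])
    return np.linalg.eigvals(A).real.max()

def ci_width(n, reps=2000):
    """95% CI width for lambda when each vital rate is estimated from n individuals."""
    lams = []
    for _ in range(reps):
        s_j = rng.binomial(n, 0.4) / n            # juvenile survival estimate
        s_a = rng.binomial(n, 0.7) / n            # adult survival estimate
        f = rng.normal(1.5, 0.5 / np.sqrt(n))     # fecundity estimate
        lams.append(growth_rate(s_j, s_a, f))
    lo, hi = np.percentile(lams, [2.5, 97.5])
    return hi - lo

# Quadrupling the sampling effort roughly halves the CI width (1/sqrt(n) scaling)
w_small, w_large = ci_width(50), ci_width(200)
print(f"95% CI width: n=50 -> {w_small:.3f}, n=200 -> {w_large:.3f}")
```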
Directory of Open Access Journals (Sweden)
Dengsheng Lu
2012-01-01
Full Text Available Landsat Thematic Mapper (TM) imagery has long been the dominant data source, and recently LiDAR has offered an important new structural data stream for forest biomass estimation. On the other hand, forest biomass uncertainty analysis research has only recently obtained sufficient attention due to the difficulty in collecting reference data. This paper provides a brief overview of current forest biomass estimation methods using both TM and LiDAR data. A case study is then presented that demonstrates the forest biomass estimation methods and uncertainty analysis. Results indicate that Landsat TM data can provide adequate biomass estimates for secondary succession but are not suitable for mature forest biomass estimates due to data saturation problems. LiDAR can overcome TM's shortcoming, providing better biomass estimation performance, but has not been extensively applied in practice due to data availability constraints. The uncertainty analysis indicates that various sources affect the performance of forest biomass/carbon estimation. That said, the clearly dominant sources of uncertainty are the variation in input sample plot data and the data saturation problem of optical sensors. A possible solution to increasing the confidence in forest biomass estimates is to integrate the strengths of multisensor data.
Entropy Evolution and Uncertainty Estimation with Dynamical Systems
Directory of Open Access Journals (Sweden)
X. San Liang
2014-06-01
Full Text Available This paper presents a comprehensive introduction and systematic derivation of the evolutionary equations for absolute entropy H and relative entropy D, some of which exist sporadically in the literature in different forms under different subjects, within the framework of dynamical systems. In general, both H and D are dissipated, and the dissipation bears a form reminiscent of the Fisher information; in the absence of stochasticity, dH/dt is connected to the rate of phase space expansion, and D stays invariant, i.e., the separation of two probability density functions is always conserved. These formulas are validated with linear systems and applied to the Lorenz system and a large-dimensional stochastic quasi-geostrophic flow problem. In the Lorenz case, H falls at a constant rate with time, implying that H will eventually become negative, a situation beyond the capability of commonly used computational techniques such as coarse-graining and bin counting. For the stochastic flow problem, it is first reduced to a computationally tractable low-dimensional system using a reduced model approach, and then handled through ensemble prediction. Both the Lorenz system and the stochastic flow system are examples of self-organization in the light of uncertainty reduction. The latter particularly shows that stochasticity may sometimes actually enhance the self-organization process.
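The stated link between dH/dt and phase-space expansion has a compact standard form for deterministic dynamics; the following is a reconstruction under that standard setting (Liouville evolution of the density), not necessarily the paper's exact notation.

```latex
% For a deterministic system \dot{x} = F(x,t), the density \rho obeys the
% Liouville equation, and the absolute entropy H = -\int \rho \ln\rho \, dx
% then evolves at the expected rate of phase-space expansion:
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho F) = 0,
\qquad
\frac{dH}{dt}
  = -\frac{d}{dt}\int \rho \ln\rho \, dx
  = \mathbb{E}\left( \nabla \cdot F \right).
```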
Debate on Uncertainty in Estimating Bathing Water Quality
DEFF Research Database (Denmark)
Larsen, Torben
1992-01-01
Estimating the bathing water quality along the shore near a planned sewage discharge requires data on the source strength of bacteria, the die-off of bacteria and the actual dilution of the sewage. Together these 3 factors give the actual concentration of bacteria on the interesting spots...
Energy Technology Data Exchange (ETDEWEB)
Miller, C.; Little, C.A.
1982-08-01
The purpose is to summarize estimates, based on currently available data, of the uncertainty associated with radiological assessment models. The models examined herein are those recommended previously for use in breeder reactor assessments. Uncertainty estimates are presented for models of atmospheric and hydrologic transport, terrestrial and aquatic food-chain bioaccumulation, and internal and external dosimetry. Both long-term and short-term release conditions are discussed. The uncertainty estimates presented in this report indicate that, for many sites, generic models and representative parameter values may be used to calculate doses from annual average radionuclide releases when these calculated doses are on the order of one-tenth or less of a relevant dose limit. For short-term, accidental releases, especially those from breeder reactors located at sites dominated by complex terrain and/or coastal meteorology, the uncertainty in the dose calculations may be much larger than an order of magnitude. As a result, it may be necessary to incorporate site-specific information into the dose calculation under these circumstances to reduce this uncertainty. However, even using site-specific information, natural variability and the uncertainties in the dose conversion factor will likely result in an overall uncertainty of greater than an order of magnitude for predictions of dose or concentration in environmental media following short-term releases.
Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE)
2011-12-01
The method described in this report synthesizes scenario building, Bayesian Belief Network (BBN) modeling, and Monte Carlo simulation into an ...
Indirect methods of tree biomass estimation and their uncertainties ...
African Journals Online (AJOL)
Depending on data availability (dbh only or both dbh and total tree height) either of the models may be applied to generate satisfactory estimates of tree volume needed for planning and decision-making in management of mangrove forests. The study found an overall mean FF value of 0.65 ± 0.03 (SE), 0.56 ± 0.03 (SE) and ...
Cost estimate of initial SSC experimental equipment
Energy Technology Data Exchange (ETDEWEB)
NONE
1986-06-01
The cost of the initial detector complement at recently constructed colliding beam facilities (or at those under construction) has been a significant fraction of the cost of the accelerator complex. Because of the complexity of large modern-day detectors, the time-scale for their design and construction is comparable to the time-scale needed for accelerator design and construction. For these reasons it is appropriate to estimate the cost of the anticipated detector complement in parallel with the cost estimates of the collider itself. The fundamental difficulty with this procedure is that, whereas a firm conceptual design of the collider does exist, comparable information is unavailable for the detectors. Traditionally, these have been built by the high energy physics user community according to their perception of the key scientific problems that need to be addressed. The role of the accelerator laboratory in that process has involved technical and managerial coordination and the allocation of running time and local facilities among the proposed experiments. It seems proper that the basic spirit of experimentation reflecting the scientific judgment of the community should be preserved at the SSC. Furthermore, the formal process of initiation of detector proposals can only start once the SSC has been approved as a construction project and a formal laboratory administration put in place. Thus an ad hoc mechanism had to be created to estimate the range of potential detector needs, potential detector costs, and associated computing equipment.
Krishnan Kutty, S.; Sekhar, M.; Ruiz, L.; Tomer, S. K.; Bandyopadhyay, S.; Buis, S.; Guerif, M.; Gascuel-odoux, C.
2012-12-01
Groundwater recharge in a semi-arid region is generally low, but can exhibit high spatial variability depending on the soil type and plant cover. The potential recharge (the drainage flux just beneath the root zone) is found to be sensitive to water holding capacity and rooting depth (Rushton, 2003). Simple water balance approaches to recharge estimation often fail to consider the effect of plant cover, growth phases and rooting depth. Hence a crop-model-based approach might be better suited to assess the sensitivity of recharge for various crop-soil combinations in agricultural catchments. Martinez et al. (2009), using a root zone modelling approach to estimate groundwater recharge, stressed that future studies should focus on quantifying the uncertainty in recharge estimates due to uncertainty in soil water parameters such as soil layers, field capacity, rooting depth, etc. Uncertainty in the parameters may arise from the uncertainties in variables (surface soil moisture and leaf area index) retrieved from satellite data. Hence a good estimate of the parameters, as well as of their uncertainty, is essential for a reliable estimate of the potential recharge. In this study we focus on assessing the sensitivity of crop and soil types on the potential recharge by using a generic crop model, STICS. The effect of uncertainty in the soil parameters on the estimates of recharge and its uncertainty is investigated. The multi-layer soil water parameters and their uncertainty are estimated by inversion of the STICS model using the GLUE approach. Surface soil moisture and LAI, either retrieved from microwave remote sensing data or measured in field plots (Sreelash et al., 2012), were found to provide good estimates of the soil water properties, and therefore both these data sets were used in this study to estimate the parameters and the potential recharge for a combination of soil-crop systems. These investigations were made in two field experimental catchments. The first one is in the tropical semi
Han, Paul K. J.; Klein, William M. P.; Lehman, Tom; Killam, Bill; Massett, Holly; Freedman, Andrew N.
2011-01-01
Objective To examine the effects of communicating uncertainty regarding individualized colorectal cancer risk estimates, and to identify factors that influence these effects. Methods Two web-based experiments were conducted, in which adults aged 40 years and older were provided with hypothetical individualized colorectal cancer risk estimates differing in the extent and representation of expressed uncertainty. The uncertainty consisted of imprecision (otherwise known as “ambiguity”) of the risk estimates, and was communicated using different representations of confidence intervals. Experiment 1 (n=240) tested the effects of ambiguity (confidence interval vs. point estimate) and representational format (textual vs. visual) on cancer risk perceptions and worry. Potential effect modifiers including personality type (optimism), numeracy, and the information’s perceived credibility were examined, along with the influence of communicating uncertainty on responses to comparative risk information. Experiment 2 (n=135) tested enhanced representations of ambiguity that incorporated supplemental textual and visual depictions. Results Communicating uncertainty led to heightened cancer-related worry in participants, exemplifying the phenomenon of “ambiguity aversion.” This effect was moderated by representational format and dispositional optimism; textual (vs. visual) format and low (vs. high) optimism were associated with greater ambiguity aversion. However, when enhanced representations were used to communicate uncertainty, textual and visual formats showed similar effects. Both the communication of uncertainty and use of the visual format diminished the influence of comparative risk information on risk perceptions. Conclusions The communication of uncertainty regarding cancer risk estimates has complex effects, which include heightening cancer-related worry—consistent with ambiguity aversion—and diminishing the influence of comparative risk information on risk
Han, Paul K J; Klein, William M P; Lehman, Tom; Killam, Bill; Massett, Holly; Freedman, Andrew N
2011-01-01
To examine the effects of communicating uncertainty regarding individualized colorectal cancer risk estimates and to identify factors that influence these effects. Two Web-based experiments were conducted, in which adults aged 40 years and older were provided with hypothetical individualized colorectal cancer risk estimates differing in the extent and representation of expressed uncertainty. The uncertainty consisted of imprecision (otherwise known as "ambiguity") of the risk estimates and was communicated using different representations of confidence intervals. Experiment 1 (n = 240) tested the effects of ambiguity (confidence interval v. point estimate) and representational format (textual v. visual) on cancer risk perceptions and worry. Potential effect modifiers, including personality type (optimism), numeracy, and the information's perceived credibility, were examined, along with the influence of communicating uncertainty on responses to comparative risk information. Experiment 2 (n = 135) tested enhanced representations of ambiguity that incorporated supplemental textual and visual depictions. Communicating uncertainty led to heightened cancer-related worry in participants, exemplifying the phenomenon of "ambiguity aversion." This effect was moderated by representational format and dispositional optimism; textual (v. visual) format and low (v. high) optimism were associated with greater ambiguity aversion. However, when enhanced representations were used to communicate uncertainty, textual and visual formats showed similar effects. Both the communication of uncertainty and use of the visual format diminished the influence of comparative risk information on risk perceptions. The communication of uncertainty regarding cancer risk estimates has complex effects, which include heightening cancer-related worry-consistent with ambiguity aversion-and diminishing the influence of comparative risk information on risk perceptions. These responses are influenced by
Effect of Uncertainties in Physical Property Estimates on Process Design - Sensitivity Analysis
DEFF Research Database (Denmark)
Hukkerikar, Amol; Jones, Mark Nicholas; Sin, Gürkan
can arise from the experiments themselves or from the property models employed. It is important to consider the effect of these uncertainties on the process design in order to assess the quality and reliability of the final design. The main objective of this work is to develop a systematic methodology...... analysis was performed to evaluate the effect of these uncertainties on the process design. The developed methodology was applied to evaluate the effect of uncertainties in the property estimates on design of different unit operations such as extractive distillation, short path evaporator, equilibrium......, the operating conditions, and the choice of the property prediction models, the input uncertainties resulted in significant uncertainties in the final design. The developed methodology was able to: (i) assess the quality of final design; (ii) identify pure component and mixture properties of critical importance...
Statistical characterization of roughness uncertainty and impact on wind resource estimation
DEFF Research Database (Denmark)
Kelly, Mark C.; Ejsing Jørgensen, Hans
2017-01-01
arising from differing wind-observation and turbine-prediction sites; this is done for the case of roughness bias as well as for the general case. For estimation of uncertainty in annual energy production (AEP), we also develop a generalized analytical turbine power curve, from which we derive a relation......In this work we relate uncertainty in background roughness length (z0) to uncertainty in wind speeds, where the latter are predicted at a wind farm location based on wind statistics observed at a different site. Sensitivity of predicted winds to roughness is derived analytically for the industry......-standard European Wind Atlas method, which is based on the geostrophic drag law. We statistically consider roughness and its corresponding uncertainty, in terms of both z0 derived from measured wind speeds as well as that chosen in practice by wind engineers. We show the combined effect of roughness uncertainty...
Lidar-derived estimate and uncertainty of carbon sink in successional phases of woody encroachment
Woody encroachment is a globally occurring phenomenon that is thought to contribute significantly to the global carbon (C) sink. The C contribution needs to be estimated at regional and local scales to address large uncertainties present in the global- and continental-scale estimates and guide regio...
A novel method to estimate model uncertainty using machine learning techniques
Solomatine, D.P.; Lal Shrestha, D.
2009-01-01
A novel method is presented for model uncertainty estimation using machine learning techniques and its application in rainfall runoff modeling. In this method, first, the probability distribution of the model error is estimated separately for different hydrological situations and second, the
Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.
2013-01-01
Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.
Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller
2015-01-01
An approach is proposed for estimating the uncertainty in measurement, considering the individual sources related to the different steps of the method under evaluation as well as the uncertainties estimated from the validation data, for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS). The method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance, and has been accredited under the ISO/IEC 17025 norm since 2010. Following the validation process, the estimate of the uncertainty in measurement was based on six sources of uncertainty for mercury determination in seafood by TDA AAS: linear least-squares regression, repeatability, intermediate precision, correction factor of the analytical curve, sample mass, and standard reference solution. Those that most influenced the uncertainty in measurement were sample mass, repeatability, intermediate precision and the calibration curve. The estimate of uncertainty in measurement obtained in the present work reached a value of 13.39%, which complies with European Regulation EC 836/2011. This figure represents a very realistic estimate of routine conditions, since it fairly encompasses the dispersion between the value attributed to the sample and the value measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on the calibration curve, recovery and precision), together with the variation in sample mass, can offer a proper estimate of uncertainty in measurement.
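Independent relative uncertainties from a per-source budget like the one above are conventionally combined in quadrature (the GUM approach). A minimal sketch, using the six source names from the abstract but with illustrative numbers that are not the laboratory's values:

```python
import math

# Illustrative relative standard uncertainties (%) for the six sources
# named in the abstract -- the values here are hypothetical.
sources = {
    "linear_regression": 4.0,
    "repeatability": 3.5,
    "intermediate_precision": 3.0,
    "curve_correction": 1.0,
    "sample_mass": 2.5,
    "reference_solution": 0.5,
}

def combined_relative_uncertainty(rel_uncertainties):
    """Combine independent relative standard uncertainties in quadrature."""
    return math.sqrt(sum(u ** 2 for u in rel_uncertainties))

u_c = combined_relative_uncertainty(sources.values())
U = 2 * u_c  # expanded uncertainty, coverage factor k = 2 (~95 % coverage)
print(f"combined: {u_c:.2f} %, expanded (k=2): {U:.2f} %")
```

With the laboratory's actual per-source values, the same combination would reproduce a single figure comparable to the 13.39% reported above.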
Schoups, Gerrit; Vrugt, Jasper A.
2010-05-01
Estimation of parameter and predictive uncertainty of hydrologic models usually relies on the assumption of additive residual errors that are independent and identically distributed according to a normal distribution with a mean of zero and a constant variance. Here, we investigate to what extent estimates of parameter and predictive uncertainty are affected when these assumptions are relaxed. Parameter and predictive uncertainty are estimated by Markov chain Monte Carlo sampling from a generalized likelihood function that accounts for correlation, heteroscedasticity, and non-normality of residual errors. Application to rainfall-runoff modeling using daily data from a humid basin reveals that: (i) residual errors are much better described by a heteroscedastic, first-order auto-correlated error model with a Laplacian density characterized by heavier tails than a Gaussian density, and (ii) proper representation of the statistical distribution of residual errors yields tighter predictive uncertainty bands and more physically realistic parameter estimates that are less sensitive to the particular time period used for inference. The latter is especially useful for regionalization and extrapolation of parameter values to ungauged basins. Application to daily rainfall-runoff data from a semi-arid basin shows that allowing skew in the error distribution yields improved estimates of predictive uncertainty when flows are close to zero.
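A stripped-down version of such a generalized log-likelihood, with first-order autocorrelated, heteroscedastic Laplace residuals, can be sketched as follows. This is a simplified illustration, not the full Schoups-Vrugt formulation, which additionally handles skew and kurtosis:

```python
import numpy as np

def gl_loglik(obs, sim, phi, sigma0, sigma1):
    """Log-likelihood for heteroscedastic, AR(1)-correlated Laplace residuals.

    obs, sim       : observed and simulated flow series (equal length)
    phi            : lag-1 autocorrelation of the raw residuals
    sigma0, sigma1 : error std. dev. modeled as sigma0 + sigma1 * sim
    """
    e = obs - sim                      # raw residuals
    eta = e[1:] - phi * e[:-1]         # AR(1)-filtered (decorrelated) residuals
    sigma = sigma0 + sigma1 * sim[1:]  # error scale grows with simulated flow
    b = sigma / np.sqrt(2.0)           # Laplace scale giving std. dev. sigma
    return np.sum(-np.log(2.0 * b) - np.abs(eta) / b)
```

In practice a function like this would be evaluated inside an MCMC sampler jointly over the hydrologic model parameters and the error-model parameters (phi, sigma0, sigma1).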
Spectrum of shear modes in the neutron-star crust: Estimating the nuclear-physics uncertainties
Tews, Ingo
2016-01-01
I construct a model of the inner crust of neutron stars using interactions from chiral effective field theory (EFT) in order to calculate its equation of state (EOS), shear properties, and the spectrum of crustal shear modes. I systematically study uncertainties associated with the nuclear physics input, the crust composition, and neutron entrainment, and estimate their impact on crustal shear properties and the shear-mode spectrum. I find that the uncertainties originate mainly in two source...
Uncertainty Estimation due to Geometrical Imperfection and Wringing in Calibration of End Standards
Salah H. R. Ali; Ihab H. Naeim
2013-01-01
Uncertainty in gauge block measurement depends on three major areas: thermal effects, the dimensional metrology system (which includes the measurement strategy), and end standard surface perfection grades. In this paper, we focus precisely on estimating the uncertainty due to the geometrical imperfection of measuring surfaces and the wringing gap in calibration of end standards of grade 0. An optomechanical system equipped with a Zygo measurement interferometer (ZMI-1000A) and an AFM technique have been employed. A nove...
Lista, L
2004-01-01
A procedure to include the uncertainty on the background estimate for upper limit calculations using Poissonian sampling is presented for the case where a Gaussian assumption on the uncertainty can be made. Under that hypothesis an analytic expression of the likelihood is derived which can be written in terms of polynomials defined by recursion. This expression may lead to a significant speed up of computing applications that extract the upper limits using Toy Monte Carlo.
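The toy Monte Carlo baseline that the analytic expression is meant to accelerate can be sketched as follows. All numbers here are hypothetical, and the simple scan below is an illustration of the marginalization over a Gaussian-uncertain background, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def cl_s_plus_b(s, n_obs, b0, sigma_b, n_toys=20_000):
    """Fraction of toy experiments with counts <= n_obs for signal s,
    marginalizing a Gaussian background uncertainty (truncated at 0)."""
    b = np.clip(rng.normal(b0, sigma_b, n_toys), 0.0, None)
    n = rng.poisson(s + b)
    return np.mean(n <= n_obs)

def upper_limit(n_obs, b0, sigma_b, cl=0.95, s_max=30.0, step=0.1):
    """Smallest signal excluded at the given CL (coarse linear scan)."""
    s = 0.0
    while s < s_max and cl_s_plus_b(s, n_obs, b0, sigma_b) > 1.0 - cl:
        s += step
    return s

# Hypothetical example: 3 observed events over an expected background 1.0 +/- 0.5
limit = upper_limit(n_obs=3, b0=1.0, sigma_b=0.5)
print(f"95% CL upper limit on s: ~{limit:.1f} events")
```

The analytic likelihood derived in the paper replaces this sampling loop, which is why it can significantly speed up applications that would otherwise extract limits with toy Monte Carlo.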
Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.
2017-09-01
The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar, therefore the following results applied to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant factor contributing to uncertainty in the DECT approach was the imaging uncertainties, followed by the DECT modeling uncertainties. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
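A site-level composite can be formed by weighting the per-tissue uncertainties by the proportion of each tissue group along the beam path. The sketch below uses a proportion-weighted quadrature sum with assumed weights; the study's exact weighting scheme may differ:

```python
import math

# Per-tissue relative SPR uncertainties (1 sigma, %) quoted in the abstract,
# and illustrative path-fraction weights for a hypothetical lung plan
# (the weights are assumptions, not the study's values).
u = {"lung": 3.8, "soft": 1.2, "bone": 2.0}
w = {"lung": 0.4, "soft": 0.5, "bone": 0.1}

def composite_uncertainty(u, w):
    """Proportion-weighted quadrature combination, assuming the per-tissue
    errors are independent between groups (a simplifying assumption)."""
    return math.sqrt(sum((w[k] * u[k]) ** 2 for k in u))

print(f"composite SPR uncertainty: {composite_uncertainty(u, w):.2f} %")
```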
Arnaud, Patrick; Cantet, Philippe; Odry, Jean
2017-11-01
Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with
Stream flow - its estimation, uncertainty and interaction with groundwater and floodplains
DEFF Research Database (Denmark)
Poulsen, Jane Bang
, floodplain hydraulics and sedimentation patterns has been investigated along a restored channel section of Odense stream, Denmark. Collected samples of deposited sediment, organic matter and phosphorus on the floodplain were compared with results from a 2D dynamic flow model. Three stage dependent flow...... examines stream flow – its estimation, uncertainty and interaction with groundwater and floodplains. Impacts of temporally varying hydraulic flow conditions on uncertainties in stream flow estimation have been investigated in the Holtum and Skjern streams, Denmark. Continuous monitoring of stream flow...... velocities was used to detect hydraulic changes in stream roughness and geometry. A stage-velocity-discharge (QHV) relation has been developed which is a new approach for hydrograph estimation that allows for continuous adjustment of the hydrograph according to roughness changes in the stream. Uncertainties...
Helder, Dennis; Thome, Kurtis John; Aaron, Dave; Leigh, Larry; Czapla-Myers, Jeff; Leisso, Nathan; Biggar, Stuart; Anderson, Nik
2012-01-01
A significant problem facing the optical satellite calibration community is limited knowledge of the uncertainties associated with fundamental measurements, such as surface reflectance, used to derive satellite radiometric calibration estimates. In addition, it is difficult to compare the capabilities of calibration teams around the globe, which leads to differences in the estimated calibration of optical satellite sensors. This paper reports on two recent field campaigns that were designed to isolate common uncertainties within and across calibration groups, particularly with respect to ground-based surface reflectance measurements. Initial results from these efforts suggest the uncertainties can be as low as 1.5% to 2.5%. In addition, methods for improving the cross-comparison of calibration teams are suggested that can potentially reduce the differences in the calibration estimates of optical satellite sensors.
Climate data induced uncertainty in model-based estimations of terrestrial primary productivity
Wu, Zhendong; Ahlström, Anders; Smith, Benjamin; Ardö, Jonas; Eklundh, Lars; Fensholt, Rasmus; Lehsten, Veiko
2017-06-01
Model-based estimations of historical fluxes and pools of the terrestrial biosphere differ substantially. These differences arise not only from differences between models but also from differences in the environmental and climatic data used as input to the models. Here we investigate the role of uncertainties in historical climate data by performing simulations of terrestrial gross primary productivity (GPP) using a process-based dynamic vegetation model (LPJ-GUESS) forced by six different climate datasets. We find that the climate-induced uncertainty, defined as the range among historical simulations in GPP when forcing the model with the different climate datasets, can be as high as 11 Pg C yr-1 globally (9% of mean GPP). We also assessed a hypothetical maximum climate data induced uncertainty by combining climate variables from different datasets, which resulted in significantly larger uncertainties of 41 Pg C yr-1 globally, or 32% of mean GPP. The uncertainty is partitioned into components associated with the three main climatic drivers: temperature, precipitation, and shortwave radiation. Additionally, we illustrate how the uncertainty due to a given climate driver depends both on the magnitude of the forcing data uncertainty (climate data range) and the apparent sensitivity of the modeled GPP to the driver (apparent model sensitivity). We find that LPJ-GUESS overestimates GPP compared to an empirically based GPP data product in all land cover classes except for tropical forests. Tropical forests emerge as a disproportionate source of uncertainty in GPP estimation both in the simulations and in the empirical data products. The tropical forest uncertainty is most strongly associated with shortwave radiation and precipitation forcing, for which the climate data range contributes more to the overall uncertainty than the apparent model sensitivity to forcing. Globally, precipitation dominates the climate-induced uncertainty over nearly half of the vegetated land area, which is mainly due
Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.
2014-01-01
Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.
Directory of Open Access Journals (Sweden)
K. J. Franz
2011-11-01
Full Text Available The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply the methods to evaluate and compare model ensembles produced from two different parameter uncertainty estimation methods: Generalized Likelihood Uncertainty Estimation (GLUE) and the Shuffle Complex Evolution Metropolis (SCEM). Model ensembles are generated for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the Southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information in addition to what is already available as part of traditional model validation methodology and considers the entire ensemble or uncertainty range in the approach.
Franz, K. J.; Hogue, T. S.
2011-11-01
The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply the methods to evaluate and compare model ensembles produced from two different parameter uncertainty estimation methods: Generalized Likelihood Uncertainty Estimation (GLUE) and the Shuffle Complex Evolution Metropolis (SCEM). Model ensembles are generated for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the Southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information in addition to what is already available as part of traditional model validation methodology and considers the entire ensemble or uncertainty range in the approach.
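One of the standard probabilistic verification scores referred to above is the continuous ranked probability score (CRPS), which for a finite ensemble has a simple closed-form estimator. A minimal sketch (the score names and numbers below are illustrative):

```python
import numpy as np

def crps_ensemble(ens, obs):
    """Empirical CRPS of a finite ensemble against a scalar observation:
    E|X - y| - 0.5 * E|X - X'|, estimated from ensemble members."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

# A sharp, well-centered ensemble scores lower (better) than a biased one.
print(crps_ensemble([9.5, 10.0, 10.5], 10.0))   # small score
print(crps_ensemble([14.5, 15.0, 15.5], 10.0))  # larger score
```

Averaged over many forecast times, scores like this summarize both the accuracy and the spread of a simulation ensemble, which is exactly the kind of information deterministic error or bias measures applied to the ensemble mean discard.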
Influence of parameter estimation uncertainty in Kriging: Part 1 - Theoretical Development
Directory of Open Access Journals (Sweden)
E. Todini
2001-01-01
Full Text Available This paper deals with a theoretical approach to assessing the effects of parameter estimation uncertainty both on Kriging estimates and on their estimated error variance. Although a comprehensive treatment of parameter estimation uncertainty is covered by full Bayesian Kriging at the cost of extensive numerical integration, the proposed approach has a wide field of application, given its relative simplicity. The approach is based upon a truncated Taylor expansion approximation and, within the limits of the proposed approximation, the conventional Kriging estimates are shown to be biased for all variograms, the bias depending upon the second order derivatives with respect to the parameters times the variance-covariance matrix of the parameter estimates. A new Maximum Likelihood (ML) estimator for semi-variogram parameters in ordinary Kriging, based upon the assumption of a multi-normal distribution of the Kriging cross-validation errors, is introduced as a means for the estimation of the parameter variance-covariance matrix. Keywords: Kriging, maximum likelihood, parameter estimation, uncertainty
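The bias statement above ("second order derivatives with respect to the parameters times the variance-covariance matrix of the parameter estimates") corresponds to the trace term of a second-order Taylor expansion; the notation below is chosen here for illustration, not taken from the paper.

```latex
% Second-order Taylor expansion of the Kriging estimate Z^* about the
% true variogram parameters \theta, with \hat\theta the estimated
% parameters and \Sigma_{\hat\theta} their variance-covariance matrix:
\mathbb{E}\!\left[ Z^*(\hat{\theta}) \right]
  \approx Z^*(\theta)
  + \frac{1}{2} \sum_{i,j}
      \frac{\partial^2 Z^*}{\partial \theta_i \, \partial \theta_j}
      \operatorname{Cov}\!\left( \hat{\theta}_i, \hat{\theta}_j \right)
  = Z^*(\theta)
  + \frac{1}{2} \operatorname{tr}\!\left( H_{Z^*}(\theta)\, \Sigma_{\hat{\theta}} \right)
```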
Directory of Open Access Journals (Sweden)
Stephen M Petrie
Full Text Available For in vivo studies of influenza dynamics where within-host measurements are fit with a mathematical model, infectivity assays (e.g. 50% tissue culture infectious dose; TCID50) are often used to estimate the infectious virion concentration over time. Less frequently, measurements of the total (infectious and non-infectious) viral particle concentration (obtained using real-time reverse transcription-polymerase chain reaction; rRT-PCR) have been used as an alternative to infectivity assays. We investigated the degree to which measuring both infectious (via TCID50) and total (via rRT-PCR) viral load allows within-host model parameters to be estimated with greater consistency and reduced uncertainty, compared with fitting to TCID50 data alone. We applied our models to viral load data from an experimental ferret infection study. Best-fit parameter estimates for the "dual-measurement" model are similar to those from the TCID50-only model, with greater consistency in best-fit estimates across different experiments, as well as reduced uncertainty in some parameter estimates. Our results also highlight how variation in TCID50 assay sensitivity and calibration may hinder model interpretation, as some parameter estimates systematically vary with known uncontrolled variations in the assay. Our techniques may aid in drawing stronger quantitative inferences from in vivo studies of influenza virus dynamics.
The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.
Energy Technology Data Exchange (ETDEWEB)
Poppeliers, Christian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-12-01
This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
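A much-simplified sketch of the population idea: several narrow-band estimates of the group arrival time yield a mean and spread. Gaussian frequency-domain filters stand in for the Slepian wavelet family, and the synthetic wavepacket parameters are invented for illustration.

```python
import numpy as np
from scipy.signal import hilbert

def group_velocity_estimates(trace, dt, distance_km, f0, bandwidths):
    """Estimate group velocity at centre frequency f0 from a single trace.

    One estimate per filter bandwidth; the spread of this population is a
    simple stand-in for the uncertainty the multiwavelet method provides
    (the real method uses a family of Slepian wavelets, not Gaussian filters).
    """
    n = len(trace)
    freqs = np.fft.rfftfreq(n, dt)
    spec = np.fft.rfft(trace)
    t = np.arange(n) * dt
    estimates = []
    for bw in bandwidths:
        filt = np.exp(-0.5 * ((freqs - f0) / bw) ** 2)  # narrow-band filter
        narrow = np.fft.irfft(spec * filt, n)
        env = np.abs(hilbert(narrow))                   # envelope
        t_arr = t[np.argmax(env)]                       # group arrival time
        estimates.append(distance_km / t_arr)
    est = np.array(estimates)
    return est.mean(), est.std(ddof=1)

# synthetic wavepacket: 0.1 Hz carrier arriving at t = 50 s, 150 km away
dt, n = 0.5, 4096
t = np.arange(n) * dt
trace = np.exp(-0.5 * ((t - 50.0) / 8.0) ** 2) * np.cos(2 * np.pi * 0.1 * (t - 50.0))
v_mean, v_std = group_velocity_estimates(trace, dt, 150.0, 0.1, [0.01, 0.02, 0.03])
print(v_mean)  # close to 150 km / 50 s = 3 km/s
```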
Uncertainty of feedback and state estimation determines the speed of motor adaptation
Directory of Open Access Journals (Sweden)
Kunlin Wei
2010-05-01
Full Text Available Humans can adapt their motor behaviors to deal with ongoing changes. To achieve this, the nervous system needs to estimate central variables for our movement based on past knowledge and new feedback, both of which are uncertain. In the Bayesian framework, rates of adaptation characterize how noisy feedback is in comparison to the uncertainty of the state estimate. The predictions of Bayesian models are intuitive: the nervous system should adapt slower when sensory feedback is more noisy and faster when its state estimate is more uncertain. Here we want to quantitatively understand how uncertainty in these two factors affects motor adaptation. In a hand reaching experiment we measured trial-by-trial adaptation to a randomly changing visual perturbation to characterize the way the nervous system handles uncertainty in state estimation and feedback. We found both qualitative predictions of Bayesian models confirmed. Our study provides evidence that the nervous system represents and uses uncertainty in state estimate and feedback during motor adaptation.
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Directory of Open Access Journals (Sweden)
Janet L. Rachlow
2013-08-01
Full Text Available United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data.
Küng, Alain; Meli, Felix; Nicolet, Anaïs; Thalmann, Rudolf
2014-09-01
Tactile ultra-precise coordinate measuring machines (CMMs) are very attractive for accurately measuring optical components with high slopes, such as aspheres. The METAS µ-CMM, which exhibits a single point measurement repeatability of a few nanometres, is routinely used for measurement services of microparts, including optical lenses. However, estimating the measurement uncertainty is very demanding. Because of the many combined influencing factors, an analytic determination of the uncertainty of parameters that are obtained by numerical fitting of the measured surface points is almost impossible. The application of numerical simulation (Monte Carlo methods) using a parametric fitting algorithm coupled with a virtual CMM based on a realistic model of the machine errors offers an ideal solution to this complex problem: to each measurement data point, a simulated measurement variation calculated from the numerical model of the METAS µ-CMM is added. Repeated several hundred times, these virtual measurements deliver the statistical data for calculating the probability density function, and thus the measurement uncertainty for each parameter. Additionally, any cross-correlation between parameters can be analyzed. This method can be applied for the calibration and uncertainty estimation of any parameter of the equation representing a geometric element. In this article, we present the numerical simulation model of the METAS µ-CMM and the application of a Monte Carlo method for the uncertainty estimation of measured asphere parameters.
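The Monte Carlo scheme described (perturb each measured point with the machine error model, refit, read the parameter spread) can be sketched as follows; the circle fit, noise level, and point layout are hypothetical stand-ins, not the METAS µ-CMM model.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_radius(x, y):
    """Algebraic (Kasa) least-squares circle fit; returns the fitted radius."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.sqrt(c + cx**2 + cy**2)

# "measured" points on a 5 mm radius circle
theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
x, y = 5.0 * np.cos(theta), 5.0 * np.sin(theta)

# Monte Carlo: perturb each point with a machine-error model (here a
# hypothetical 50 nm isotropic noise), refit, and take the spread as the
# uncertainty of the fitted parameter
sigma_machine = 50e-6  # mm
radii = [fit_radius(x + rng.normal(0, sigma_machine, x.size),
                    y + rng.normal(0, sigma_machine, y.size))
         for _ in range(500)]
print(np.mean(radii), np.std(radii, ddof=1))  # mean ~5 mm, nanometre-level spread
```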
Energy Technology Data Exchange (ETDEWEB)
Heath, Garvin [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Warner, Ethan [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Steinberg, Daniel [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Brandt, Adam [Stanford Univ., CA (United States)
2015-08-01
A growing number of studies have raised questions regarding uncertainties in our understanding of methane (CH4) emissions from fugitives and venting along the natural gas (NG) supply chain. In particular, a number of measurement studies have suggested that actual levels of CH4 emissions may be higher than estimated by EPA's U.S. GHG Emission Inventory. We reviewed the literature to identify these studies.
Modelling and Measurement Uncertainty Estimation for Integrated AFM-CMM Instrument
DEFF Research Database (Denmark)
Hansen, Hans Nørgaard; Bariani, Paolo; De Chiffre, Leonardo
2005-01-01
This paper describes modelling of an integrated AFM-CMM instrument, its calibration, and estimation of measurement uncertainty. Positioning errors were seen to limit the instrument performance. Software for off-line stitching of single AFM scans was developed and verified, which allows ... uncertainty of 0.8% was achieved for the case of surface mapping of 1.2*1.2 mm2 consisting of 49 single AFM scanned areas.
Directory of Open Access Journals (Sweden)
Adamczak Stanisław
2014-08-01
Full Text Available The aim of this study was to estimate the measurement uncertainty for a material produced by additive manufacturing. The material investigated was FullCure 720 photocured resin, which was applied to fabricate tensile specimens with a Connex 350 3D printer based on PolyJet technology. The tensile strength of the specimens established through static tensile testing was used to determine the measurement uncertainty. There is a need for extensive research into the performance of model materials obtained via 3D printing as they have not been studied as thoroughly as metal alloys or plastics, the most common structural materials. In this analysis, the measurement uncertainty was estimated using a larger number of samples than usual, i.e., thirty instead of the typical ten. The results can be very useful to engineers who design models and finished products using this material. The investigations also show how wide the scatter of results is.
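A minimal sketch of the Type A evaluation implied above (GUM-style standard uncertainty of the mean with coverage factor k = 2); the thirty strength values are invented for illustration, not the paper's data.

```python
import numpy as np

def type_a_uncertainty(values, k=2.0):
    """Type A evaluation per the GUM: standard deviation of the mean,
    expanded with a coverage factor k (k = 2 ~ 95% coverage for large n)."""
    v = np.asarray(values, dtype=float)
    mean = v.mean()
    s = v.std(ddof=1)            # sample standard deviation
    u = s / np.sqrt(v.size)      # standard uncertainty of the mean
    return mean, u, k * u        # mean, u, expanded uncertainty U

# hypothetical tensile strengths (MPa) of 30 printed specimens
strengths = [60.2, 58.9, 61.1, 59.5, 60.8, 59.9, 60.4, 58.7, 61.3, 60.0,
             59.2, 60.6, 59.8, 60.1, 58.5, 61.0, 60.3, 59.6, 60.7, 59.4,
             60.9, 59.1, 60.5, 59.7, 61.2, 58.8, 60.2, 59.3, 60.0, 59.9]
mean, u, U = type_a_uncertainty(strengths)
print(f"{mean:.2f} +/- {U:.2f} MPa (k=2)")
```

Using thirty samples instead of ten shrinks the standard uncertainty of the mean by roughly a factor of sqrt(3).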
New estimates of silicate weathering rates and their uncertainties in global rivers
Moon, Seulgi; Chamberlain, C. P.; Hilley, G. E.
2014-06-01
This study estimated the catchment- and global-scale weathering rates of silicate rocks from global rivers using global compilation datasets from the GEMS/Water and HYBAM. These datasets include both time-series of chemical concentrations of major elements and synchronous discharge. Using these datasets, we first examined the sources of uncertainties in catchment and global silicate weathering rates. Then, we proposed future sampling strategies and geochemical analyses to estimate accurate silicate weathering rates in global rivers and to reduce uncertainties in their estimates. For catchment silicate weathering rates, we considered uncertainties due to sampling frequency and variability in river discharge, concentration, and attribution of weathering to different chemical sources. Our results showed that uncertainties in catchment-scale silicate weathering rates were due mostly to the variations in discharge and cation fractions from silicate substrates. To calculate unbiased silicate weathering rates accounting for the variations from discharge and concentrations, we suggest that at least 10 and preferably ∼40 temporal chemical data points with synchronous discharge from each river are necessary. For the global silicate weathering rate, we examined uncertainties from infrequent sampling within an individual river, the extrapolation from limited rivers to a global flux, and the inverse model selections for source differentiation. For this weathering rate, we found that the main uncertainty came from the extrapolation to the global flux and the model configurations of source differentiation methods. This suggests that to reduce the uncertainties in the global silicate weathering rates, coverage of synchronous datasets of river chemistry and discharge to rivers from tectonically active regions and volcanic provinces must be extended, and catchment-specific silicate end-members for those rivers must be characterized. With currently available synchronous datasets, we
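The sampling-frequency effect can be illustrated with a toy experiment: estimate a discharge-weighted flux from n sampled days and watch the spread shrink with n. The synthetic rating relation and all constants are assumptions for illustration, not GEMS/Water or HYBAM data.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic daily record: lognormal discharge Q and a dilution-type
# concentration C (hypothetical functional forms)
days = 3650
Q = rng.lognormal(mean=3.0, sigma=0.6, size=days)          # discharge
C = 50.0 * Q ** -0.3 * rng.lognormal(0.0, 0.1, size=days)  # concentration

true_flux = np.mean(C * Q)  # "true" long-term flux from the full record

def flux_spread(n, trials=2000):
    """Relative spread of the flux estimate built from n sampled days."""
    est = np.empty(trials)
    for t in range(trials):
        idx = rng.choice(days, size=n, replace=False)
        est[t] = np.mean(C[idx] * Q[idx])
    return est.std() / true_flux

# spread shrinks roughly as 1/sqrt(n): compare ~10 vs ~40 samples per river
print(flux_spread(10), flux_spread(40))
```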
Elbers, J.A.; Jacobs, C.M.J.; Kruijt, B.; Jans, W.W.P.; Moors, E.J.
2011-01-01
Values for annual NEP of micrometeorological tower sites are usually published without an estimate of associated uncertainties. Few authors quantify total uncertainty of annual NEP. Moreover, different methods to assess total uncertainty are applied, usually addressing only one aspect of the
Xie, Y.; Cook, P. G.; Simmons, C. T.; Partington, D.; Crosbie, R.; Batelaan, O.
2016-12-01
Coupled soil-vegetation-atmosphere models have become increasingly popular for estimating groundwater recharge, because of the integration of carbon, energy and water balances. The carbon and energy balances act to constrain the water balance and as a result should reduce the uncertainty of groundwater recharge estimates. However, the addition of carbon and energy balances also introduces a large number of plant physiological parameters which complicates the estimation of groundwater recharge. Moreover, this method often relies on existing pedotransfer functions to derive soil water retention curve parameters and saturated hydraulic conductivity from soil attribute data. The choice of a pedotransfer function is usually subjective and several pedotransfer functions may be fit for the purpose. These different pedotransfer functions (and thus the uncertainty of soil water retention curve parameters and saturated hydraulic conductivity) are likely to increase the prediction uncertainty of recharge estimates. In this study, we aim to assess the potential uncertainty of groundwater recharge when using a coupled soil-vegetation-atmosphere modelling method. The widely used WAter Vegetation Energy and Solute (WAVES) modelling code was used to perform simulations of different water balances in order to estimate groundwater recharge in the Campaspe catchment in southeast Australia. We carefully determined the ranges of the vegetation parameters based upon a literature review. We also assessed a number of existing pedotransfer functions and selected the four most appropriate. Then the Monte Carlo analysis approach was employed to examine potential uncertainties introduced by different types of errors. Preliminary results suggest that for a mean rainfall of about 500 mm/y and annual pasture vegetation, the estimated recharge may range from 10 to 150 mm/y due to the uncertainty in vegetation parameters. This upper bound of the recharge range may double to 300 mm/y if different
Application of best estimate plus uncertainty in review of research reactor safety analysis
Directory of Open Access Journals (Sweden)
Adu Simon
2015-01-01
Full Text Available To construct and operate a nuclear research reactor, the licensee is required to obtain the authorization from the regulatory body. One of the tasks of the regulatory authority is to verify that the safety analysis fulfils safety requirements. Historically, the compliance with safety requirements was assessed using a deterministic approach and conservative assumptions. This provides sufficient safety margins with respect to the licensing limits on boundary and operational conditions. Conservative assumptions were introduced into safety analysis to account for the uncertainty associated with lack of knowledge. With the introduction of best estimate computational tools, safety analyses are usually carried out using the best estimate approach. Results of such analyses can be accepted by the regulatory authority only if appropriate uncertainty evaluation is carried out. Best estimate computer codes are capable of providing more realistic information on the status of the plant, allowing the prediction of real safety margins. The best estimate plus uncertainty approach has proven to be reliable and capable of supplying realistic results if all conditions are carefully followed. This paper, therefore, presents this concept and its possible application to research reactor safety analysis. The aim of the paper is to investigate the unprotected loss-of-flow transients "core blockage" of a miniature neutron source research reactor by applying best estimate plus uncertainty methodology. The results of our calculations show that the temperatures in the core are within the safety limits and do not pose any significant threat to the reactor, as far as the melting of the cladding is concerned. The work also discusses the methodology of the best estimate plus uncertainty approach when applied to the safety analysis of research reactors for licensing purposes.
Energy Technology Data Exchange (ETDEWEB)
Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)
2013-07-01
The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where ~50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38 %. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than
Uncertainty of mass discharge estimation from contaminated sites at screening level
DEFF Research Database (Denmark)
Thomsen, Nanna Isbak; Troldborg, M.; McKnight, Ursula S.
Mass discharge estimates (mass/time) have been proposed as a useful metric in risk assessment, because they provide an estimate of the impact of a contaminated site on a given water resource and allow for the comparison of impact between different sites, so that only the sites that present an actual risk are further investigated and perhaps later remediated. Techniques for estimating dynamic uncertainty are not currently available for such sites. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from poorly characterised contaminant point sources on the local scale. ... (perchloroethylene) that has contaminated a clay till aquitard overlaying a limestone aquifer. The nature of the geology and the exact shape of the source are unknown. The decision factors in the Bayesian belief network for the site are presented. Model output is shown in the form of time varying mass discharge...
Certain uncertainty: using pointwise error estimates in super-resolution microscopy
Lindén, Martin; Amselem, Elias; Elf, Johan
2016-01-01
Point-wise localization of individual fluorophores is a critical step in super-resolution microscopy and single particle tracking. Although the methods are limited by the accuracy in localizing individual fluorophores, this point-wise accuracy has so far only been estimated by theoretical best case approximations, disregarding for example motional blur, out of focus broadening of the point spread function and time varying changes in the fluorescence background. Here, we show that pointwise localization uncertainty can be accurately estimated directly from imaging data using a Laplace approximation constrained by simple microscope properties. We further demonstrate that the estimated localization uncertainty can be used to improve downstream quantitative analysis, such as estimation of diffusion constants and detection of changes in molecular motion patterns. Most importantly, the accuracy of actual point localizations in live cell super-resolution microscopy can be improved beyond the information theoretic lo...
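The Laplace idea (localization uncertainty read off from the curvature of the log-likelihood at the fitted position) can be sketched in one dimension; the Gaussian PSF, photon count, and finite-difference curvature below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

sigma_psf, n_photons = 150.0, 400  # PSF width (nm) and detected photons
photons = rng.normal(1000.0, sigma_psf, n_photons)  # true position: 1000 nm

def nll(mu):
    """Negative log-likelihood (up to a constant) of the photon positions."""
    return np.sum((photons - mu) ** 2) / (2.0 * sigma_psf ** 2)

mu_hat = photons.mean()  # maximum-likelihood position estimate

# Laplace approximation: uncertainty = 1/sqrt(curvature of the NLL at the
# optimum); curvature obtained by a central finite difference (h in nm)
h = 1.0
curv = (nll(mu_hat + h) - 2.0 * nll(mu_hat) + nll(mu_hat - h)) / h ** 2
u_laplace = 1.0 / np.sqrt(curv)
print(u_laplace)  # ~7.5 nm, matching sigma_psf / sqrt(n_photons)
```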
Managing Uncertainty in ERP Project Estimation Practice: An Industrial Case Study
Daneva, Maia; Jedlitschka, A.; Salo, O.
2008-01-01
Uncertainty is a crucial element in managing projects. This paper’s aim is to shed some light on the issue of uncertain context factors when estimating the effort needed for implementing enterprise resource planning (ERP) projects. We outline a solution approach to this issue. It complementarily
Balancing uncertainty of context in ERP project estimation: an approach and a case study
Daneva, Maia
2010-01-01
The increasing demand for Enterprise Resource Planning (ERP) solutions as well as the high rates of troubled ERP implementations and outright cancellations calls for developing effort estimation practices to systematically deal with uncertainties in ERP projects. This paper describes an approach -
Impact of Uncertainty on Non-Medical Professionals' Estimates of Sexual Abuse Probability.
Fargason, Crayton A., Jr.; Peralta-Carcelen, Myriam C.; Fountain, Kathleen E.; Amaya, Michelle I.; Centor, Robert
1997-01-01
Assesses how an educational intervention describing uncertainty in child sexual-abuse assessments affects estimates of sexual abuse probability by non-physician child-abuse professionals (CAPs). Results, based on evaluations of 89 CAPs after the intervention, indicate they undervalued medical-exam findings and had difficulty adjusting for medical…
Estimation of uncertainties in the performance indices of an oxidation ditch benchmark
Abusam, A.; Keesman, K.J.; Spanjers, H.; Straten, van G.; Meinema, K.
2002-01-01
Estimation of the influence of different sources of uncertainty is very important in obtaining a thorough evaluation or a fair comparison of the various control strategies proposed for wastewater treatment plants. This paper illustrates, using real data obtained from a full-scale oxidation ditch
Uncertainty in peat volume and soil carbon estimated using ground-penetrating radar and probing
Andrew D. Parsekian; Lee Slater; Dimitrios Ntarlagiannis; James Nolan; Stephen D. Sebestyen; Randall K. Kolka; Paul J. Hanson
2012-01-01
Estimating soil C stock in a peatland is highly dependent on accurate measurement of the peat volume. In this study, we evaluated the uncertainty in calculations of peat volume using high-resolution data to resolve the three-dimensional structure of a peat basin based on both direct (push probes) and indirect geophysical (ground-penetrating radar) measurements. We...
Van Uffelen, Lora J; Nosal, Eva-Marie; Howe, Bruce M; Carter, Glenn S; Worcester, Peter F; Dzieciuch, Matthew A; Heaney, Kevin D; Campbell, Richard L; Cross, Patrick S
2013-10-01
Four acoustic Seagliders were deployed in the Philippine Sea November 2010 to April 2011 in the vicinity of an acoustic tomography array. The gliders recorded over 2000 broadband transmissions at ranges up to 700 km from moored acoustic sources as they transited between mooring sites. The precision of glider positioning at the time of acoustic reception is important to resolve the fundamental ambiguity between position and sound speed. The Seagliders utilized GPS at the surface and a kinematic model below for positioning. The gliders were typically underwater for about 6.4 h, diving to depths of 1000 m and traveling on average 3.6 km during a dive. Measured acoustic arrival peaks were unambiguously associated with predicted ray arrivals. Statistics of travel-time offsets between received arrivals and acoustic predictions were used to estimate range uncertainty. Range (travel time) uncertainty between the source and the glider position from the kinematic model is estimated to be 639 m (426 ms) rms. Least-squares solutions for glider position estimated from acoustically derived ranges from 5 sources differed by 914 m rms from modeled positions, with estimated uncertainty of 106 m rms in horizontal position. Error analysis included 70 ms rms of uncertainty due to oceanic sound-speed variability.
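The least-squares position solution from acoustically derived ranges can be sketched as a standard multilateration problem; the source geometry, noise level, and flat 2-D setting are invented for illustration, not the Philippine Sea configuration.

```python
import numpy as np
from scipy.optimize import least_squares

# hypothetical source layout (km, flat 2-D geometry) and true glider position
sources = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0],
                    [100.0, 100.0], [50.0, 120.0]])
true_pos = np.array([40.0, 35.0])

# acoustically derived ranges with ~0.6 km rms error, the order of the
# 639 m range uncertainty quoted in the abstract
rng = np.random.default_rng(5)
ranges = (np.linalg.norm(sources - true_pos, axis=1)
          + rng.normal(0.0, 0.6, len(sources)))

def residuals(p):
    """Modelled minus measured source-glider ranges."""
    return np.linalg.norm(sources - p, axis=1) - ranges

sol = least_squares(residuals, x0=np.array([50.0, 50.0]))
print(sol.x)  # close to true_pos
```

With several sources, the position error is smaller than the individual range errors, as the abstract's 106 m rms horizontal uncertainty (from 639 m rms ranges) illustrates.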
Kim, Ho Sung
2013-01-01
A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final…
Sebacher, B.; Hanea, R.G.; Heemink, A.
2013-01-01
In recent years, many applications of history matching methods in general and the ensemble Kalman filter in particular have been proposed, especially in order to estimate fields that provide uncertainty in the stochastic process defined by the dynamical system of hydrocarbon recovery. Such fields can
Estimating uncertainty and reliability of social network data using Bayesian inference.
Farine, Damien R; Strandburg-Peshkin, Ariana
2015-09-01
Social network analysis provides a useful lens through which to view the structure of animal societies, and as a result its use is increasingly widespread. One challenge that many studies of animal social networks face is dealing with limited sample sizes, which introduces the potential for a high level of uncertainty in estimating the rates of association or interaction between individuals. We present a method based on Bayesian inference to incorporate uncertainty into network analyses. We test the reliability of this method at capturing both local and global properties of simulated networks, and compare it to a recently suggested method based on bootstrapping. Our results suggest that Bayesian inference can provide useful information about the underlying certainty in an observed network. When networks are well sampled, observed networks approach the real underlying social structure. However, when sampling is sparse, Bayesian inferred networks can provide realistic uncertainty estimates around edge weights. We also suggest a potential method for estimating the reliability of an observed network given the amount of sampling performed. This paper highlights how relatively simple procedures can be used to estimate uncertainty and reliability in studies using animal social network analysis.
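One simple way to realize the Bayesian edge-weight idea is a Beta-Binomial model for association rates; this is a common modelling choice, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy import stats

def edge_posterior(x, d, a=1.0, b=1.0):
    """Posterior over an association rate given x joint sightings out of
    d sampling periods, under a Beta(a, b) prior (flat by default).
    Returns the posterior mean and a 95% credible interval."""
    post = stats.beta(a + x, b + d - x)
    return post.mean(), post.interval(0.95)

# same observed association rate, different sampling effort:
print(edge_posterior(3, 10))    # sparse sampling -> wide credible interval
print(edge_posterior(30, 100))  # same ratio, much tighter interval
```

The credible-interval width makes explicit how sparse sampling inflates uncertainty around edge weights, mirroring the paper's conclusion.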
Measuring Cross-Section and Estimating Uncertainties with the fissionTPC
Energy Technology Data Exchange (ETDEWEB)
Bowden, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manning, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sangiorgio, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seilhan, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-01-30
The purpose of this document is to outline the prescription for measuring fission cross-sections with the NIFFTE fissionTPC and estimating the associated uncertainties. As such it will serve as a work planning guide for NIFFTE collaboration members and facilitate clear communication of the procedures used to the broader community.
O'Donnell, C. J.; Woodland, A. D.
1995-01-01
A model of producer behavior, which explicitly accounts for both output price and production uncertainty, is formulated and estimated. If the production technology is multiplicatively separable in its deterministic and stochastic components, then the expected utility maximization problem implies cost minimization for planned or expected output. Consequently, our empirical model of three lamb- and wool-producing sectors in Australia involves the estimation of a system of input cost share and c...
Botto, A.; Ganora, D.; Laio, F.; Claps, P.
2012-04-01
Traditionally, flood frequency analysis has been used to assess the design discharge for hydraulic infrastructures. Unfortunately, this method involves uncertainties, be they of random or epistemic nature. Despite some success in measuring uncertainty, e.g. by means of numerical simulations, exhaustive methods for their evaluation are still an open challenge to the scientific community. The proposed method aims to improve the standard models for design flood estimation, considering the hydrological uncertainties inherent with the classic flood frequency analysis, in combination with cost-benefit analysis. Within this framework, two of the main issues related to flood risk are taken into account: on the one hand statistical flood frequency analysis is complemented with suitable uncertainty estimates; on the other hand the economic value of the flood-prone land is considered, as well as the economic losses in case of overflow. Consider a case where discharge data are available at the design site: the proposed procedure involves the following steps: (i) for a given return period T the design discharge is obtained using standard statistical inference (for example, using the GEV distribution and the method of L-moments to estimate the parameters); (ii) Monte Carlo simulations are performed to quantify the parametric uncertainty related to the design-flood estimator: 10000 triplets of L-moment values are randomly sampled from their relevant multivariate distribution, and 10000 values of the T-year discharge are obtained; (iii) a procedure called the least total expected cost (LTEC) design approach is applied as described hereafter: linear cost and damage functions are proposed so that the ratio between the slope of the damage function and the slope of the cost function is equal to T. The expected total cost (sum of the cost plus the expected damage) is obtained for each of the 10000 design value estimators, and the estimator corresponding to the minimum total cost is
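Steps (i)-(iii) can be sketched end-to-end; here a maximum-likelihood GEV fit and a parametric bootstrap stand in for the paper's L-moment sampling, and the linear cost/damage treatment is a simplified reading of the LTEC rule, with all numbers invented.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(11)

# hypothetical annual-maximum discharge record (m^3/s)
data = genextreme.rvs(-0.1, 500.0, 150.0, size=60, random_state=rng)

T = 100  # design return period (years)

# step (i): fit the distribution (ML here; the paper uses L-moments)
shape, loc, scale = genextreme.fit(data)

# step (ii): parametric bootstrap of the T-year discharge, standing in
# for the paper's sampling of 10000 L-moment triplets
q_T = np.array([
    genextreme.ppf(
        1 - 1 / T,
        *genextreme.fit(
            genextreme.rvs(shape, loc, scale, size=len(data), random_state=rng)
        ),
    )
    for _ in range(150)
])

# step (iii): least total expected cost with linear cost and damage whose
# slope ratio is T; expected damage averaged over the sampled estimators
candidates = np.sort(q_T)
totals = [q + np.mean(T * np.maximum(q_T - q, 0.0)) for q in candidates]
q_design = candidates[int(np.argmin(totals))]
print(q_design)  # lands in the upper tail of the sampled T-year discharges
```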
Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals
Energy Technology Data Exchange (ETDEWEB)
Rypdal, Kristin; Baritz, Rainer
2002-07-01
Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol it has been a concern that uptake of carbon in forest sinks can be difficult to verify. The reasons for the large uncertainties are high temporal and spatial variability and a lack of representative estimation parameters. Additional uncertainties will be a consequence of definitions made in the Kyoto Protocol reporting. In the Nordic countries the national forest inventories will be very useful for estimating changes in carbon stocks. The main uncertainty lies in the conversion from changes in tradable timber to changes in total carbon biomass. The uncertainties in the emissions of non-CO{sub 2} gases from forest soils are particularly high. On the other hand, the removals reported under the Kyoto Protocol will only be a fraction of the total uptake and are not expected to constitute a high share of the total inventory. It is also expected that the Nordic countries will be able to implement a high-tier methodology. As a consequence, total uncertainties may not be extremely high. (Author)
Energy Technology Data Exchange (ETDEWEB)
Lee, Kyung Hoon; Park, Ho Jin; Lee, Chung Chan; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
The purpose of this paper is to study the effect on output parameters in the lattice physics calculation of input uncertainties such as manufacturing deviations from nominal values for material composition and geometric dimensions. In nuclear design and analysis, lattice physics calculations are usually employed to generate lattice parameters for the nodal core simulation and pin power reconstruction. These lattice parameters, which consist of homogenized few-group cross-sections, assembly discontinuity factors, and form-functions, can be affected by input uncertainties which arise from three different sources: 1) multi-group cross-section uncertainties, 2) the uncertainties associated with methods and modeling approximations utilized in lattice physics codes, and 3) fuel/assembly manufacturing uncertainties. In this paper, data provided by the light water reactor (LWR) uncertainty analysis in modeling (UAM) benchmark have been used as the manufacturing uncertainties. First, the effect of each input parameter has been investigated through sensitivity calculations at the fuel assembly level. Then, the uncertainty in the prediction of the peaking factor due to the most sensitive input parameter has been estimated using the statistical sampling method, often called the brute-force method. For our analysis, the two-dimensional transport lattice code DeCART2D and its ENDF/B-VII.1-based 47-group library were used to perform the lattice physics calculation. Sensitivity calculations have been performed in order to study the influence of manufacturing tolerances on the lattice parameters. The manufacturing tolerance that has the largest influence on the k-inf is the fuel density. The second most sensitive parameter is the outer clad diameter.
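The brute-force statistical sampling described above can be mimicked with a toy linear surrogate. The sensitivity coefficients and nominal k-inf below are invented stand-ins for what a lattice code such as DeCART2D would provide; a real study reruns the lattice code for every sampled input set rather than using a surrogate.

```python
import random
import statistics

random.seed(0)

# Hypothetical sensitivity coefficients: change in k-inf per one standard
# deviation of each manufacturing tolerance (illustrative, not benchmark data).
sens = {"fuel_density": 1.0e-3, "clad_outer_diam": -2.0e-4}

nominal_kinf = 1.32            # assumed nominal value
samples = []
for _ in range(5000):          # brute-force statistical sampling
    dk = sum(s * random.gauss(0.0, 1.0) for s in sens.values())
    samples.append(nominal_kinf + dk)

print(round(statistics.stdev(samples), 5))
```

The spread of the sampled k-inf values is dominated by the fuel-density term, mirroring the ranking reported in the abstract.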
Uncertainties estimation in surveying measurands: application to lengths, perimeters and areas
Covián, E.; Puente, V.; Casero, M.
2017-10-01
The present paper develops a series of methods for the estimation of uncertainty when measuring certain measurands of interest in surveying practice, such as the elevation of a point at a given planimetric position within a triangle mesh, 2D and 3D lengths (including the perimeters of enclosures), 2D areas (horizontal surfaces) and 3D areas (natural surfaces). The basis for the proposed methodology is the law of propagation of variance-covariance, which, applied to the corresponding model for each measurand, allows calculating the resulting uncertainty from known measurement errors. The methods are tested first in a small example, with a limited number of measurement points, and then in two real-life measurements. In addition, the proposed methods have been incorporated into commercial software used in the field of surveying engineering and focused on the creation of digital terrain models. The aim of this evolution is, firstly, to comply with the guidelines of the BIPM (Bureau International des Poids et Mesures), as the international reference agency in the field of metrology, in relation to the determination and expression of uncertainty; and secondly, to improve the quality of the measurement by indicating the uncertainty associated with a given level of confidence. The conceptual and mathematical developments for the uncertainty estimation in the aforementioned cases were conducted by researchers from the AssIST group at the University of Oviedo, eventually resulting in several different mathematical algorithms implemented in the form of MATLAB code. Based on these prototypes, technicians incorporated the referred functionality into commercial software, developed in C++. As a result of this collaboration, in early 2016 a new version of this commercial software was made available, which will be the first, as far as the authors are aware, that incorporates the possibility of estimating the uncertainty for a given level of confidence when computing the aforementioned surveying
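For a single 2D length, the law of propagation of variance reduces to a short calculation. The sketch below assumes equal, uncorrelated standard uncertainty u on every coordinate; the full method in the paper handles covariances and further measurands.

```python
import math

def length_2d(x1, y1, x2, y2, u=0.01):
    """2D length and its standard uncertainty via the law of propagation
    of variance, assuming the same uncorrelated standard uncertainty u on
    each of the four coordinates (a simplifying assumption)."""
    dx, dy = x2 - x1, y2 - y1
    L = math.hypot(dx, dy)
    # |dL/dx1| = |dL/dx2| = |dx|/L, likewise for y; summing the four
    # squared contributions gives u^2 * 2 * (dx^2 + dy^2) / L^2.
    uL = u * math.sqrt(2.0 * (dx ** 2 + dy ** 2)) / L
    return L, uL

L, uL = length_2d(0.0, 0.0, 30.0, 40.0, u=0.02)
print(L, round(uL, 4))
```

With equal uncorrelated coordinate uncertainties the result collapses to u·√2 regardless of the segment's length or orientation; correlated coordinates or unequal per-point uncertainties change this.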
Hauge, Ingrid Helen Ryste; Olerud, Hilde Merete
2012-01-01
The aim of this study was to reflect on the estimation of the mean glandular dose for women in Norway aged 50–69 y. Estimation of mean glandular dose (MGD) has been conducted by applying the method of Dance et al. (1990, 2000, 2009). Uncertainties in the thickness of approximately ±10 mm add uncertainties in the MGD of approximately ±10 %, and uncertainty in the glandularity of ±0 % will lead to an uncertainty in the MGD of ±4 %. However, the inherent uncertainty in the air kerma, given by t...
DEFF Research Database (Denmark)
Christensen, Hanne Bjerre; Poulsen, Mette Erecius; Pedersen, Mikael
2003-01-01
The estimation of uncertainty of an analytical result has become important in analytical chemistry. It is especially difficult to determine uncertainties for multiresidue methods, e.g. for pesticides in fruit and vegetables, as the varieties of pesticide/commodity combinations are many. … In the present study, recommendations from the International Organisation for Standardisation's (ISO) Guide to the Expression of Uncertainty and the EURACHEM/CITAC guide Quantifying Uncertainty in Analytical Measurements were followed to estimate the expanded uncertainties for 153 pesticides in fruit …
Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how this relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between two interpretations of the estimated remaining-useful-life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics is used to make critical decisions.
Estimation of uncertainty in measurement of alkalinity using the GTC 51 guide
Alzate Rodríguez, Edwin Jhovany
2008-01-01
This document gives guidance for the estimation of uncertainty in the analysis of alkalinity in water, based on the approach taken in the ISO “Guide to the Expression of Uncertainty in Measurement” (GTC 51).
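As an illustration of the GTC 51/GUM approach to a titration-based measurand, the fragment below combines relative standard uncertainties in quadrature and expands with a coverage factor k = 2. All numerical values are invented for the example.

```python
import math

# Illustrative titration values and standard uncertainties (all invented):
V_t, u_Vt = 5.20, 0.02       # titrant volume, mL
C_a, u_Ca = 0.0200, 0.0001   # acid concentration, mol/L
V_s, u_Vs = 100.0, 0.08      # sample volume, mL
u_rep = 0.004                # method repeatability, relative

alk = V_t * C_a * 50000.0 / V_s   # total alkalinity, mg CaCO3/L
u_rel = math.sqrt((u_Vt / V_t) ** 2 + (u_Ca / C_a) ** 2
                  + (u_Vs / V_s) ** 2 + u_rep ** 2)
U = 2.0 * alk * u_rel             # expanded uncertainty, coverage k = 2
print(round(alk, 1), round(U, 2))
```

Because the measurement equation is a pure product/quotient, relative uncertainties add in quadrature; the dominant terms here are the acid concentration and the repeatability.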
Freni, Gabriele; Mannina, Giorgio
In urban drainage modelling, uncertainty analysis is of undoubted necessity. However, uncertainty analysis in urban water-quality modelling is still in its infancy and only a few studies have been carried out. Therefore, several methodological aspects still need to be explored and clarified, especially regarding water quality modelling. The use of the Bayesian approach for uncertainty analysis has been stimulated by its rigorous theoretical framework and by the possibility of evaluating the impact of new knowledge on the modelling predictions. Nevertheless, the Bayesian approach relies on some restrictive hypotheses that are not present in less formal methods like the Generalised Likelihood Uncertainty Estimation (GLUE). One crucial point in the application of the Bayesian method is the formulation of a likelihood function that is conditioned by the hypotheses made regarding model residuals. Statistical transformations, such as the use of the Box-Cox equation, are generally used to ensure the homoscedasticity of residuals. However, this practice may affect the reliability of the analysis, leading to a wrong uncertainty estimation. The present paper aims to explore the influence of the Box-Cox equation for environmental water quality models. To this end, five cases were considered, one of which was the “real” residuals distribution (i.e. drawn from available data). The analysis was applied to the Nocella experimental catchment (Italy), which is an agricultural and semi-urbanised basin where two sewer systems, two wastewater treatment plants and a river reach were monitored during both dry and wet weather periods. The results show that the uncertainty estimation is greatly affected by the residual transformation and a wrong assumption may also affect the evaluation of model uncertainty. The use of less formal methods always provides an overestimation of modelling uncertainty with respect to the Bayesian method, but this effect is reduced if a wrong assumption is made regarding the
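For reference, the Box-Cox transformation at the centre of this comparison is a one-line function; the sketch below uses an arbitrary λ and arbitrary flow values to show how larger flows are compressed more strongly, which is what stabilises the residual variance.

```python
import math

def box_cox(y, lam):
    """Box-Cox transformation, used to make residual variance more nearly
    constant before assuming Gaussian residuals in the likelihood."""
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

# Larger flows are compressed more strongly, damping the growth of the
# residual spread with the flow level (lambda and flows are arbitrary).
flows = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
transformed = [box_cox(q, 0.2) for q in flows]
print([round(t, 2) for t in transformed])
```

λ = 1 leaves the data essentially untransformed and λ → 0 recovers the log transform, so the choice of λ encodes the assumed error structure the paper is testing.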
Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb
2017-10-01
In addition to the numerous planning and executive challenges, underground excavation in urban areas is always followed by certain destructive effects especially on the ground surface; ground settlement is the most important of these effects for which estimation there exist different empirical, analytical and numerical methods. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the amount of surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement value of each model were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted amounts with the actual data from instrumentation was employed to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched the reality and the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.
2017-11-01
In Hezaveh et al. we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational-lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single variational parameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that the application of approximate Bayesian neural networks to astrophysical modeling problems can be a fast alternative to Markov Chain Monte Carlo, allowing orders of magnitude improvement in speed.
DEFF Research Database (Denmark)
Thomsen, Nanna Isbak; Troldborg, Mads; McKnight, Ursula S.
2012-01-01
Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1… We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same… A source consisting of PCE (perchloroethylene) has contaminated a fractured clay till aquitard overlying a limestone aquifer. The exact shape and nature of the source is unknown, and so is the importance of transport in the fractures. The result of the multi-model approach is a visual representation…
Directory of Open Access Journals (Sweden)
S. P. Urbanski
2011-12-01
Biomass burning emission inventories serve as critical input for atmospheric chemical transport models that are used to understand the role of biomass fires in the chemical composition of the atmosphere, air quality, and the climate system. Significant progress has been achieved in the development of regional and global biomass burning emission inventories over the past decade using satellite remote sensing technology for fire detection and burned area mapping. However, agreement among biomass burning emission inventories is frequently poor. Furthermore, the uncertainties of the emission estimates are typically not well characterized, particularly at the spatio-temporal scales pertinent to regional air quality modeling. We present the Wildland Fire Emission Inventory (WFEI), a high-resolution model for non-agricultural open biomass burning (hereafter referred to as wildland fires, WF) in the contiguous United States (CONUS). The model combines observations from the MODerate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra and Aqua satellites, meteorological analyses, fuel loading maps, an emission factor database, and fuel condition and fuel consumption models to estimate emissions from WF.
WFEI was used to estimate emissions of CO (E_{CO}) and PM_{2.5} (E_{PM2.5}) for the western United States from 2003–2008. The uncertainties in the inventory estimates of E_{CO} and E_{PM2.5} (u_{ECO} and u_{EPM2.5}, respectively) have been explored across spatial and temporal scales relevant to regional and global modeling applications. In order to evaluate the uncertainty in our emission estimates across multiple scales we used a figure of merit, the half-mass uncertainty, ũ_{EX} (where X = CO or PM_{2.5}), defined such that for a given aggregation level 50% of total emissions occurred from elements with u_{EX} ≤ ũ_{EX}.
Milne, Alice E.; Glendining, Margaret J.; Bellamy, Pat; Misselbrook, Tom; Gilhespy, Sarah; Rivas Casado, Monica; Hulin, Adele; van Oijen, Marcel; Whitmore, Andrew P.
2014-01-01
The UK's greenhouse gas inventory for agriculture uses a model based on the IPCC Tier 1 and Tier 2 methods to estimate the emissions of methane and nitrous oxide from agriculture. The inventory calculations are disaggregated at country level (England, Wales, Scotland and Northern Ireland). Before now, no detailed assessment of the uncertainties in the estimates of emissions had been done. We used Monte Carlo simulation to do such an analysis. We collated information on the uncertainties of each of the model inputs. The uncertainties propagate through the model and result in uncertainties in the estimated emissions. Using a sensitivity analysis, we found that in England and Scotland the uncertainty in the emission factor for emissions from N inputs (EF1) affected uncertainty the most, but that in Wales and Northern Ireland, the emission factor for N leaching and runoff (EF5) had greater influence. We showed that if the uncertainty in any one of these emission factors is reduced by 50%, the uncertainty in emissions of nitrous oxide reduces by 10%. The uncertainty in the methane emission factors for enteric fermentation in cows and sheep most affected the uncertainty in methane emissions. When inventories are disaggregated (as that for the UK is), correlation between separate instances of each emission factor will affect the uncertainty in emissions. As more countries move towards inventory models with disaggregation, it is important that the IPCC give firm guidance on this topic.
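The propagation-and-sensitivity logic can be mimicked with a toy two-term IPCC-style model. The activity data, emission-factor means and standard deviations below are illustrative assumptions only; the point is how halving one input uncertainty translates into a reduction of the output uncertainty.

```python
import random
import statistics

random.seed(2)

# Toy IPCC-style N2O model: E = N_applied*EF1 + N_leached*EF5.
# All nominal values and uncertainties are illustrative assumptions.
N_applied, N_leached = 1.0e6, 2.0e5     # kg N
EF1_mean, EF1_sd = 0.010, 0.003         # kg N2O-N per kg N
EF5_mean, EF5_sd = 0.0075, 0.005

def emission_sd(ef1_sd, ef5_sd, n=20000):
    draws = [N_applied * random.gauss(EF1_mean, ef1_sd)
             + N_leached * random.gauss(EF5_mean, ef5_sd)
             for _ in range(n)]
    return statistics.stdev(draws)

base = emission_sd(EF1_sd, EF5_sd)
halved = emission_sd(0.5 * EF1_sd, EF5_sd)   # halve EF1's uncertainty
print(round(1.0 - halved / base, 2))         # fractional reduction
```

Because the two contributions add in quadrature, halving the dominant EF1 term reduces the output standard deviation by roughly 40% in this toy setup, while halving a minor term would barely register; this is the mechanism behind the 50%-to-10% figure in the abstract, though the exact ratio depends on the real uncertainty budget.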
Improving uncertainty estimation in urban hydrological modeling by statistically describing bias
Directory of Open Access Journals (Sweden)
D. Del Giudice
2013-10-01
Hydrodynamic models are useful tools for urban water management. Unfortunately, it is still challenging to obtain accurate results and plausible uncertainty estimates when using these models. In particular, with the currently applied statistical techniques, flow predictions are usually overconfident and biased. In this study, we present a flexible and relatively efficient methodology (i) to obtain more reliable hydrological simulations in terms of coverage of validation data by the uncertainty bands and (ii) to separate prediction uncertainty into its components. Our approach acknowledges that urban drainage predictions are biased. This is mostly due to input errors and structural deficits of the model. We address this issue by describing model bias in a Bayesian framework. The bias becomes an autoregressive term additional to white measurement noise, the only error type accounted for in traditional uncertainty analysis. To allow for bigger discrepancies during wet weather, we make the variance of the bias dependent on the input (rainfall) and/or output (runoff) of the system. Specifically, we present a structured approach to select, among five variants, the optimal bias description for a given urban or natural case study. We tested the methodology in a small monitored stormwater system described with a parsimonious model. Our results clearly show that flow simulations are much more reliable when bias is accounted for than when it is neglected. Furthermore, our probabilistic predictions can discriminate between three uncertainty contributions: parametric uncertainty, bias, and measurement errors. In our case study, the best performing bias description is the output-dependent bias using a log-sinh transformation of data and model results. The limitations of the framework presented are some ambiguity due to the subjective choice of priors for bias parameters and its inability to address the causes of model discrepancies. Further research should focus on
Legchenko, Anatoly; Comte, Jean-Christophe; Ofterdinger, Ulrich; Vouillamoz, Jean-Michel; Lawson, Fabrice Messan Amen; Walsh, John
2017-09-01
We propose a simple and robust approach for investigating uncertainty in the results of inversion in geophysics. We apply this approach to the inversion of Surface Nuclear Magnetic Resonance (SNMR) data, which is also known as Magnetic Resonance Sounding (MRS). The solution of this inverse problem is known to be non-unique. We invert MRS data using the well-known Tikhonov regularization method, which provides an optimal solution as a trade-off between stability and accuracy. Then, we perturb this model by random values and compute the fitting error for the perturbed models. The magnitude of these perturbations is limited by the uncertainty estimated with the singular value decomposition (SVD), taking into account experimental errors. We use 10^6 perturbed models and show that the large majority of these models, which have all the water content within the variations given by the SVD estimate, do not fit the data with an acceptable accuracy. Thus, we may limit the solution space to only the equivalent inverse models that fit the data with an accuracy close to that of the initial inverse model. For representing inversion results, we use three equivalent solutions instead of only one: the "best" solution given by the regularization or another inversion technique, and the extreme variations of this solution corresponding to the equivalent models with the minimum and the maximum volume of water. To demonstrate our approach, we use synthetic data sets and experimental data acquired in the framework of the investigation of a hard-rock aquifer in Ireland (County Donegal).
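The perturbation-and-acceptance loop can be sketched in a few lines. The forward model below is a crude smoothing kernel standing in for the SNMR kernel, and the profile, perturbation bounds and noise level are invented; the point is only the selection of equivalent models and the extraction of minimum- and maximum-volume solutions.

```python
import random

random.seed(3)

def forward(model):
    """Crude 3-point running mean standing in for the SNMR kernel."""
    n = len(model)
    return [(model[max(i - 1, 0)] + model[i] + model[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def misfit(model, data):
    return max(abs(p - d) for p, d in zip(forward(model), data))

best = [0.05, 0.10, 0.30, 0.25, 0.08]   # "best" inverse model (invented)
data = forward(best)                     # pretend it fits the data exactly
noise = 0.02                             # assumed experimental error level

# Perturb within SVD-style bounds; keep only models that still fit the data.
equivalent = []
for _ in range(20000):
    pert = [max(w + random.uniform(-0.05, 0.05), 0.0) for w in best]
    if misfit(pert, data) <= noise:
        equivalent.append(pert)

volumes = [sum(m) for m in equivalent]
print(len(equivalent), round(min(volumes), 3), round(max(volumes), 3))
```

The two extreme accepted models bracket the water volume, giving the three representative solutions described in the abstract.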
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
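As a minimal illustration of stochastic collocation, a 3-point Gauss-Hermite rule estimates the mean and variance of a nonlinear response to one standard-normal input with just three model evaluations; the exponential response below is an assumption standing in for the radiation-diffusion model, not the study's physics.

```python
import math

# 3-point Gauss-Hermite rule (probabilists' convention) for Z ~ N(0, 1):
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def sc_moments(f):
    """Mean and variance of f(Z) by stochastic collocation."""
    vals = [f(z) for z in nodes]
    mean = sum(w * v for w, v in zip(weights, vals))
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, vals))
    return mean, var

# Toy nonlinear "deposited energy" response; the exponential mimics the
# amplification of input variability reported in the abstract.
mean, var = sc_moments(lambda z: math.exp(0.5 * z))
print(round(mean, 4), round(var, 4))
```

Three deterministic model runs replace thousands of Monte Carlo samples here, which is the efficiency advantage of collocation; the abstract's caveat is that this advantage shrinks as the number of uncertain inputs grows, since tensor-product node counts grow exponentially.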
Estimation of the measurement uncertainty of methamphetamine and amphetamine in hair analysis.
Lee, Sooyeun; Park, Yonghoon; Yang, Wonkyung; Han, Eunyoung; Choe, Sanggil; Lim, Miae; Chung, Heesun
2009-03-10
The measurement uncertainties (MUs) were estimated for the determination of methamphetamine (MA) and its main metabolite, amphetamine (AP), at low concentrations (around the cut-off value of MA) in human hair, according to the recommendations of the EURACHEM/CITAC Guide and the "Guide to the expression of uncertainty in measurement (GUM)". MA and AP were extracted by agitating hair with 1% HCl in methanol, followed by derivatization and quantification using GC-MS. The major components contributing to their uncertainties were the amount of MA or AP in the test sample, the weight of the test sample and the method precision, based on the equation used to calculate the measurand from intermediate values. Consequently, the concentrations of MA and AP in the hair sample with their expanded uncertainties were 0.66+/-0.05 and 1.01+/-0.06 ng/mg, respectively, which were acceptable to support the successful application of the analytical method. The method precision and the weight of the hair sample gave the largest contributions to the overall combined uncertainties of MA and AP, in each case.
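A hedged sketch of such a GUM budget: the three relative uncertainty components below are assumed values chosen only so the result resembles the reported 0.66 +/- 0.05 ng/mg, not the study's actual budget.

```python
import math

# Assumed relative standard uncertainty components for MA in hair:
conc = 0.66          # ng/mg, measured MA concentration
u_amount = 0.010     # calibration-curve contribution (relative, assumed)
u_weight = 0.020     # weighing of a few mg of hair (relative, assumed)
u_precision = 0.030  # method precision (relative, assumed)

u_rel = math.sqrt(u_amount ** 2 + u_weight ** 2 + u_precision ** 2)
U = 2.0 * conc * u_rel         # expanded uncertainty, coverage factor k = 2
print(f"{conc:.2f} +/- {U:.2f} ng/mg")
```

Quadrature makes the largest components dominate: here, as in the study, precision and sample weight carry most of the combined uncertainty.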
Gourley, J. J.; Kirstetter, P.; Hong, Y.; Hardy, J.; Flamig, Z.
2013-12-01
This study presents a methodology to account for uncertainty in radar-based rainfall rate estimation using NOAA/NSSL's Multi-Radar Multi-Sensor (MRMS) products. The focus of the study is on flood forecasting, including flash floods, in ungauged catchments throughout the conterminous US. An error model is used to derive probability distributions of rainfall rates that explicitly accounts for rain typology and uncertainty in the reflectivity-to-rainfall relationships. This approach preserves the fine space/time sampling properties (2 min/1 km) of the radar and conditions probabilistic quantitative precipitation estimates (PQPE) on the rain rate and rainfall type. Uncertainty in rainfall amplitude is the primary factor that is accounted for in the PQPE development. Additional uncertainties due to rainfall structures, locations, and timing must be considered when using quantitative precipitation forecast (QPF) products as forcing to a hydrologic model. A new method will be presented that shows how QPF ensembles are used in a hydrologic modeling context to derive probabilistic flood forecast products. This method considers the forecast rainfall intensity and morphology superimposed on pre-existing hydrologic conditions to identify basin scales that are most at risk.
Briggs, Andrew H; Weinstein, Milton C; Fenwick, Elisabeth A L; Karnon, Jonathan; Sculpher, Mark J; Paltiel, A David
2012-01-01
A model's purpose is to inform medical decisions and health care resource allocation. Modelers employ quantitative methods to structure the clinical, epidemiological, and economic evidence base and gain qualitative insight to assist decision makers in making better decisions. From a policy perspective, the value of a model-based analysis lies not simply in its ability to generate a precise point estimate for a specific outcome but also in the systematic examination and responsible reporting of uncertainty surrounding this outcome and the ultimate decision being addressed. Different concepts relating to uncertainty in decision modeling are explored. Stochastic (first-order) uncertainty is distinguished from both parameter (second-order) uncertainty and from heterogeneity, with structural uncertainty relating to the model itself forming another level of uncertainty to consider. The article argues that the estimation of point estimates and uncertainty in parameters is part of a single process and explores the link between parameter uncertainty through to decision uncertainty and the relationship to value of information analysis. The article also makes extensive recommendations around the reporting of uncertainty, in terms of both deterministic sensitivity analysis techniques and probabilistic methods. Expected value of perfect information is argued to be the most appropriate presentational technique, alongside cost-effectiveness acceptability curves, for representing decision uncertainty from probabilistic analysis. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
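The expected value of perfect information (EVPI) recommended above has a very small computational core: it is the gap between deciding per parameter draw and deciding once on expectations. The net-benefit values below are illustrative, not drawn from any study.

```python
import statistics

# Net benefit of two strategies under five equally likely parameter draws
# from a probabilistic sensitivity analysis (values are illustrative).
nb_A = [100.0, 120.0, 90.0, 110.0, 105.0]
nb_B = [95.0, 130.0, 85.0, 125.0, 70.0]

# Current information: commit to the strategy with the best expected value.
ev_current = max(statistics.mean(nb_A), statistics.mean(nb_B))

# Perfect information: pick the per-draw winner, then average.
ev_perfect = statistics.mean(max(a, b) for a, b in zip(nb_A, nb_B))

evpi = ev_perfect - ev_current
print(ev_current, ev_perfect, evpi)
```

Here perfect information is worth 5 net-benefit units (110 − 105): the amount by which always picking the per-draw winner beats committing to strategy A, and hence an upper bound on what further research into the parameters could be worth.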
Uncertainties in peat volume and soil carbon estimated using ground penetrating radar and probing
Energy Technology Data Exchange (ETDEWEB)
Parsekian, Andrew D. [Rutgers University; Slater, Lee [Rutgers University; Ntarlagiannis, Dimitrios [Rutgers University; Nolan, James [Rutgers University; Sebestyen, Stephen D [USDA Forest Service, Grand Rapids, MN; Kolka, Randall K [USDA Forest Service, Grand Rapids, MN; Hanson, Paul J [ORNL
2012-01-01
We evaluate the uncertainty in calculations of peat basin volume using high-resolution data to resolve the three-dimensional structure of a peat basin, using both direct (push probes) and indirect geophysical (ground penetrating radar) measurements. We compared volumetric estimates from both approaches with values from the literature. We identified subsurface features that can introduce uncertainties into direct peat thickness measurements, including the presence of woody peat and soft clay or gyttja. We demonstrate that a simple geophysical technique that is easily scalable to larger peatlands can be used to rapidly and cost-effectively obtain more accurate and less uncertain estimates of peat basin volumes, critical to improving understanding of the total terrestrial carbon pool in peatlands.
Inferring uncertainty from interval estimates: Effects of alpha level and numeracy
Directory of Open Access Journals (Sweden)
Luke F. Rinne
2013-05-01
Interval estimates are commonly used to descriptively communicate the degree of uncertainty in numerical values. Conventionally, low alpha levels (e.g., .05) ensure a high probability of capturing the target value between interval endpoints. Here, we test whether alpha levels and individual differences in numeracy influence distributional inferences. In the reported experiment, participants received prediction intervals for fictitious towns' annual rainfall totals (assuming approximately normal distributions). Then, participants estimated probabilities that future totals would be captured within varying margins about the mean, indicating the approximate shapes of their inferred probability distributions. Results showed that low alpha levels (vs. moderate levels, e.g., .25) more frequently led to inferences of over-dispersed approximately normal distributions or approximately uniform distributions, reducing estimate accuracy. Highly numerate participants made more accurate estimates overall, but were more prone to inferring approximately uniform distributions. These findings have important implications for presenting interval estimates to various audiences.
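The distributional inference the participants were asked to make can be stated precisely under the normality assumption: back out σ from the interval half-width, then compute a capture probability for any margin. The interval, margin, and z values below are illustrative.

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def capture_probability(lo, hi, margin, z):
    """P(value within +/- margin of the mean) for a normal distribution
    inferred from a symmetric prediction interval [lo, hi] whose
    half-width equals z standard deviations (z ~ 1.96 for alpha = .05,
    ~ 1.15 for alpha = .25)."""
    sigma = (hi - lo) / (2.0 * z)
    return normal_cdf(margin / sigma) - normal_cdf(-margin / sigma)

# A 95% interval of [20, 60] implies sigma of about 10.2, so a future
# total lands within +/- 10 of the mean about two-thirds of the time.
p = capture_probability(20.0, 60.0, 10.0, 1.96)
print(round(p, 2))
```

The same interval read with the wrong z (e.g., treating a 95% interval as if it were a 75% one) implies a much wider σ, which is exactly the over-dispersed inference pattern the experiment observed.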
DEFF Research Database (Denmark)
Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan
Computer Aided Molecular Design (CAMD) is an important tool to generate, test and evaluate promising chemical products. CAMD can be used in a thermodynamic cycle for the design of pure-component or mixture working fluids in order to improve the heat transfer capacity of the system. The safety assessment of novel working fluids relies on accurate property data. Flammability data like the lower and upper flammability limit (LFL and UFL) play an important role in quantifying the risk of fire and explosion. For novel working fluid candidates, experimental values are not available for the safety analysis. In this case, property prediction models like group contribution (GC) models can estimate flammability data. The estimation needs to be accurate, reliable and as little time-consuming as possible [1]. However, GC property prediction methods frequently lack rigorous uncertainty analysis. Hence…
Zhu, Tianqi; Dos Reis, Mario; Yang, Ziheng
2015-03-01
Genetic sequence data provide information about the distances between species or branch lengths in a phylogeny, but not about the absolute divergence times or the evolutionary rates directly. Bayesian methods for dating species divergences estimate times and rates by assigning priors on them. In particular, the prior on times (node ages on the phylogeny) incorporates information in the fossil record to calibrate the molecular tree. Because times and rates are confounded, our posterior time estimates will not approach point values even if an infinite amount of sequence data are used in the analysis. In a previous study we developed a finite-sites theory to characterize the uncertainty in Bayesian divergence time estimation in analysis of large but finite sequence data sets under a strict molecular clock. As most modern clock dating analyses use more than one locus and are conducted under relaxed clock models, here we extend the theory to the case of relaxed clock analysis of data from multiple loci (site partitions). Uncertainty in posterior time estimates is partitioned into three sources: Sampling errors in the estimates of branch lengths in the tree for each locus due to limited sequence length, variation of substitution rates among lineages and among loci, and uncertainty in fossil calibrations. Using a simple but analogous estimation problem involving the multivariate normal distribution, we predict that as the number of loci (L) goes to infinity, the variance in posterior time estimates decreases and approaches the infinite-data limit at the rate of 1/L, and the limit is independent of the number of sites in the sequence alignment. We then confirmed the predictions by using computer simulation on phylogenies of two or three species, and by analyzing a real genomic data set for six primate species. Our results suggest that with the fossil calibrations fixed, analyzing multiple loci or site partitions is the most effective way
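The predicted 1/L scaling can be checked with a toy simulation: averaging independent per-locus rate draws makes the spread of the estimate shrink like 1/√L, i.e. the variance like 1/L. The rate mean and among-locus standard deviation below are arbitrary toy values, not estimates from any data set.

```python
import random
import statistics

random.seed(4)

def posterior_sd(n_loci, reps=4000):
    """Spread of an estimate that averages independent per-locus draws
    (rate mean 1.0, among-locus sd 0.2 -- arbitrary toy values)."""
    estimates = [statistics.mean(random.gauss(1.0, 0.2)
                                 for _ in range(n_loci))
                 for _ in range(reps)]
    return statistics.stdev(estimates)

sd_1, sd_4 = posterior_sd(1), posterior_sd(4)
print(round(sd_1 / sd_4, 1))   # variance ~ 1/L, so the sd ratio is near 2
```

Adding sites to a locus would not change this picture, matching the prediction that the infinite-data limit is independent of sequence length.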
Experimental Wind Field Estimation and Aircraft Identification
Condomines, Jean-Philippe; Bronz, Murat; Hattenberger, Gautier; Erdelyi, Jean-François
2015-01-01
International audience; This work focuses on wind estimation and airframe identification based on real flight experiments, as part of the SkyScanner project. The overall objective of this project is to estimate the local wind field in order to study the formation of cumulus-type clouds with a fleet of autonomous mini-UAVs, involving many aspects including flight control and energy harvesting. For this purpose, a small UAV has been equipped with airspeed and angle of attack sens...
Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki
2017-08-01
Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis. Varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities at different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for an unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach as high as 37.2 K in unfavorable conditions, when calculations assuming full correlation give zero uncertainty, and calculations assuming no correlations yield the expanded uncertainties of 5.6 K and 12.1 K, with wavelength steps of 1 nm and 5 nm used in spectral integrations, respectively. We also show that there is an absolute limit of 60.2 K in the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method. The revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.
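The effect of error correlations can be sketched with a simplified Monte Carlo experiment in the spirit of this method. A toy band-ratio quantity stands in for the full colorimetric integrals, and the spectrum, bands, and base function are all illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
wl = np.arange(380, 781, 5.0)                  # wavelength grid, nm
S = (wl / 560.0) ** 4                          # toy smooth spectrum (illustrative)
blue, red = wl < 500, wl > 600                 # two integration bands
ratio = lambda s: s[red].sum() / s[blue].sum() # toy color-like band ratio

u, n_mc = 0.01, 5000                           # 1 % standard uncertainty
tilt = np.linspace(-1.0, 1.0, wl.size)         # one smooth spectral base function

r_unc, r_full, r_tilt = (np.empty(n_mc) for _ in range(3))
for k in range(n_mc):
    r_unc[k] = ratio(S * (1 + u * rng.normal(size=wl.size)))  # no correlation
    r_full[k] = ratio(S * (1 + u * rng.normal()))             # full correlation
    r_tilt[k] = ratio(S * (1 + u * rng.normal() * tilt))      # correlated tilt

print(r_unc.std(), r_full.std(), r_tilt.std())
```

A fully correlated (pure scale) error cancels exactly in the ratio, uncorrelated errors largely average out in the band integrals, and the smooth "tilted" error function dominates — the same qualitative pattern the paper reports for the correlated color temperature.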
Villarini, Gabriele; Krajewski, Witold F.
2010-01-01
It is well acknowledged that radar-based estimates of rainfall carry large uncertainties. These errors stem from many sources, including parameter estimation, the observational system and measurement principles, and incompletely understood physical processes. Propagating these uncertainties through all models for which radar-rainfall estimates are used as input (e.g., hydrologic models) or as initial conditions (e.g., weather forecasting models) is necessary to enhance the understanding and interpretation of the obtained results. The aim of this paper is to provide an extensive literature review of the principal sources of error affecting single-polarization radar-based rainfall estimates. These include radar miscalibration, attenuation, ground clutter and anomalous propagation, beam blockage, variability of the Z-R relation, range degradation, vertical variability of the precipitation system, vertical air motion and precipitation drift, and temporal sampling errors. Finally, the authors report some recent results from empirically based modeling of the total radar-rainfall uncertainties. The bibliography comprises over 200 peer-reviewed journal articles.
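The impact of Z-R variability alone can be made concrete in a few lines. The Marshall-Palmer-type coefficients below are textbook values and the coefficient ranges are illustrative, not taken from this review:

```python
# Rain rate from reflectivity via Z = a * R**b, inverted as R = (Z / a)**(1 / b).
dBZ = 40.0
Z = 10 ** (dBZ / 10)                       # reflectivity factor, mm^6 m^-3

def rain_rate(Z, a, b):
    return (Z / a) ** (1.0 / b)

R_mp = rain_rate(Z, a=200.0, b=1.6)        # Marshall-Palmer coefficients
# Plausible coefficient ranges (illustrative) propagate to a wide rain-rate spread:
rates = [rain_rate(Z, a, b) for a in (150, 200, 300) for b in (1.4, 1.6, 1.7)]
print(R_mp, min(rates), max(rates))
```

At 40 dBZ the point estimate is roughly 11-12 mm/h, yet the coefficient spread alone changes the retrieved rain rate by more than a factor of two, before any of the other error sources listed above are considered.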
Directory of Open Access Journals (Sweden)
Carlos Peña
Full Text Available The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.
Peña, Carlos; Espeland, Marianne
2015-01-01
The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.
Estimating Uncertainty in Long Term Total Ozone Records from Multiple Sources
Frith, Stacey M.; Stolarski, Richard S.; Kramarova, Natalya; McPeters, Richard D.
2014-01-01
Total ozone measurements derived from the TOMS and SBUV backscattered solar UV instrument series cover the period from late 1978 to the present. As the SBUV series of instruments comes to an end, we look to the 10 years of data from the AURA Ozone Monitoring Instrument (OMI) and two years of data from the Ozone Mapping Profiler Suite (OMPS) on board the Suomi National Polar-orbiting Partnership satellite to continue the record. When combining these records to construct a single long-term data set for analysis we must estimate the uncertainty in the record resulting from potential biases and drifts in the individual measurement records. In this study we present a Monte Carlo analysis used to estimate uncertainties in the Merged Ozone Dataset (MOD), constructed from the Version 8.6 SBUV2 series of instruments. We extend this analysis to incorporate OMI and OMPS total ozone data into the record and investigate the impact of multiple overlapping measurements on the estimated error. We also present an updated column ozone trend analysis and compare the size of statistical error (error from variability not explained by our linear regression model) to that from instrument uncertainty.
Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates
Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry
2018-01-01
Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ∑NEE) across the ensemble members, from ~2-3 g C m⁻² yr⁻¹ (with uncertain parameters) to ~45 g C m⁻² yr⁻¹ (C3 grass) and ~75 g C m⁻² yr⁻¹ (C3 crops) with perturbed forcings. This increase in uncertainty is related to the impact of the meteorological forcings on leaf onset and senescence, and enhanced/reduced drought stress related to perturbation of precipitation. The NEE uncertainty for the forest plant functional type (PFT) was considerably lower (σ∑NEE ~4.0-13.5 g C
Estimating uncertainty and change in flood frequency for an urban Swedish catchment
Westerberg, I.; Persson, T.
2013-12-01
Floods are extreme events that occur rarely, which means that relatively few data on weather and flow conditions during flooding episodes are available for estimating flood frequency, and that such estimates are necessarily uncertain. Even fewer data are available for estimating changes in flood frequency resulting from changes in land use, climate or the morphometry of the watercourse. In this study we used a combination of monitoring and modelling to overcome the lack of reliable discharge data, to characterise the flooding problems in the highly urbanised Riseberga Creek catchment in eastern Malmö, Sweden, and to investigate how they may change in the future. The study is part of the GreenClimeAdapt project, in which local stakeholders and researchers work on finding and demonstrating solutions to the present flooding problems in the catchment as well as adaptation to future change. A high-resolution acoustic Doppler discharge gauge was installed in the creek and a hydrologic model was set up to extend this short record for estimation of flood frequency. Discharge uncertainty was estimated based on a stage-discharge analysis and accounted for in model calibration together with uncertainties in the model parameterisation. The model was first used to study the flow variability during the 16 years with available climate input data. It was then driven with long-term climate realisations from a statistical weather generator to estimate flood frequency for the present climate and for future climate and land-use scenarios through continuous simulation. The uncertainty in the modelled flood frequency for the present climate was found to be important; it could partly be reduced in the future using longer monitoring records containing additional and higher flood episodes. The climate and land-use change scenarios are mainly useful for sensitivity analysis of different adaptation measures that can be taken to reduce the flooding problems, for which
Ramanjooloo, Yudish; Tholen, David J.; Fohring, Dora; Claytor, Zach; Hung, Denise
2017-10-01
The asteroid community is moving towards the implementation of a new astrometric reporting format. This new format will finally include complementary astrometric uncertainties in the reported observations. The availability of uncertainties will allow ephemeris predictions and orbit solutions to be constrained with greater reliability, thereby improving the efficiency of the community's follow-up and recovery efforts. Our current uncertainty model accounts for the uncertainties in centroiding on the trailed stars and the asteroid, and for the uncertainty due to the astrometric solution. The accuracy of our astrometric measurements relies on how well we can minimise the offset between the spatial and temporal centroids of the stars and the asteroid. This offset is currently unmodelled and can be caused by variations in cloud transparency, the seeing, and tracking inconsistencies. The magnitude zero point of the image, which is affected by fluctuating weather conditions and by catalog bias in the photometric magnitudes, can serve as an indicator of the presence and thickness of clouds. Through comparison of the astrometric uncertainties to the orbit solution residuals, it was apparent that a component of the error analysis remained unaccounted for, as a result of cloud coverage and thickness, telescope tracking inconsistencies and variable seeing. This work will attempt to quantify the tracking inconsistency component. We have acquired a rich dataset with the University of Hawaii 2.24 metre telescope (UH-88 inch) that is well positioned to construct an empirical estimate of the tracking inconsistency component. This work is funded by NASA grant NXX13AI64G.
Sherriff, Sophie; Rowan, John; Franks, Stewart; Walden, John; Melland, Alice; Jordan, Phil; Fenton, Owen; hUallacháin, Daire Ó.
2014-05-01
Sediment fingerprinting techniques are being applied more frequently to inform soil and water management issues. Identification of sediment source areas and assessment of their relative contributions are essential in targeting cost-effective mitigation strategies. Sediment fingerprinting utilises natural sediment properties (e.g. chemical, magnetic, radiometric) to trace the contributions from different source areas by 'unmixing' a catchment outlet sample back to its constituent sources. Early qualitative approaches have been superseded by quantitative methodologies using multiple (composite) tracers coupled with linear programming. Despite the inclusion of fingerprinting results in environmental management strategies, techniques are subject to potentially significant uncertainties. Intra-source heterogeneity, although widely recognised as a source of uncertainty, is difficult to address, particularly in large study catchments, or where source collection is restricted. Inadequate characterisation may result in the translation of significant uncertainties to a group fingerprint and onward to contribution estimates. Franks and Rowan (2000) developed an uncertainty inclusive un-mixing model (FR2000+) based on Bayesian Monte-Carlo methods. Source area contributions are reported with confidence intervals which incorporate sampling and un-mixing uncertainties. Consequently the impact of uncertainty on the reliability of predictions can be considered. The aim of this study is to determine the impact of source area sampling resolution and spatial complexity on source area contribution estimates and their relative uncertainty envelope. High resolution source area sampling was conducted in a 10 km2 intensive grassland catchment in Co. Wexford, Ireland, according to potential field and non-field sources. Seven potential source areas were sampled; channel banks (n=55), road verges (n=44), topsoils (n=35), subsoils (n=32), tracks (n=6), drains (n=2) and eroding ditches (n=5
Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data.
Shao, Kan; Gift, Jeffrey S
2014-01-01
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approaches do not explicitly address model uncertainty, and there is an existing need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method taking model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the "hybrid" method proposed by Crump, two strategies of BMA, including both "maximum likelihood estimation based" and "Markov Chain Monte Carlo based" methods, are first applied as a demonstration to calculate model averaged BMD estimates from real continuous dose-response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates have higher reliability than the estimates from the individual models with highest posterior weight, in terms of higher BMDL and smaller 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study indicate that the BMA BMD estimates have smaller bias than the BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose-response data. © 2013 Society for Risk Analysis.
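The core idea of model-averaged BMD estimation can be sketched as follows. This is a deliberately simplified stand-in: polynomial dose-response fits, BIC-based weights, and an additive benchmark response replace the Crump hybrid method and the MLE/MCMC machinery of the paper, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
dose = np.repeat([0.0, 1.0, 2.0, 4.0, 8.0], 10)
y = 10.0 + 0.8 * dose + rng.normal(0, 0.5, dose.size)   # synthetic continuous data

grid = np.linspace(0, 8, 801)
bmds, bics = [], []
for deg in (1, 2):                                      # two candidate models
    coef = np.polyfit(dose, y, deg)
    rss = np.sum((y - np.polyval(coef, dose)) ** 2)
    bics.append(dose.size * np.log(rss / dose.size) + (deg + 1) * np.log(dose.size))
    pred = np.polyval(coef, grid)
    # Toy BMD: smallest dose where the fitted mean rises 1.0 above background
    bmds.append(grid[np.argmax(pred >= pred[0] + 1.0)])

bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                            # approximate posterior weights
bma_bmd = float(np.dot(w, bmds))                        # model-averaged BMD
print(w, bmds, bma_bmd)
```

The averaged estimate necessarily lies between the individual models' BMDs and is weighted toward the better-supported model, which is the behavior the BMA approach exploits.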
The impact of a and b value uncertainty on loss estimation in the reinsurance industry
Directory of Open Access Journals (Sweden)
R. Streit
2000-06-01
Full Text Available In the reinsurance industry different probabilistic models are currently used for seismic risk analysis. A credible loss estimation of the insured values depends on seismic hazard analysis and on the vulnerability functions of the given structures. Besides attenuation and local soil amplification, the earthquake occurrence model (often represented by the Gutenberg and Richter relation) is a key element in the analysis. However, earthquake catalogues are usually incomplete, the time of observation is too short and the data themselves contain errors. Therefore, a and b values can only be estimated with uncertainties. The knowledge of their variation provides a valuable input for earthquake risk analysis, because it allows the probability distribution of expected losses (expressed by the Average Annual Loss (AAL)) to be modelled. The variations of a and b have a direct effect on the estimated exceeding probability and consequently on the calculated loss level. This effect is best illustrated by exceeding probability versus loss level and AAL versus magnitude graphs. The sensitivity of average annual losses due to different a to b ratios and magnitudes is obvious. The estimation of the variation of a and b and the quantification of the sensitivity of calculated losses are fundamental for optimal earthquake risk management. Ignoring these uncertainties means that risk management decisions neglect possible variations of the earthquake loss estimations.
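The sensitivity of exceedance rates to a and b can be made concrete with a short sketch. The parameter values and their uncertainties below are illustrative, not taken from any real catalogue:

```python
import numpy as np

rng = np.random.default_rng(3)

def annual_rate(mag, a, b):
    """Gutenberg-Richter annual rate of events with magnitude >= mag."""
    return 10.0 ** (a - b * mag)

# Point estimates and illustrative estimation uncertainties for a and b
a0, b0, sd_a, sd_b = 4.0, 1.0, 0.1, 0.05
rate0 = annual_rate(6.0, a0, b0)            # 0.01 events/yr -> 100-yr return period

samples = annual_rate(6.0, rng.normal(a0, sd_a, 10_000), rng.normal(b0, sd_b, 10_000))
lo, hi = np.percentile(samples, [5, 95])
print(rate0, lo, hi)
```

Even these modest a and b uncertainties spread the M >= 6 rate over roughly an order of magnitude, and that spread propagates directly to exceedance probabilities and the AAL distribution.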
Verification of uncertainty budgets
DEFF Research Database (Denmark)
Heydorn, Kaj; Madsen, B.S.
2005-01-01
The quality of analytical results is expressed by their uncertainty, as it is estimated on the basis of an uncertainty budget; little effort is, however, often spent on ascertaining the quality of the uncertainty budget. The uncertainty budget is based on circumstantial or historical data, and therefore it is essential that the applicability of the overall uncertainty budget to actual measurement results be verified on the basis of current experimental data. This should be carried out by replicate analysis of samples taken in accordance with the definition of the measurand, but representing the full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible, and dominant uncertainty components. Agreement between
D'Agostino, G.; Mana, G.; Oddone, M.; Prata, M.; Bergamaschi, L.; Giordani, L.
2014-06-01
We investigated the use of neutron activation to estimate the 30Si mole fraction of the ultra-pure silicon material highly enriched in 28Si for the measurement of the Avogadro constant. Specifically, we developed a relative method based on instrumental neutron activation analysis and using a natural-Si sample as a standard. To evaluate the achievable uncertainty, we irradiated a 6 g sample of a natural-Si material and modelled experimentally the signal that would be produced by a sample of the 28Si-enriched material of similar mass and subjected to the same measurement conditions. The extrapolation of the expected uncertainty from the experimental data indicates that a measurement of the 30Si mole fraction of the 28Si-enriched material might reach a 4% relative combined standard uncertainty.
Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach
Ballal, Tarig
2014-01-01
This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times lower. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is violated when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that a high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases, while in the high SNR regime it also outperforms the LMMSE estimator. © 2014 IEEE.
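The role of the co-prime condition can be checked in a few lines: the P sub-sampled observations land on distinct phases of the desired-rate grid exactly when the interpulse interval and P share no common factor (the function name is ours, for illustration):

```python
from math import gcd

def covered_phases(P, M):
    """Phases of the desired-rate sampling grid (mod P) hit when P pulses are
    spaced M desired-rate periods apart and the ADC samples P times slower."""
    return {(k * M) % P for k in range(P)}

print(covered_phases(5, 3))   # gcd(3, 5) = 1: all 5 phases recovered
print(covered_phases(6, 4))   # gcd(4, 6) = 2: only 3 of 6 phases, fidelity lost
```

In general only P / gcd(P, M) phases are covered, which is why clock drift that shifts the effective interval away from a co-prime value degrades the reconstruction and motivates the BDU-based estimator.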
Towards rapid uncertainty estimation in linear finite fault inversion with positivity constraints
Benavente, R. F.; Cummins, P. R.; Sambridge, M.; Dettmer, J.
2015-12-01
Rapid estimation of the slip distribution for large earthquakes can assist greatly during the early phases of emergency response. These estimates can be used for rapid impact assessment and tsunami early warning. While model parameter uncertainties can be crucial for meaningful interpretation of such slip models, they are often ignored. Since the finite fault problem can be posed as a linear inverse problem (via the multiple time window method), an analytic expression for the posterior covariance matrix can be obtained, in principle. However, positivity constraints are often employed in practice, which breaks the assumption of a Gaussian posterior probability density function (PDF). To our knowledge, two solutions to this issue exist in the literature: 1) not using positivity constraints (which may lead to exotic slip patterns), or 2) using positivity constraints but applying Bayesian sampling for the posterior. The latter is computationally expensive and currently unsuitable for rapid inversion. In this work, we explore an alternative approach in which we realize positivity by imposing a prior such that the logs of the subfault scalar moments are smoothly distributed on the fault surface. This makes each scalar moment intrinsically non-negative while the posterior PDF can still be approximated as Gaussian. While the inversion is no longer linear, we show that the most probable solution can be found by iterative methods which are less computationally expensive than numerical sampling of the posterior. In addition, the posterior covariance matrix (which provides uncertainties) can be estimated from the most probable solution, using an analytic expression for the Hessian of the cost function. We study this approach for both synthetic and observed W-phase data, and the results suggest that a first-order estimate of the uncertainty in the slip model can be obtained, aiding in the interpretation of the slip distribution estimate.
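The log-parameterization idea can be sketched on a toy linear problem. This is our own minimal stand-in, not the authors' W-phase inversion: data d = G exp(θ) + noise, an independent Gaussian prior on θ instead of a smoothing prior, a damped Gauss-Newton search for the MAP, and a Laplace (Gaussian) approximation of the posterior covariance from the Hessian:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_sub = 30, 5                        # observations, subfaults (toy sizes)
G = rng.uniform(0.1, 1.0, (n_obs, n_sub))   # hypothetical Green's functions
m_true = np.array([0.5, 1.0, 2.0, 1.0, 0.5])
sigma, tau = 0.05, 2.0                      # data noise std, prior std on log-moment
d = G @ m_true + rng.normal(0, sigma, n_obs)

theta = np.zeros(n_sub)                     # theta = log(subfault moment)
for _ in range(50):                         # damped Gauss-Newton for the MAP
    m = np.exp(theta)
    A = G * m                               # Jacobian of G @ exp(theta) w.r.t. theta
    r = d - G @ m
    grad = -A.T @ r / sigma**2 + theta / tau**2
    H = A.T @ A / sigma**2 + np.eye(n_sub) / tau**2
    theta -= 0.5 * np.linalg.solve(H, grad)

m_map = np.exp(theta)
cov = np.linalg.inv(H)                      # Laplace posterior covariance at the MAP
print(m_map, np.sqrt(np.diag(cov)))
```

Positivity is automatic (exp is always positive), the iteration stays far cheaper than posterior sampling, and the Hessian at the MAP yields first-order uncertainties, which is the trade-off the abstract describes.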
Influence of parameter estimation uncertainty in Kriging: Part 2 - Test and case study applications
Directory of Open Access Journals (Sweden)
E. Todini
2001-01-01
Full Text Available The theoretical approach introduced in Part 1 is applied to a numerical example and to the case of yearly average precipitation estimation over the Veneto Region in Italy. The proposed methodology was used to assess the effects of parameter estimation uncertainty on Kriging estimates and on their estimated error variance. The Maximum Likelihood (ML) estimator proposed in Part 1 was applied to the zero-mean deviations from yearly average precipitation over the Veneto Region in Italy, obtained after the elimination of a non-linear drift with elevation. Three different semi-variogram models were used, namely the exponential, the Gaussian and the modified spherical, and the relevant biases as well as the increases in variance have been assessed. A numerical example was also conducted to demonstrate how the procedure leads to unbiased estimates of the random functions. One hundred sets of 82 observations were generated by means of the exponential model on the basis of the parameter values identified for the Veneto Region rainfall problem and taken as characterising the true underlying process. The parameter values, and the consequent cross-validation errors, were estimated from each sample. The cross-validation errors were first computed in the classical way and then corrected with the procedure derived in Part 1. Both sets, original and corrected, were then tested, by means of the likelihood ratio test, against the null hypothesis of deriving from a zero-mean process with unknown covariance. The results of the experiment clearly show the effectiveness of the proposed approach. Keywords: yearly rainfall, maximum likelihood, Kriging, parameter estimation uncertainty
Theoretical and experimental estimates of the Peierls stress
CSIR Research Space (South Africa)
Nabarro, FRN
1997-03-01
Full Text Available an error of a factor of 2 in this exponent in Peierls's original estimate. A revised estimate by Huntington introduced a further factor of 2. Three experimental estimates are available, from the Bordoni peaks (which agrees with the Huntington theory), from...
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin in southeastern Brazil, was selected for the study. We used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic [Normal likelihood, r ~ N(0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach was adequate for the proposed objectives, and confirmed the importance of assessing the uncertainties associated with hydrological modeling.
Developing first time-series of land surface temperature from AATSR with uncertainty estimates
Ghent, Darren; Remedios, John
2013-04-01
Land surface temperature (LST) is the radiative skin temperature of the land, and is one of the key parameters in the physics of land-surface processes on regional and global scales. Earth Observation satellites provide the opportunity to obtain global coverage of LST approximately every 3 days or less. One such source of satellite-retrieved LST has been the Advanced Along-Track Scanning Radiometer (AATSR), with LST retrieval implemented in the AATSR Instrument Processing Facility in March 2004. Here we present the first regional and global time-series of LST data from AATSR with estimates of uncertainty. Mean changes in temperature over the last decade will be discussed along with regional patterns. Although time-series across all three ATSR missions have previously been constructed (Kogler et al., 2012), the use of low-resolution auxiliary data in the retrieval algorithm and non-optimal cloud masking resulted in time-series artefacts. As such, considerable ESA-supported development has been carried out on the AATSR data to address these concerns. This includes the integration of high-resolution auxiliary data into the retrieval algorithm and subsequent generation of coefficients and tuning parameters, plus the development of an improved cloud mask based on the simulation of clear-sky conditions from radiance transfer modelling (Ghent et al., in prep.). Any inference from this LST record is, however, of limited value without an accompanying uncertainty estimate; the Joint Committee for Guides in Metrology defines an uncertainty as "a parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand", the measurand being the value of the particular quantity to be measured. Furthermore, pixel-level uncertainty fields are a mandatory requirement in the ongoing preparation of the LST product for the upcoming Sea and Land Surface Temperature Radiometer (SLSTR) instrument on-board Sentinel-3
Till, John E.; Beck, Harold L.; Aanenson, Jill W.; Grogan, Helen A.; Mohler, H. Justin; Mohler, S. Shawn; Voillequé, Paul G.
2014-01-01
Methods were developed to calculate individual estimates of exposure and dose with associated uncertainties for a sub-cohort (1,857) of 115,329 military veterans who participated in at least one of seven series of atmospheric nuclear weapons tests or the TRINITY shot carried out by the United States. The tests were conducted at the Pacific Proving Grounds and the Nevada Test Site. Dose estimates to specific organs will be used in an epidemiological study to investigate leukemia and male breast cancer. Previous doses had been estimated for the purpose of compensation and were generally high-sided to favor the veteran's claim for compensation in accordance with public law. Recent efforts by the U.S. Department of Defense (DOD) to digitize the historical records supporting the veterans’ compensation assessments make it possible to calculate doses and associated uncertainties. Our approach builds upon available film badge dosimetry and other measurement data recorded at the time of the tests and incorporates detailed scenarios of exposure for each veteran based on personal, unit, and other available historical records. Film badge results were available for approximately 25% of the individuals, and these results assisted greatly in reconstructing doses to unbadged persons and in developing distributions of dose among military units. This article presents the methodology developed to estimate doses for selected cancer cases and a 1% random sample of the total cohort of veterans under study. PMID:24758578
Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.
2017-04-01
Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies are computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods when errors exhibit non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g., snowmelt- and rainfall-driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and
Directory of Open Access Journals (Sweden)
Kelly C. Chang
2017-11-01
Full Text Available The Comprehensive in vitro Proarrhythmia Assay (CiPA) is a global initiative intended to improve drug proarrhythmia risk assessment using a new paradigm of mechanistic assays. Under the CiPA paradigm, the relative risk of drug-induced Torsade de Pointes (TdP) is assessed using an in silico model of the human ventricular action potential (AP) that integrates in vitro pharmacology data from multiple ion channels. Thus, modeling predictions of cardiac risk liability will depend critically on the variability in pharmacology data, and uncertainty quantification (UQ) must comprise an essential component of the in silico assay. This study explores UQ methods that may be incorporated into the CiPA framework. Recently, we proposed a promising in silico TdP risk metric (qNet), which is derived from AP simulations and allows separation of a set of CiPA training compounds into Low, Intermediate, and High TdP risk categories. The purpose of this study was to use UQ to evaluate the robustness of TdP risk separation by qNet. Uncertainty in the model parameters used to describe drug binding and ionic current block was estimated using the non-parametric bootstrap method and a Bayesian inference approach. Uncertainty was then propagated through AP simulations to quantify uncertainty in qNet for each drug. UQ revealed lower uncertainty and more accurate TdP risk stratification by qNet when simulations were run at concentrations below 5× the maximum therapeutic exposure (Cmax). However, when drug effects were extrapolated above 10× Cmax, UQ showed that qNet could no longer clearly separate drugs by TdP risk. This was because for most of the pharmacology data, the amount of current block measured was <60%, preventing reliable estimation of IC50 values. The results of this study demonstrate that the accuracy of TdP risk prediction depends both on the intrinsic variability in ion channel pharmacology data as well as on experimental design considerations that preclude an
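The non-parametric bootstrap step the abstract describes can be sketched generically: resample the measured data with replacement, recompute a derived metric each time, and read off a percentile interval. The data and metric below are synthetic stand-ins (the study itself fits drug-binding parameters and derives qNet from full AP simulations), so this is only an illustration of the resampling idea.

```python
import random
import statistics

random.seed(0)

# Hypothetical fractional-block measurements for one drug at one
# concentration (synthetic values, not CiPA patch-clamp data).
block = [0.42, 0.38, 0.45, 0.40, 0.36, 0.44, 0.41, 0.39]

def metric(samples):
    # Stand-in for a derived quantity computed from the data
    # (the study derives qNet from AP simulations, not a plain mean).
    return statistics.mean(samples)

# Non-parametric bootstrap: resample with replacement, recompute the metric
boot = sorted(
    metric([random.choice(block) for _ in block]) for _ in range(2000)
)
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(f"metric = {metric(block):.3f}, 95% bootstrap interval = [{lo:.3f}, {hi:.3f}]")
```

The percentile interval widens or narrows with the spread of the underlying measurements, which is exactly the dependence on pharmacology-data variability the study quantifies.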
Cost benchmarking of railway projects in Europe – dealing with uncertainties in cost estimates
DEFF Research Database (Denmark)
Trabo, Inara
Past experiences in the construction of high-speed railway projects demonstrate either positive or negative financial outcomes of the actual project’s budget. Usually some uncertainty value is included in initial budget calculations. Uncertainty is related to the increase of material prices......, difficulties during construction, financial difficulties of the company or mistakes in project initial budget estimation, etc. Such factors may influence the actual budget values and cause budget overruns. According to the research conducted by Prof. B. Flyvbjerg, related to investigation of budget in large......%, later on it was investigated that initial calculations and passenger forecasts were deliberately overestimated in order to get financial support from the government and perform this project. Apart from bad experiences there are also many projects with positive financial outcomes, e.g. French, Dutch...
Cede, Alexander; Luccini, Eduardo; Nuñez, Liliana; Piacentini, Rubén D; Blumthaler, Mario
2002-10-20
The erythemal radiometers of the Ultraviolet Monitoring Network of the Argentine Servicio Meteorológico Nacional were calibrated in an extensive in situ campaign from October 1998 to April 1999 with Austrian reference instruments. Methods to correct the influence of the location's horizon and long-term detector changes are applied. The different terms that contribute to the measurement uncertainty are analyzed. The expanded uncertainty is estimated to be +/- 10% at 70 degrees solar zenith angle (SZA) and +/-6% for a SZA of <50 degrees. We observed significant changes for some detectors over hours and days, reaching a maximum diurnal drift of +/-5% at a SZA of 70 degrees and a maximum weekly variation of +/-4%.
Faris, A M; Wang, H-H; Tarone, A M; Grant, W E
2016-05-31
Estimates of insect age can be informative in death investigations and, when certain assumptions are met, can be useful for estimating the postmortem interval (PMI). Currently, the accuracy and precision of PMI estimates are unknown, as error can arise from sources of variation such as measurement error, environmental variation, or genetic variation. Ecological models are an abstract, mathematical representation of an ecological system that can make predictions about the dynamics of the real system. To quantify the variation associated with the pre-appearance interval (PAI), we developed an ecological model that simulates the colonization of vertebrate remains by Cochliomyia macellaria (Fabricius) (Diptera: Calliphoridae), a primary colonizer in the southern United States. The model is based on a development data set derived from a local population and represents the uncertainty in local temperature variability to address PMI estimates at local sites. After a PMI estimate is calculated for each individual, the model calculates the maximum, minimum, and mean PMI, as well as the range and standard deviation for stadia collected. The model framework presented here is one manner by which errors in PMI estimates can be addressed in court when no empirical data are available for the parameter of interest. We show that PAI is a potentially important source of error and that an ecological model is one way to evaluate its impact. Such models can be re-parameterized with any development data set, PAI function, temperature regime, assumption of interest, etc., to estimate PMI and quantify uncertainty that arises from specific prediction systems. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Varouchakis, Emmanouil; Hristopulos, Dionissios
2015-04-01
Space-time geostatistical approaches can improve the reliability of dynamic groundwater level models in areas with limited spatial and temporal data. Space-time residual Kriging (STRK) is a reliable method for spatiotemporal interpolation that can incorporate auxiliary information. The method usually leads to an underestimation of the prediction uncertainty. The uncertainty of spatiotemporal models is usually estimated by determining the space-time Kriging variance or by means of cross validation analysis. For de-trended data the former is not usually applied when complex spatiotemporal trend functions are assigned. A Bayesian approach based on the bootstrap idea and sequential Gaussian simulation are employed to determine the uncertainty of the spatiotemporal model (trend and covariance) parameters. These stochastic modelling approaches produce multiple realizations, rank the prediction results on the basis of specified criteria and capture the range of the uncertainty. The correlation of the spatiotemporal residuals is modeled using a non-separable space-time variogram based on the Spartan covariance family (Hristopulos and Elogne 2007, Varouchakis and Hristopulos 2013). We apply these simulation methods to investigate the uncertainty of groundwater level variations. The available dataset consists of bi-annual (dry and wet hydrological period) groundwater level measurements in 15 monitoring locations for the time period 1981 to 2010. The space-time trend function is approximated using a physical law that governs the groundwater flow in the aquifer in the presence of pumping. The main objective of this research is to compare the performance of two simulation methods for prediction uncertainty estimation. In addition, we investigate the performance of the Spartan spatiotemporal covariance function for spatiotemporal geostatistical analysis. Hristopulos, D.T. and Elogne, S.N. 2007. Analytic properties and covariance functions for a new class of generalized Gibbs
Allan, Richard; Liu, Chunlei
2017-04-01
The net surface energy flux is central to the climate system, yet observational limitations lead to substantial uncertainty (Trenberth and Fasullo, 2013; Roberts et al., 2016). A combination of satellite-derived radiative fluxes at the top of atmosphere (TOA), adjusted using the latest estimation of the net heat uptake of the Earth system, and the atmospheric energy tendencies and transports from the ERA-Interim reanalysis are used to estimate surface energy flux globally (Liu et al., 2015). Land surface fluxes are adjusted through a simple energy balance approach using relations at each grid point with the consideration of snowmelt to improve regional realism. The energy adjustment is redistributed over the oceans using a weighting function to avoid meridional discontinuities. Uncertainties in surface fluxes are investigated using a variety of approaches including comparison with a range of atmospheric reanalysis input data and products. Zonal multiannual mean surface flux uncertainty is estimated to be less than 5 Wm-2, but much larger uncertainty is likely for regional monthly values. The meridional energy transport is calculated using the net surface heat fluxes estimated in this study, and the result shows better agreement with observations in the Atlantic than before. The derived turbulent fluxes (difference between the net heat flux and the CERES EBAF radiative flux at the surface) also agree well with those from the OAFLUX dataset and buoy observations. Decadal changes in the global energy budget and the hemisphere energy imbalances are quantified, and the present-day cross-equator heat transport is re-evaluated as 0.22±0.15 PW southward by the atmosphere and 0.32±0.16 PW northward by the ocean, considering the observed ocean heat sinks (Roemmich et al., 2006). Liu et al. (2015) Combining satellite observations and reanalysis energy transports to estimate global net surface energy fluxes 1985-2012. J. Geophys. Res., Atmospheres. ISSN 2169-8996 doi: 10.1002/2015JD
Hauge, I H R; Olerud, H M
2013-06-01
The aim of this study was to reflect on the estimation of the mean glandular dose for women in Norway aged 50-69 y. Estimation of mean glandular dose (MGD) has been conducted by applying the method of Dance et al. (1990, 2000, 2009). Uncertainties in the thickness of approximately ±10 mm add uncertainties in the MGD of approximately ±10 %, and uncertainty in the glandularity of ±0 % will lead to an uncertainty in the MGD of ±4 %. However, the inherent uncertainty in the air kerma, given by the European protocol on dosimetry, will add an uncertainty of 12 %. The total uncertainty in the MGD is estimated to be ∼20 %, taking into consideration uncertainties in compressed breast thickness (±10 %), the air kerma (12 %), change in HVL by -0.05 mm (-9.0 %), uncertainty in the s-factor of ±2.1 % and changing the glandularity to an age-dependent glandularity distribution (+8.4 %).
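The ∼20 % total quoted above is consistent with a root-sum-square (quadrature) combination of the listed components, which assumes the components are independent (the abstract does not state the combination rule, so this is a plausible check rather than the authors' exact procedure):

```python
import math

# Relative uncertainty components (%) quoted in the abstract for the MGD
# estimate.
components = {
    "compressed breast thickness": 10.0,
    "air kerma": 12.0,
    "half-value layer (-0.05 mm)": 9.0,
    "s-factor": 2.1,
    "glandularity distribution": 8.4,
}

# Root-sum-square (quadrature) combination, assuming independent components
total = math.sqrt(sum(u**2 for u in components.values()))
print(f"combined relative uncertainty ~ {total:.1f} %")  # ~20.0 %
```

The quadrature sum reproduces the ∼20 % figure, with the air kerma and breast-thickness terms dominating the budget.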
Ellipsoidal estimates of reachable sets of impulsive control problems under uncertainty
Matviychuk, O. G.
2017-10-01
For impulsive control systems with uncertainty in initial states and in the system parameters the problem of estimating reachable sets is studied. The initial states are taken to be unknown but bounded with given bounds. The matrix included in the differential equations of the system dynamics is also uncertain, and only bounds on the admissible values of its coefficients are known. The problem is considered under an additional constraint on the system states: it is assumed that the system states should belong to a given ellipsoid in the state space. We present here state estimation algorithms that use the special structure of the bilinear impulsive control system and take into account the additional restrictions on states and controls. The algorithms are based on ellipsoidal techniques for estimating the trajectory tubes of uncertain dynamical systems.
Estimating uncertainty of the WMO mole fraction scale for carbon dioxide in air
Zhao, Cong Long; Tans, Pieter P.
2006-04-01
The current WMO CO2 Mole Fraction Scale consists of a set of 15 CO2-in-air primary standard calibration gases ranging in CO2 mole fraction from 250 to 520 μmol mol-1. Since the WMO CO2 Expert Group transferred responsibility for maintaining the WMO Scale from the Scripps Institution of Oceanography (SIO) to the Climate Monitoring and Diagnostics Laboratory (CMDL) in 1995, the 15 WMO primary standards have been calibrated, first at SIO and then at regular intervals of 1 to 2 years, by the CMDL manometric system. The uncertainty of the 15 primary standards was estimated to be 0.069 μmol mol-1 (one-sigma) in the absolute sense. Manometric calibration results indicate that there is no evidence of overall drift of the Primaries from 1996 to 2004. In order to lengthen the useful life of the Primary standards, CMDL has always transferred the scale via NDIR analyzers to the secondary standards. The uncertainties arising from the analyzer random error and the propagation error due to the uncertainty of the reference gas mole fraction are discussed. Precision of NDIR transfer calibrations was about 0.014 μmol mol-1 from 1979 to present. Propagation of the uncertainty was calculated theoretically. In the case of interpolation, the propagation error was estimated to be between 0.06 and 0.07 μmol mol-1 when the Primaries were used as the reference gases via NDIR transfer calibrations. The CMDL secondary standard calibrations are transferred via NDIR analyzers to the working standards, which are used routinely for measuring atmospheric CO2 mole fraction in the WMO Global Atmosphere Watch monitoring program. The uncertainty of the working standards was estimated to be 0.071 μmol mol-1 in the one-sigma absolute scale. Consistency among the working standards is determined by the random errors of downward transfer calibrations at each level and is about 0.02 μmol mol-1. For comparison with an independent absolute scale, the five gravimetric standards from the National
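The downward transfer of the scale can be sketched as a quadrature propagation: each NDIR transfer step adds its random error to the reference uncertainty. This is a simplified check using the figures quoted in the abstract; the report's full budget also includes the interpolation propagation error, so its numbers differ slightly.

```python
import math

u_primary = 0.069   # umol/mol, one-sigma uncertainty of the 15 primaries
u_transfer = 0.014  # umol/mol, NDIR transfer precision per calibration step

def propagate(u_ref, u_step, n_steps):
    # Root-sum-square combination, assuming each transfer step adds
    # independent random error to the reference uncertainty.
    return math.sqrt(u_ref**2 + n_steps * u_step**2)

for n in (1, 2):
    print(f"after {n} transfer step(s): {propagate(u_primary, u_transfer, n):.3f} umol/mol")
```

Two transfer steps (primaries → secondaries → working standards) land near the 0.071 μmol mol-1 quoted for the working standards, showing that the total is dominated by the manometric uncertainty of the primaries.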
Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja
2017-03-01
Quantitative photoacoustic tomography seeks to estimate the optical parameters of a target given photoacoustic measurements as data. Conventionally the problem is split into two steps: 1) the acoustical inverse problem of estimating the acoustic initial pressure distribution from the acoustical time series data; 2) the optical inverse problem of estimating the optical absorption and scattering from the initial pressure distributions. In this work, an approach for estimating the optical absorption and scattering directly from the acoustical time series is investigated with simulations. The work combines a homogeneous acoustical forward model, based on the Green's function solution of the wave equation, and a finite element method based diffusion approximation model of light propagation into a single forward model. This model maps the optical parameters of interest into a time domain signal. The model is used with a Bayesian approach to ill-posed inverse problems to form estimates of the posterior distributions for the parameters of interest. In addition to being able to provide point estimates of the parameters of interest, i.e. reconstruct the absorption and scattering distributions, the approach can be used to derive information on the uncertainty associated with the estimates.
Hartmann, A. J.
2016-12-01
Heterogeneity is an intrinsic property of karst systems. It results in complex hydrological behavior that is characterized by an interplay of diffuse and concentrated flow and transport. In large-scale hydrological models, these processes are usually not considered. Instead, average or representative values are chosen for each of the simulated grid cells, omitting many aspects of their sub-grid variability. In karst regions, this may lead to unreliable predictions when those models are used for assessing future water resources availability, floods or droughts, or when they are used for recommendations for more sustainable water management. In this contribution I present a large-scale groundwater recharge model (0.25° x 0.25° resolution) that takes into account karst hydrological processes by using statistical distribution functions to express subsurface heterogeneity. The model is applied over Europe's and the Mediterranean's carbonate rock regions (~25% of the total area). As no measurements of the variability of subsurface properties are available at this scale, a parameter estimation procedure, which uses latent heat flux and soil moisture observations and quantifies the remaining uncertainty, was applied. The model is evaluated by sensitivity analysis, by comparison to other large-scale models without karst processes included, and against independent recharge observations. Using historic data (2002-2012), I show that recharge rates vary strongly over Europe and the Mediterranean. In regions with little information for parameter estimation there is larger prediction uncertainty (for instance in desert regions). Evaluation with independent recharge estimates shows that, on average, the model provides acceptable estimates, while the other large-scale models underestimate karstic recharge. The results of the sensitivity analysis corroborate the importance of including karst heterogeneity in the model, as the distribution shape factor is the most sensitive parameter for
Uncertainties in neural network model based on carbon dioxide concentration for occupancy estimation
Energy Technology Data Exchange (ETDEWEB)
Alam, Azimil Gani; Rahman, Haolia; Kim, Jung-Kyung; Han, Hwataik [Kookmin University, Seoul (Korea, Republic of)
2017-05-15
Demand control ventilation is employed to save energy by adjusting airflow rate according to the ventilation load of a building. This paper investigates a method for occupancy estimation by using a dynamic neural network model based on carbon dioxide concentration in an occupied zone. The method can be applied to most commercial and residential buildings where human effluents need to be ventilated. An indoor simulation program, CONTAMW, is used to generate indoor CO{sub 2} data corresponding to various occupancy schedules and airflow patterns to train neural network models. Coefficients of variation are obtained depending on the complexities of the physical parameters as well as the system parameters of neural networks, such as the numbers of hidden neurons and tapped delay lines. We intend to identify the uncertainties caused by the model parameters themselves, by excluding uncertainties in input data inherent in measurement. Our results show that estimation accuracy is highly influenced by the frequency of occupancy variation but not significantly influenced by fluctuation in the airflow rate. Furthermore, we discuss the applicability and validity of the present method based on passive environmental conditions for estimating occupancy in a room from the viewpoint of demand control ventilation applications.
Valle, Denis
2011-06-01
Biomass is a fundamental measure in the natural sciences, and numerous models have been developed to forecast timber and fishery yields, forest carbon content, and other environmental services that depend on biomass estimates. We derive general results that reveal how dynamic models that simulate growth as an increase in a linear measure of size (e.g., diameter, length, height) result in biased estimates of future mean biomass when uncertainty in growth is misrepresented. Our case study shows how models of tree growth that predict the same mean diameter increment, but with alternative representations of growth uncertainty, result in almost a threefold difference in the projections of future mean tree biomass after a 20-yr simulation. These results have important implications concerning our ability to accurately predict future biomass and all the related environmental services (e.g., forest carbon content, timber and fishery yields). If the objective is to predict future biomass, we strongly recommend that: (1) ecological modelers should choose a growth model based on a variable more linearly related to biomass (e.g., tree basal area instead of tree diameter for forest models); (2) if field measurements preclude the use of variables other than the linear measure of size, both the mean and other statistical moments (e.g., covariances) should be carefully modeled; (3) careful assessment be done on models that aggregate similar individuals (i.e., cohort models) to see if neglecting autocorrelated growth from individuals leads to biased estimates of future mean biomass.
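The bias described above follows from Jensen's inequality: biomass is a convex function of diameter, so two growth models with the same mean increment but different spread yield different mean biomass. A minimal numerical illustration (all numbers are made up, not from the paper):

```python
# Two diameter-growth models with the same mean increment but different
# spread; because allometric biomass ~ a * d**b is convex in d for b > 1,
# the higher-variance model predicts a larger mean biomass (Jensen's
# inequality). All values below are illustrative.

d0 = 20.0  # initial diameter, cm

def mean_biomass(increments, probs, a=0.05, b=2.5):
    # biomass averaged over a discrete distribution of growth increments
    return sum(p * a * (d0 + g)**b for g, p in zip(increments, probs))

low_var = mean_biomass([0.4, 0.5, 0.6], [1/3, 1/3, 1/3])
high_var = mean_biomass([0.1, 0.5, 0.9], [1/3, 1/3, 1/3])
print(f"low-variance growth:  {low_var:.2f}")
print(f"high-variance growth: {high_var:.2f}")  # larger, despite same mean increment
```

Both distributions have a mean increment of 0.5 cm, yet the predicted mean biomass differs, which is the mechanism behind the threefold divergence the case study reports after 20 years of simulation.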
Directory of Open Access Journals (Sweden)
Ali P. Yunus
2016-04-01
Full Text Available Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and accuracy of topographic data. Here, the areas under risk of sea water flooding for London boroughs were quantified based on the projected SLR scenarios reported in the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5) and the UK Climate Projections 2009 (UKCP09), using a tidally-adjusted bathtub modelling approach. Medium- to very high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root mean square error differences observed between the two data types, which may be attributed to processing levels. Flood models from SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and uncertainties in the DEM-based bathtub type flood inundation modelling for London boroughs.
Seidel, Dian J.; Ao, Chi O.; Li, Kun
2010-08-01
Planetary boundary layer (PBL) processes control energy, water, and pollutant exchanges between the surface and free atmosphere. However, there is no observation-based global PBL climatology for evaluation of climate, weather, and air quality models or for characterizing PBL variability on large space and time scales. As groundwork for such a climatology, we compute PBL height by seven methods, using temperature, potential temperature, virtual potential temperature, relative humidity, specific humidity, and refractivity profiles from a 10-year, 505-station radiosonde data set. Six methods are directly compared; they generally yield PBL height estimates that differ by several hundred meters. Relative humidity and potential temperature gradient methods consistently give higher PBL heights, whereas the parcel (or mixing height) method yields significantly lower heights that show larger and more consistent diurnal and seasonal variations (with lower nighttime and wintertime PBLs). Seasonal and diurnal patterns are sometimes associated with local climatological phenomena, such as nighttime radiation inversions, the trade inversion, and tropical convection and associated cloudiness. Surface-based temperature inversions are a distinct type of PBL that is more common at night and in the morning than during midday and afternoon, in polar regions than in the tropics, and in winter than other seasons. PBL height estimates are sensitive to the vertical resolution of radiosonde data; standard sounding data yield higher PBL heights than high-resolution data. Several sources of both parametric and structural uncertainty in climatological PBL height values are estimated statistically; each can introduce uncertainties of a few hundred meters.
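The parcel (mixing-height) method mentioned above is simple to state: take the surface parcel's potential temperature and find the lowest level where the profile exceeds it. A sketch on an invented profile (the study uses real radiosonde soundings, and its exact interpolation details may differ):

```python
# Parcel (mixing-height) method: the PBL top is taken as the lowest level
# whose potential temperature exceeds the surface parcel's value.
# The profile below is illustrative, not radiosonde data.

heights = [10, 250, 500, 750, 1000, 1250, 1500]              # m
theta = [300.0, 299.8, 299.9, 300.2, 301.0, 302.5, 304.0]    # K

def parcel_pbl_height(z, th):
    surface = th[0]
    for zi, ti in zip(z[1:], th[1:]):
        if ti > surface:
            return zi
    return z[-1]

print(parcel_pbl_height(heights, theta))  # 750
```

Because the method anchors on the surface value, it is sensitive to near-surface stability, which is consistent with the lower nighttime and wintertime heights the study reports for this method.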
Data and model uncertainties in flood-frequency estimation for an urban Swedish catchment
Westerberg, Ida; Persson, Tony
2013-04-01
Floods are extreme events that occur seldom, which means that there are relatively few data on weather and flow conditions during flooding episodes for characterisation of flood frequency. In addition, there are often practical difficulties associated with the measurement of discharge during floods. In this study we used a combination of monitoring and modelling to overcome the lack of reliable discharge data and be able to characterise the flooding problems in the highly urbanised Riseberga Creek catchment in eastern Malmö, Sweden. The study is part of a project, GreenClimeAdapt, in which local stakeholders and researchers work on finding and demonstrating solutions to the flooding problems in the catchment. A high-resolution acoustic Doppler discharge gauge was installed in the creek and a hydrologic model was set up to extend this short record for estimation of flood frequency. Discharge uncertainty was estimated based on a stage-discharge analysis and accounted for in model calibration together with uncertainties in the model parameterisation. The model was first used to study the flow variability during the 16 years with available climate input data. Then it was driven with long-term climate realisations from a statistical weather generator to estimate flood frequency for the present climate and for future climate changes through continuous simulation. The uncertainty in the modelled flood frequency for the present climate was found to be important, and could partly be reduced in the future using longer monitoring records containing more and larger flood episodes. The climate change scenarios are mainly useful for sensitivity analysis of different adaptation measures that can be taken to reduce the flooding problems.
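In continuous-simulation flood-frequency analysis, the simulated record is reduced to annual maxima and assigned empirical return periods. A minimal sketch using the Weibull plotting position (the study's actual frequency analysis and its uncertainty treatment are more involved; the flow values here are synthetic):

```python
# Empirical return periods from a simulated annual-maximum series,
# using the Weibull plotting position T = (n + 1) / rank.
# The flow values are synthetic.

annual_max = [4.1, 2.7, 6.3, 3.5, 5.0, 2.9, 7.8, 3.2, 4.6, 5.5]  # m3/s

ranked = sorted(annual_max, reverse=True)
n = len(ranked)
for rank, q in enumerate(ranked, start=1):
    print(f"Q = {q:4.1f} m3/s  return period ~ {(n + 1) / rank:5.1f} yr")
```

With only n simulated years, the largest event can be assigned a return period of at most n + 1 years, which is why long weather-generator realisations are needed to estimate rarer floods.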
Energy Technology Data Exchange (ETDEWEB)
Turinsky, Paul J [North Carolina State Univ., Raleigh, NC (United States); Abdel-Khalik, Hany S [North Carolina State Univ., Raleigh, NC (United States); Stover, Tracy E [North Carolina State Univ., Raleigh, NC (United States)
2011-03-01
An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because with improved data the reactor concept can be re-optimized itself. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment
Directory of Open Access Journals (Sweden)
Lash Timothy L
2007-11-01
Full Text Available Abstract Background The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. Methods For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. Results The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Conclusion Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a
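The draw-and-adjust loop described in the Methods can be sketched generically: sample a bias parameter from its assigned distribution, divide it out of the conventional estimate, and summarize the resulting frequency distribution. The conventional hazard ratio (2.6) is from the abstract, but the bias-factor distribution below is hypothetical, chosen only to illustrate the mechanics.

```python
import math
import random

random.seed(1)

# Probabilistic bias-analysis sketch: draw a multiplicative bias factor,
# adjust the conventional estimate, repeat, and read off the simulation
# interval. The lognormal bias distribution here is hypothetical.

hr_conventional = 2.6
adjusted = sorted(
    hr_conventional / math.exp(random.gauss(math.log(1.7), 0.5))
    for _ in range(5000)
)
median = adjusted[len(adjusted) // 2]
lo = adjusted[int(0.025 * len(adjusted))]
hi = adjusted[int(0.975 * len(adjusted))]
print(f"median adjusted HR = {median:.1f}, 95% simulation interval = [{lo:.1f}, {hi:.1f}]")
```

A bias distribution centered above 1 pulls the median toward the null while the sampling of the bias parameter widens the interval, which is the qualitative pattern the study reports (median 1.5, interval 66% wider than the conventional one).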
Validation and Uncertainty Estimates for MODIS Collection 6 "Deep Blue" Aerosol Data
Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Jeong, M.-J.
2013-01-01
The "Deep Blue" aerosol optical depth (AOD) retrieval algorithm was introduced in Collection 5 of the Moderate Resolution Imaging Spectroradiometer (MODIS) product suite, and complemented the existing "Dark Target" land and ocean algorithms by retrieving AOD over bright arid land surfaces, such as deserts. The forthcoming Collection 6 of MODIS products will include a "second generation" Deep Blue algorithm, expanding coverage to all cloud-free and snow-free land surfaces. The Deep Blue dataset will also provide an estimate of the absolute uncertainty on AOD at 550 nm for each retrieval. This study describes the validation of Deep Blue Collection 6 AOD at 550 nm (Tau(sub M)) from MODIS Aqua against Aerosol Robotic Network (AERONET) data from 60 sites to quantify these uncertainties. The highest quality (denoted quality assurance flag value 3) data are shown to have an absolute uncertainty of approximately (0.086+0.56Tau(sub M))/AMF, where AMF is the geometric air mass factor. For a typical AMF of 2.8, this is approximately 0.03+0.20Tau(sub M), comparable in quality to other satellite AOD datasets. Regional variability of retrieval performance and comparisons against Collection 5 results are also discussed.
Estimation of the Fuel Depletion Code Bias and Uncertainty in Burnup-Credit Criticality Analysis
Energy Technology Data Exchange (ETDEWEB)
Kim, Jong Woon; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Lee, Sang Jin; Bae, Chang Yeal [Nuclear Environment Technology Institute, Taejon (Korea, Republic of)
2006-07-01
In the past, criticality safety analyses for commercial light-water-reactor (LWR) spent nuclear fuel (SNF) storage and transportation canisters assumed the spent fuel to be fresh (unirradiated) fuel with uniform isotopic compositions. This fresh-fuel assumption provides a well-defined, bounding approach to the criticality safety analysis that eliminates concerns related to the fuel operating history, and thus considerably simplifies the safety analysis. However, because this assumption ignores the inherent decrease in reactivity as a result of irradiation, it is very conservative. The concept of taking credit for the reduction in reactivity due to fuel burnup is commonly referred to as burnup credit. Implementation of burnup credit requires the computational prediction of the nuclide inventories (compositions) for the dominant fissile and absorbing nuclide species in spent fuel. In addition, the bias and uncertainty in the predicted concentrations of all nuclides used in the analysis must be established by comparisons of calculated and measured radiochemical assay data. In this paper, three methods for considering the bias and uncertainty are reviewed, and the estimated bias and uncertainty obtained with the third method are presented.
Hyslop, Nicole P.; White, Warren H.
The Interagency Monitoring of Protected Visual Environments (IMPROVE) program is a cooperative measurement effort in the United States designed to characterize current visibility and aerosol conditions in scenic areas (primarily National Parks and Forests) and to identify chemical species and emission sources responsible for existing man-made visibility impairment. In 2003 and 2004, the IMPROVE network began operating collocated samplers at several sites to evaluate the precision of its aerosol measurements. This paper presents the precisions calculated from the collocated data according to the United States Environmental Protection Agency's guidelines [CFR, 1997. Revised requirements for designation of reference and equivalent methods for PM 2.5 and ambient air quality surveillance for particulate matter: final rule. Code of Federal Regulations, Part IV: Environmental Protection Agency, vol. 40 CFR Parts 53 and 58, pp. 71-72]. These values range from 4% for sulfate to 115% for the third elemental carbon fraction. Collocated precision tends to improve with increasing detection rates, is typically better when the analysis is performed on the whole filter instead of just a fraction of the filter, and is better for species that are predominantly in the smaller size fractions. The collocated precisions are also used to evaluate the accuracy of the uncertainty estimates that are routinely reported with the concentrations. For most species, the collocated precisions are worse than the precisions predicted by the reported uncertainties. These discrepancies suggest that some sources of uncertainty are not accounted for or have been underestimated.
Uncertainty estimation with bias-correction for flow series based on rating curve
Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta
2014-03-01
Streamflow discharge constitutes one of the fundamental data required to perform water balance studies and develop hydrological models. A rating curve, designed based on a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series with a reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available even though it has a significant impact on hydrological modelling. In this paper, we identify the discrepancy of the hydrographers' rating curves used to derive the historical discharge data series and propose a modification by bias correction, which, like the traditional rating curve, also takes the form of a power function. In order to obtain the uncertainty estimation, we further propose a Box-Cox transformation applied to both sides of the regression to bring the residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to gauging stations on the Flinders and Gilbert rivers in north-west Queensland, Australia.
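A minimal sketch of the rating-curve idea described above. The log transform used here is the λ = 0 limit of the Box-Cox family, and all stage-discharge values are synthetic; the paper's actual bias-correction and transformation steps are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stage-discharge pairs around a power-law rating curve
# Q = a * (h - h0)^b, with multiplicative noise (illustrative values only)
a_true, b_true, h0 = 5.0, 1.8, 0.2
stage = np.linspace(0.5, 3.0, 40)
discharge = a_true * (stage - h0) ** b_true * rng.lognormal(0.0, 0.05, stage.size)

# Fit the power law by linear regression in log space:
# log Q = log a + b * log(h - h0)
x = np.log(stage - h0)
y = np.log(discharge)
b_hat, log_a_hat = np.polyfit(x, y, 1)
resid = y - (log_a_hat + b_hat * x)
sigma = resid.std(ddof=2)  # residual spread in transformed space

# Ensemble of discharge series: resample residuals in log space to attach
# an uncertainty band to the whole rated series
ensemble = np.exp(log_a_hat + b_hat * x + rng.normal(0.0, sigma, (500, x.size)))
lower, upper = np.percentile(ensemble, [2.5, 97.5], axis=0)
```

The fitted exponent recovers the true value closely because the synthetic noise is small; with real gaugings the residual structure motivates the transformation in the first place.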
Directory of Open Access Journals (Sweden)
Tobias Pardowitz
2016-10-01
Full Text Available A simple method is presented, designed to assess uncertainties from dynamical downscaling of regional high impact weather. The approach makes use of the fact that the choice of the simulation domain for the regional model is to a certain degree arbitrary. Thus, a small ensemble of equally valid simulations can be produced from the same driving model output by shifting the domain by a few grid cells. Applying the approach to extra-tropical storm systems, the regional simulations differ with respect to the exact location and severity of extreme wind speeds. Based on an integrated storm severity measure, the individual ensemble members are found to vary by more than 25 % from the ensemble mean in the majority of episodes considered. Estimates of insured losses based on individual regional simulations and integrated over Germany even differ by more than 50 % from the ensemble mean in most cases. Based on a set of intense storm episodes, a quantification of winter storm losses under recent and future climate is made. Using this domain shift ensemble approach, uncertainty ranges are derived representing the uncertainty inherent in the downscaling method used.
Cost implications of uncertainty in CO2 storage resource estimates: A review
Anderson, Steven T.
2017-01-01
Carbon capture from stationary sources and geologic storage of carbon dioxide (CO2) is an important option to include in strategies to mitigate greenhouse gas emissions. However, the potential costs of commercial-scale CO2 storage are not well constrained, stemming from the inherent uncertainty in storage resource estimates coupled with a lack of detailed estimates of the infrastructure needed to access those resources. Storage resource estimates are highly dependent on storage efficiency values or storage coefficients, which are calculated based on ranges of uncertain geological and physical reservoir parameters. If dynamic factors (such as variability in storage efficiencies, pressure interference, and acceptable injection rates over time), reservoir pressure limitations, boundaries on migration of CO2, consideration of closed or semi-closed saline reservoir systems, and other possible constraints on the technically accessible CO2 storage resource (TASR) are accounted for, it is likely that only a fraction of the TASR could be available without incurring significant additional costs. Although storage resource estimates typically assume that any issues with pressure buildup due to CO2 injection will be mitigated by reservoir pressure management, estimates of the costs of CO2 storage generally do not include the costs of active pressure management. Production of saline waters (brines) could be essential to increasing the dynamic storage capacity of most reservoirs, but including the costs of this critical method of reservoir pressure management could increase current estimates of the costs of CO2 storage by two times, or more. Even without considering the implications for reservoir pressure management, geologic uncertainty can significantly impact CO2 storage capacities and costs, and contribute to uncertainty in carbon capture and storage (CCS) systems. Given the current state of available information and the scarcity of (data from) long-term commercial-scale CO2
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga
2014-05-01
Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most of the modelling studies in operational hydrology. There are several uncertainty analysis and/or prediction methods available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals that arise in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for the 50% and 90% confidence levels) with those obtained from uniform interval and quantile regression methods. An important part of the basis by which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality - the weighted mean prediction interval (WMPI) - is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values in addition to discharge and effective rainfall are considered. The results show that UNEEC's notable methodological characteristic, i.e. applying clustering to predictor data in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
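Two of the validation measures named above, PICP (prediction interval coverage probability) and MPI (mean prediction interval width), are straightforward to compute; a small sketch with made-up observations and bounds:

```python
import numpy as np

def picp(obs, lower, upper):
    """Prediction Interval Coverage Probability: the fraction of observations
    falling inside their [lower, upper] prediction bounds."""
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    return np.mean((obs >= lower) & (obs <= upper))

def mpi(lower, upper):
    """Mean Prediction Interval width."""
    return float(np.mean(np.asarray(upper) - np.asarray(lower)))

# Toy example: five observations, one falling outside its interval
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
lo = obs - 0.5
hi = obs + 0.5
hi[2] = 2.9  # third observation now lies above its upper bound
print(picp(obs, lo, hi))  # 0.8
```

A well-calibrated 90% interval should give a PICP near 0.9 while keeping the MPI as small as possible; the trade-off between the two is what the comparison in the abstract is assessing.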
Directory of Open Access Journals (Sweden)
Douglas Domingues Bueno
2008-01-01
Full Text Available This paper deals with the study of algorithms for robust active vibration control in flexible structures considering uncertainties in system parameters. It became an area of enormous interest, mainly due to the countless demands of optimal performance in mechanical systems such as aircraft, aerospace, and automotive structures. An important and difficult problem for designing active vibration control is to get a representative dynamic model. Generally, this model can be obtained using the finite element method (FEM) or an identification method using experimental data. Actuators and sensors may affect the dynamic properties of the structure; for instance, the electromechanical coupling of piezoelectric material must be considered in the FEM formulation for flexible and lightly damped structures. The nonlinearities and uncertainties involved in these structures make it a difficult task, mainly for complex structures such as spatial truss structures. On the other hand, by using an identification method, it is possible to obtain the dynamic model represented through a state space realization considering this coupling. This paper proposes an experimental methodology for vibration control in a 3D truss structure using PZT wafer stacks and a robust control algorithm solved by linear matrix inequalities.
Golsteijn, Laura; van Zelm, Rosalie; Hendriks, A Jan; Huijbregts, Mark A J
2013-09-01
Since chemicals' ecotoxic effects on most soil species depend on the dissolved concentration in pore water, the equilibrium partitioning (EP) method is generally used to estimate hazardous concentrations (HC50) in the soil from aquatic toxicity tests. The present study analyzes the statistical uncertainty in terrestrial HC50s derived by the EP-method. For 47 organic chemicals, we compared freshwater HC50s derived from standard aquatic ecotoxicity tests with porewater HC50s derived from terrestrial ecotoxicity tests. Statistical uncertainty in the HC50s due to limited species sample size and in organic carbon-water partitioning coefficients due to predictive error was treated with probability distributions propagated by Monte Carlo simulations. Particularly for specifically acting chemicals, it is very important to base the HC50 on a representative sample of species, composed of both target and non-target species. For most chemical groups, porewater HC50 values were approximately a factor of 3 higher than freshwater HC50 values. The ratio of the porewater HC50/freshwater HC50 was typically 3.0 for narcotic chemicals (2.8 for nonpolar and 3.4 for polar narcotics), 0.8 for reactive chemicals, 2.9 for neurotoxic chemicals (4.3 for AChE agents and 0.1 for the cyclodiene type), and 2.5 for herbicides-fungicides. However, the statistical uncertainty associated with this ratio was large (typically 2.3 orders of magnitude). For 81% of the organic chemicals studied, there was no statistical difference between the hazardous concentration of aquatic and terrestrial species. We conclude that possible systematic deviations between the HC50s of aquatic and terrestrial species appear to be less prominent than the overall statistical uncertainty. Copyright © 2013 Elsevier Ltd. All rights reserved.
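The Monte Carlo treatment described above might be sketched as follows. The toxicity values, the partitioning-coefficient distribution, and the way the two uncertain quantities are combined are all illustrative assumptions, not the study's actual EP calculation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical aquatic toxicity data: log10 EC50 values for n species
log_ec50 = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.3])
n = log_ec50.size

# HC50 is the median of the species sensitivity distribution; its statistical
# uncertainty from the limited species sample is captured by a t-distribution
# on the mean of log10 EC50
mean, se = log_ec50.mean(), log_ec50.std(ddof=1) / np.sqrt(n)
log_hc50_draws = mean + se * rng.standard_t(df=n - 1, size=100_000)

# Predictive error in log10 Koc (assumed distribution), drawn jointly
log_koc_draws = rng.normal(2.5, 0.3, size=100_000)

# Illustrative combination of the two uncertain inputs into a porewater HC50;
# the 0.1 sensitivity factor is invented for the sketch
log_porewater = log_hc50_draws + 0.1 * (log_koc_draws - 2.5)
lo, hi = np.percentile(log_porewater, [2.5, 97.5])  # 95% uncertainty interval
```

Propagating both sources through the same set of draws is what lets the resulting interval reflect sample-size and partitioning uncertainty simultaneously.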
Inferring the uncertainty of satellite precipitation estimates in data-sparse regions over land
Bytheway, Janice L.; Kummerow, Christian D.
2013-09-01
The global distribution of precipitation is essential to understanding earth's water and energy budgets. While developed countries often have reliable precipitation observation networks, our understanding of the distribution of precipitation in data-sparse regions relies on sporadic rain gauges and information gathered by spaceborne sensors. Several multisensor data sets attempt to represent the global distribution of precipitation on subdaily time scales by combining multiple satellite and ground-based observations. Due to limited validation sources and the highly variable nature of precipitation, it is difficult to assess the performance of multisensor precipitation products globally. Here, we introduce a methodology to infer the uncertainty of satellite precipitation measurements globally based on similarities between precipitation characteristics in data-sparse and data-rich regions. Five generalized global rainfall regimes are determined based on the probability distribution of 3-hourly accumulated rainfall in 0.25° grid boxes using the Tropical Rainfall Measurement Mission 3B42 product. Uncertainty characteristics for each regime are determined over the United States using the high-quality National Centers for Environmental Prediction Stage IV radar product. The results indicate that the frequency of occurrence of zero and little accumulated rainfall is the key difference between the regimes and that differences in error characteristics are most prevalent at accumulations below ~4 mm/h. At higher accumulations, uncertainty in 3-hourly accumulation converges to ~80%. Using the self-similarity in the five rainfall regimes along with the error characteristics observed for each regime, the uncertainty in 3-hourly precipitation estimates can be inferred in regions that lack quality ground validation sources.
Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.
2017-12-01
Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability
Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei
2016-04-01
In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data is correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best practice knowledge can be limiting factors in a correct rain gauge network management. In these cases, the accuracy of rain gauges can drastically drop and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors into the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through: (1) block kriging on a single rain gauge; (2) ordinary kriging on a network of different rain gauges; (3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all cases by increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher and lower quality rain gauges. For the kriging with
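In covariance form, inflating the semivariogram nugget for a gauge is equivalent to adding that gauge's error variance to the diagonal of the kriging matrix. A simple-kriging sketch under that assumption (coordinates, covariance parameters, and error variances are made up, and the paper's ordinary/block/external-drift variants are not reproduced):

```python
import numpy as np

def kriging_with_gauge_error(coords, values, err_var, target,
                             sill=1.0, corr_len=10.0):
    """Simple-kriging sketch where each gauge's measurement-error variance is
    added to the diagonal of the covariance matrix -- the covariance-form
    equivalent of inflating the semivariogram nugget per gauge."""
    coords = np.asarray(coords, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sill * np.exp(-d / corr_len)        # exponential covariance model
    C[np.diag_indices_from(C)] += err_var   # per-gauge error inflation
    d0 = np.linalg.norm(coords - np.asarray(target, float), axis=1)
    c0 = sill * np.exp(-d0 / corr_len)
    w = np.linalg.solve(C, c0)              # simple-kriging weights
    est = w @ (values - values.mean()) + values.mean()
    var = sill - w @ c0                     # estimation variance at the target
    return est, var, w

coords = [(0, 0), (5, 0), (0, 5)]
values = np.array([10.0, 12.0, 11.0])
err = np.array([0.01, 0.01, 1.0])  # third gauge is the lower-quality one
est, var, w = kriging_with_gauge_error(coords, values, err, target=(1, 1))
```

With the target equidistant from the second and third gauges, the noisier third gauge receives the smaller weight, which is exactly the behaviour the error integration is meant to produce.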
Alizadeh, Hosein; Mousavi, S. Jamshid
2013-03-01
This study addresses estimation of net irrigation requirement over a growing season under climate uncertainty. An ecohydrological model, building upon the stochastic differential equation of soil moisture dynamics, is employed as a basis to derive new analytical expressions for estimating seasonal net irrigation requirement probabilistically. Two distinct irrigation technologies are considered. For micro irrigation technology, the probability density function of seasonal net irrigation depth (SNID) is derived by assessing the transient behavior of a stochastic process which is the time integral of a dichotomous Markov process. The probability mass function of SNID, which is a discrete random variable for traditional irrigation technology, is also presented using a marked renewal process with quasi-exponentially-distributed time intervals. Comparing the results obtained from the presented models with those resulting from a Monte Carlo approach verified the significance of the probabilistic expressions derived and the assumptions made.
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
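The paper's approach of fitting candidate probability density functions by maximum likelihood and selecting among them quantitatively can be illustrated with a two-candidate AIC comparison. The depth data and candidate set here are hypothetical (the study itself provides R code for the full method):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical depth-use observations (m), right-skewed like habitat-use data
depths = rng.lognormal(mean=0.0, sigma=0.6, size=200)

def aic_normal(x):
    """AIC for a normal fit; mean and (MLE) standard deviation, k = 2."""
    mu, sd = x.mean(), x.std()
    loglik = np.sum(-0.5 * np.log(2 * np.pi * sd**2)
                    - (x - mu) ** 2 / (2 * sd**2))
    return 2 * 2 - 2 * loglik

def aic_lognormal(x):
    """AIC for a lognormal fit; the log-likelihood includes the change-of-
    variable term -log(x)."""
    lx = np.log(x)
    mu, sd = lx.mean(), lx.std()
    loglik = np.sum(-0.5 * np.log(2 * np.pi * sd**2)
                    - (lx - mu) ** 2 / (2 * sd**2) - lx)
    return 2 * 2 - 2 * loglik

scores = {"normal": aic_normal(depths), "lognormal": aic_lognormal(depths)}
best = min(scores, key=scores.get)  # lower AIC wins
```

The fitted parameters then define the HSC curve directly, and their standard errors give a concise statement of estimation uncertainty, which is the third benefit listed in the abstract.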
Smith, T.; Marshall, L.
2007-12-01
In many mountainous regions, the single most important parameter in forecasting the controls on regional water resources is snowpack (Williams et al., 1999). In an effort to bridge the gap between theoretical understanding and functional modeling of snow-driven watersheds, a flexible hydrologic modeling framework is being developed. The aim is to create a suite of models that move from parsimonious structures, concentrated on aggregated watershed response, to those focused on representing finer scale processes and distributed response. This framework will operate as a tool to investigate the link between hydrologic model predictive performance, uncertainty, model complexity, and observable hydrologic processes. Bayesian methods, and particularly Markov chain Monte Carlo (MCMC) techniques, are extremely useful in uncertainty assessment and parameter estimation of hydrologic models. However, these methods have some difficulties in implementation. In a traditional Bayesian setting, it can be difficult to reconcile multiple data types, particularly those offering different spatial and temporal coverage, depending on the model type. These difficulties are also exacerbated by the sensitivity of MCMC algorithms to model initialization and complex parameter interdependencies. As a way of circumventing some of the computational complications, adaptive MCMC algorithms have been developed to take advantage of the information gained from each successive iteration. Two adaptive algorithms are compared in this study: the Adaptive Metropolis (AM) algorithm, developed by Haario et al. (2001), and the Delayed Rejection Adaptive Metropolis (DRAM) algorithm, developed by Haario et al. (2006). While neither algorithm is truly Markovian, it has been proven that each satisfies the desired ergodicity and stationarity properties of Markov chains. Both algorithms were implemented as the uncertainty and parameter estimation framework for a conceptual rainfall-runoff model based on the
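A compact sketch of the Adaptive Metropolis idea of Haario et al. (2001), in which the Gaussian proposal covariance is re-estimated from the chain's own history once adaptation begins. The tuning constants and the toy target are illustrative, not the study's rainfall-runoff setup:

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=5000, adapt_start=500,
                        sd=None, eps=1e-6):
    """Adaptive Metropolis sketch: after `adapt_start` iterations the proposal
    covariance is the (regularized) empirical covariance of the chain so far,
    scaled by sd = 2.4^2 / d as in Haario et al. (2001)."""
    x0 = np.atleast_1d(np.asarray(x0, float))
    d = x0.size
    sd = sd if sd is not None else 2.4**2 / d
    rng = np.random.default_rng(1)
    chain = np.empty((n_iter, d))
    chain[0] = x0
    lp = log_post(x0)
    cov = np.eye(d)  # fixed proposal covariance during the initial phase
    for i in range(1, n_iter):
        if i > adapt_start:
            # Empirical covariance of the chain history, regularized by eps*I
            cov = sd * (np.cov(chain[:i].T).reshape(d, d) + eps * np.eye(d))
        prop = rng.multivariate_normal(chain[i - 1], cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept step
            chain[i], lp = prop, lp_prop
        else:
            chain[i] = chain[i - 1]
    return chain

# Toy target: standard normal posterior, deliberately started off-center
chain = adaptive_metropolis(lambda x: -0.5 * float(x @ x), x0=[3.0])
```

Because the proposal keeps learning the posterior's scale, the sampler self-tunes, which is the property that makes AM (and its DRAM extension) attractive when parameter interdependencies are complex.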
Duchêne, Sebastian; Lanfear, Robert
2015-09-01
Ancestral state reconstruction (ASR) is a popular method for exploring the evolutionary history of traits that leave little or no trace in the fossil record. For example, it has been used to test hypotheses about the number of evolutionary origins of key life-history traits such as oviparity, or key morphological structures such as wings. Many studies that use ASR have suggested that the number of evolutionary origins of such traits is higher than was previously thought. The scope of such inferences is increasing rapidly, facilitated by the construction of very large phylogenies and life-history databases. In this paper, we use simulations to show that the number of evolutionary origins of a trait tends to be overestimated when the phylogeny is not perfect. In some cases, the estimated number of transitions can be several fold higher than the true value. Furthermore, we show that the bias is not always corrected by standard approaches to account for phylogenetic uncertainty, such as repeating the analysis on a large collection of possible trees. These findings have important implications for studies that seek to estimate the number of origins of a trait, particularly those that use large phylogenies that are associated with considerable uncertainty. We discuss the implications of this bias, and methods to ameliorate it. © 2015 Wiley Periodicals, Inc.
On the representation and estimation of spatial uncertainty. [for mobile robot
Smith, Randall C.; Cheeseman, Peter
1987-01-01
This paper describes a general method for estimating the nominal relationship and expected error (covariance) between coordinate frames representing the relative locations of objects. The frames may be known only indirectly through a series of spatial relationships, each with its associated error, arising from diverse causes, including positioning errors, measurement errors, or tolerances in part dimensions. This estimation method can be used to answer such questions as whether a camera attached to a robot is likely to have a particular reference object in its field of view. The calculated estimates agree well with those from an independent Monte Carlo simulation. The method makes it possible to decide in advance whether an uncertain relationship is known accurately enough for some task and, if not, how much of an improvement in locational knowledge a proposed sensor will provide. The method presented can be generalized to six degrees of freedom and provides a practical means of estimating the relationships (position and orientation) among objects, as well as estimating the uncertainty associated with the relationships.
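For the planar, three-degree-of-freedom case, the compounding of uncertain relationships can be written directly: the mean pose composes geometrically, and the covariance propagates to first order through the Jacobians of the compounding operation, in the spirit of Smith and Cheeseman. The poses and covariances below are illustrative:

```python
import numpy as np

def compound(pose1, cov1, pose2, cov2):
    """Compound two uncertain 2-D poses (x, y, theta), propagating covariance
    to first order via the Jacobians of the compounding operation."""
    x1, y1, t1 = pose1
    x2, y2, t2 = pose2
    c, s = np.cos(t1), np.sin(t1)
    pose = np.array([x1 + c * x2 - s * y2,
                     y1 + s * x2 + c * y2,
                     t1 + t2])
    # Jacobian with respect to the first pose
    J1 = np.array([[1, 0, -s * x2 - c * y2],
                   [0, 1,  c * x2 - s * y2],
                   [0, 0, 1]])
    # Jacobian with respect to the second pose
    J2 = np.array([[c, -s, 0],
                   [s,  c, 0],
                   [0,  0, 1]])
    cov = J1 @ cov1 @ J1.T + J2 @ cov2 @ J2.T
    return pose, cov

p, C = compound([1.0, 0.0, np.pi / 2], np.diag([0.01, 0.01, 0.001]),
                [2.0, 0.0, 0.0],       np.diag([0.01, 0.01, 0.001]))
# p is approximately [1, 2, pi/2]; the heading uncertainty of the first pose
# inflates the x-variance of the compounded pose
```

Chaining this operation along a series of relationships, and comparing the resulting covariance against a task tolerance, is exactly the "is the object likely in the camera's field of view" style of question the abstract describes.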
Uncertainty Analysis of Radar and Gauge Rainfall Estimates in the Russian River Basin
Cifelli, R.; Chen, H.; Willie, D.; Reynolds, D.; Campbell, C.; Sukovich, E.
2013-12-01
Radar Quantitative Precipitation Estimation (QPE) has been a very important application of weather radar since it was introduced and made widely available after World War II. Although great progress has been made over the last two decades, it is still a challenging process especially in regions of complex terrain such as the western U.S. It is also extremely difficult to make direct use of radar precipitation data in quantitative hydrologic forecasting models. To improve the understanding of rainfall estimation and distributions in the NOAA Hydrometeorology Testbed in northern California (HMT-West), extensive evaluation of radar and gauge QPE products has been performed using a set of independent rain gauge data. This study focuses on the rainfall evaluation in the Russian River Basin. The statistical properties of the different gridded QPE products will be compared quantitatively. The main emphasis of this study will be on the analysis of uncertainties of the radar and gauge rainfall products that are subject to various sources of error. The spatial variation analysis of the radar estimates is performed by measuring the statistical distribution of the radar base data such as reflectivity and by the comparison with a rain gauge cluster. The application of mean field bias values to the radar rainfall data will also be described. The uncertainty analysis of the gauge rainfall will be focused on the comparison of traditional kriging and conditional bias penalized kriging (Seo 2012) methods. This comparison is performed with the retrospective Multisensor Precipitation Estimator (MPE) system installed at the NOAA Earth System Research Laboratory. The independent gauge set will again be used as the verification tool for the newly generated rainfall products.
Experimental investigation of synthetic aperture flow angle estimation
DEFF Research Database (Denmark)
Oddershede, Niels; Jensen, Jørgen Arendt
2005-01-01
-correlation as a function of velocity and angle. This paper presents an experimental investigation of this velocity angle estimation method based on a set of synthetic aperture flow data measured using our RASMUS experimental ultrasound system. The measurements are performed for flow angles of 60, 75, and 90 deg...... for the experiments, and the emitted pulse is a 20 micro sec. chirp, linearly sweeping frequencies from approximately 3.5 to 10.5 MHz. The flow angle could be estimated with an average bias of up to 5.0 deg., and an average standard deviation between 0.2 deg. and 5.2 deg. Using the angle estimates, the velocity
Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances
Energy Technology Data Exchange (ETDEWEB)
Sigeti, David Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, D. Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-18
Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multi-variate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that determine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while ensuring that the operations that impose the different constraints do not conflict with each other.
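The two sampling ingredients described, multivariate normal draws from a mean vector and covariance matrix plus a minimum-change linear-algebra adjustment that imposes a constraint, can be sketched as follows. All numbers are illustrative, and the single sum constraint stands in for the report's more detailed constraint classes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical mean cross sections (barns) for a few energy groups and a
# covariance matrix built from illustrative uncertainties and correlations
mean = np.array([1.5, 2.0, 0.8, 1.2])
corr = np.array([[1.0, 0.5, 0.2, 0.1],
                 [0.5, 1.0, 0.3, 0.2],
                 [0.2, 0.3, 1.0, 0.4],
                 [0.1, 0.2, 0.4, 1.0]])
std = 0.05 * mean  # assumed 5% relative standard uncertainties
cov = corr * np.outer(std, std)

# Draw variations from the multivariate normal via Cholesky factorization
L = np.linalg.cholesky(cov)
samples = mean + rng.standard_normal((1000, mean.size)) @ L.T

# Impose a linear equality constraint (e.g. partial cross sections summing to
# a fixed total) by the minimum-change projection
# x' = x - A^T (A A^T)^{-1} (A x - b)
A = np.ones((1, mean.size))
b = np.array([mean.sum()])
adjust = A.T @ np.linalg.solve(A @ A.T, (samples @ A.T - b).T)
constrained = samples - adjust.T
```

The projection changes each sample as little as possible (in the Euclidean sense) while making the constraint hold exactly, mirroring the report's minimize-the-modification approach.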
Näykki, Teemu; Virtanen, Atte; Kaukonen, Lari; Magnusson, Bertil; Väisänen, Tero; Leito, Ivo
2015-10-01
Field sensor measurements are becoming more common for environmental monitoring. Solutions for enhancing reliability, i.e. knowledge of the measurement uncertainty of field measurements, are urgently needed. Real-time estimations of measurement uncertainty for field measurement have not previously been published, and in this paper, a novel approach to the automated turbidity measuring system with an application for "real-time" uncertainty estimation is outlined based on the Nordtest handbook's measurement uncertainty estimation principles. The term real-time is written in quotation marks, since the calculation of the uncertainty is carried out using a set of past measurement results. There are two main requirements for the estimation of real-time measurement uncertainty of online field measurement described in this paper: (1) setting up an automated measuring system that can be (preferably remotely) controlled which measures the samples (water to be investigated as well as synthetic control samples) the way the user has programmed it and stores the results in a database, (2) setting up automated data processing (software) where the measurement uncertainty is calculated from the data produced by the automated measuring system. When control samples with a known value or concentration are measured regularly, any instrumental drift can be detected. An additional benefit is that small drift can be taken into account (in real-time) as a bias value in the measurement uncertainty calculation, and if the drift is high, the measurement results of the control samples can be used for real-time recalibration of the measuring device. The procedure described in this paper is not restricted to turbidity measurements, but it will enable measurement uncertainty estimation for any kind of automated measuring system that performs sequential measurements of routine samples and control samples/reference materials in a similar way as described in this paper.
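The Nordtest-style calculation from regularly measured control samples can be sketched as follows; the control results, reference value, and reference uncertainty are hypothetical numbers, not data from the paper:

```python
import numpy as np

# Hypothetical turbidity results (NTU) for a synthetic control sample with
# a known reference value, accumulated by the automated measuring system.
control = np.array([5.1, 4.9, 5.2, 5.0, 4.8, 5.3, 5.1, 4.9])
reference = 5.0

# Within-laboratory reproducibility from the spread of control results.
u_rw = control.std(ddof=1)

# Bias component: mean deviation from the reference value, combined with
# the uncertainty of the reference itself (assumed here as 0.05 NTU).
bias = control.mean() - reference
u_ref = 0.05
u_bias = np.sqrt(bias**2 + u_ref**2)

# Combined standard uncertainty and expanded uncertainty (k = 2, ~95 %).
u_c = np.sqrt(u_rw**2 + u_bias**2)
U = 2 * u_c
print(round(U, 3))
```

In the automated system described above, this calculation would be re-run over a rolling window of past control results, so the reported uncertainty tracks instrumental drift in "real time".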
Magnusson, Bertil; Ossowicki, Haakan; Rienitz, Olaf; Theodorsson, Elvar
2012-05-01
Healthcare laboratories are increasingly joining into larger laboratory organizations encompassing several physical laboratories. This caters for important new opportunities for re-defining the concept of a 'laboratory' to encompass all laboratories and measurement methods measuring the same measurand for a population of patients. In order to make measurement results comparable, bias should be minimized or eliminated and measurement uncertainty properly evaluated for all methods used for a particular patient population. The measurement as well as diagnostic uncertainty can be evaluated from internal and external quality control results using GUM principles. In this paper the uncertainty evaluations are described in detail using only two main components, within-laboratory reproducibility and uncertainty of the bias component according to a Nordtest guideline. The evaluation is exemplified for the determination of creatinine in serum for a conglomerate of laboratories both expressed in absolute units (μmol/L) and relative (%). An expanded measurement uncertainty of 12 μmol/L associated with concentrations of creatinine below 120 μmol/L and of 10% associated with concentrations above 120 μmol/L was estimated. The diagnostic uncertainty encompasses both measurement uncertainty and biological variation, and can be estimated for a single value and for a difference. This diagnostic uncertainty for the difference for two samples from the same patient was determined to be 14 μmol/L associated with concentrations of creatinine below 100 μmol/L and 14% associated with concentrations above 100 μmol/L.
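The two-component evaluation and the step from measurement to diagnostic uncertainty can be sketched as follows; every numeric input is illustrative, not the paper's creatinine data:

```python
import math

# Hypothetical components following the two-component Nordtest evaluation:
# within-laboratory reproducibility and uncertainty of the bias component.
u_rw = 4.5      # µmol/L (assumed)
u_bias = 3.4    # µmol/L (assumed)
u_meas = math.sqrt(u_rw**2 + u_bias**2)
U_meas = 2 * u_meas                      # expanded uncertainty, k = 2

# Diagnostic uncertainty adds within-subject biological variation.
cv_bio = 0.053                    # ~5.3 % biological CV (assumed)
conc = 90.0                       # µmol/L
u_bio = cv_bio * conc
u_diag = math.sqrt(u_meas**2 + u_bio**2)

# For the difference between two samples from the same patient, the
# uncertainties of both results combine in quadrature.
u_diff = math.sqrt(2) * u_diag
U_diff = 2 * u_diff
print(round(U_meas, 1), round(U_diff, 1))
```

The quadrature combination assumes the two results are independent; shared calibration bias would partly cancel in the difference.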
Directory of Open Access Journals (Sweden)
Razi Ahmed
2013-06-01
Full Text Available Estimates of above ground biomass density in forests are crucial for refining global climate models and understanding climate change. Although data from field studies can be aggregated to estimate carbon stocks on global scales, the sparsity of such field data, temporal heterogeneity and methodological variations introduce large errors. Remote sensing measurements from spaceborne sensors are a realistic alternative for global carbon accounting; however, the uncertainty of such measurements is not well known and remains an active area of research. This article describes an effort to collect field data at the Harvard and Howland Forest sites, set in the temperate forests of the Northeastern United States in an attempt to establish ground truth forest biomass for calibration of remote sensing measurements. We present an assessment of the quality of ground truth biomass estimates derived from three different sets of diameter-based allometric equations over the Harvard and Howland Forests to establish the contribution of errors in ground truth data to the error in biomass estimates from remote sensing measurements.
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis.
Markiewicz, P J; Thielemans, K; Schott, J M; Atkinson, D; Arridge, S R; Hutton, B F; Ourselin, S
2016-07-07
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of (18)F-florbetapir using the Siemens Biograph mMR scanner.
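The bootstrap principle behind the variance estimates can be sketched in a few lines; the event stream and the "SUVR-like" ratio below are toy stand-ins for real list-mode data and image reconstruction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy list-mode stream: each event carries a detector-bin index.
n_bins = 16
events = rng.integers(0, n_bins, size=20000)

def statistic(ev):
    """Example image-domain statistic: a crude SUVR-like ratio of counts
    in a 'target' region to counts in a 'reference' region."""
    h = np.bincount(ev, minlength=n_bins)
    return h[:4].sum() / h[4:8].sum()

# Bootstrap: resample events with replacement, recompute the statistic;
# the spread of the realisations estimates its uncertainty.
boot = [statistic(rng.choice(events, size=events.size, replace=True))
        for _ in range(200)]
print(np.std(boot))
```

The GPU implementation in the note makes exactly this loop cheap enough to wrap around the full reconstruction chain rather than a toy statistic.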
Energy Technology Data Exchange (ETDEWEB)
Stewart, Robert N [ORNL; White, Devin A [ORNL; Urban, Marie L [ORNL; Morton, April M [ORNL; Webster, Clayton G [ORNL; Stoyanov, Miroslav K [ORNL; Bright, Eddie A [ORNL; Bhaduri, Budhendra L [ORNL
2013-01-01
The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
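A much simpler stand-in for the elicitation encoding is moment-matching a Beta prior to an elicited mean and spread; the function and its inputs below are illustrative and do not reproduce the paper's bivariate-Gaussian algorithm:

```python
def beta_from_mean_sd(mean, sd):
    """Moment-match a Beta(a, b) prior to an elicited mean and standard
    deviation -- a far simpler stand-in for the paper's encoding method."""
    if not 0 < mean < 1:
        raise ValueError("mean must lie in (0, 1)")
    var = sd**2
    if var >= mean * (1 - mean):
        raise ValueError("sd too large for a valid Beta distribution")
    nu = mean * (1 - mean) / var - 1   # "effective sample size"
    return mean * nu, (1 - mean) * nu

# Elicited (hypothetically): typical density 0.3 of maximum, give or take 0.1.
a, b = beta_from_mean_sd(0.3, 0.1)
print(a, b)
```

The paper's contribution is precisely to avoid asking contributors for a mean and standard deviation directly, replacing them with non-statistical questions whose answers are mapped into the feasible (a, b) space.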
Mackenzie, Alistair; Eales, Timothy D; Dunn, Hannah L; Yip Braidley, Mary; Dance, David R; Young, Kenneth C
2017-07-01
To demonstrate a method of simulating mammography images of the CDMAM phantom and to investigate the coefficient of variation (CoV) in the threshold gold thickness (t_T) measurements associated with use of the phantom. The noise and sharpness of Hologic Dimensions and GE Essential mammography systems were characterized to provide data for the simulation. The simulation method was validated by comparing the t_T results of real and simulated images of the CDMAM phantom for three different doses and the two systems. The detection matrices produced from each of 64 images using CDCOM software were randomly resampled to create 512 sets of 8, 16 and 32 images to estimate the CoV of t_T. Sets of simulated images for a range of doses were used to estimate the CoVs for a range of diameters and threshold thicknesses. No significant differences were found for t_T or the CoV between real and simulated CDMAM images. It was shown that resampling from 256 images was required for estimating the CoV. The CoV was around 4% using 16 images for most of the phantom but is over double that for details near the edge of the phantom. We have demonstrated a method to simulate images of the CDMAM phantom for different systems at a range of doses. We provide data for calculating uncertainties in t_T. Any future review of the European guidelines should take into consideration the calculated uncertainties for the 0.1 mm detail. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Almosallam, Ibrahim A.; Jarvis, Matt J.; Roberts, Stephen J.
2016-10-01
The next generation of cosmology experiments will be required to use photometric redshifts rather than spectroscopic redshifts. Obtaining accurate and well-characterized photometric redshift distributions is therefore critical for Euclid, the Large Synoptic Survey Telescope and the Square Kilometre Array. However, determining accurate variance predictions alongside single point estimates is crucial, as they can be used to optimize the sample of galaxies for the specific experiment (e.g. weak lensing, baryon acoustic oscillations, supernovae), trading off between completeness and reliability in the galaxy sample. The various sources of uncertainty in measurements of the photometry and redshifts put a lower bound on the accuracy that any model can hope to achieve. The intrinsic uncertainty associated with estimates is often non-uniform and input-dependent, commonly known in statistics as heteroscedastic noise. However, existing approaches are susceptible to outliers and do not take into account variance induced by non-uniform data density and in most cases require manual tuning of many parameters. In this paper, we present a Bayesian machine learning approach that jointly optimizes the model with respect to both the predictive mean and variance, which we refer to as Gaussian processes for photometric redshifts (GPZ). The predictive variance of the model takes into account both the variance due to data density and photometric noise. Using the Sloan Digital Sky Survey (SDSS) DR12 data, we show that our approach substantially outperforms other machine learning methods for photo-z estimation and their associated variance, such as TPZ and ANNZ2. We provide MATLAB and Python implementations that are available to download at https://github.com/OxfordML/GPz.
Assessing uncertainties in GHG emission estimates from Canada's oil sands developments
Kim, M. G.; Lin, J. C.; Huang, L.; Edwards, T. W.; Worthy, D.; Wang, D. K.; Sweeney, C.; White, J. W.; Andrews, A. E.; Bruhwiler, L.; Oda, T.; Deng, F.
2013-12-01
Reducing uncertainties in projections of surface emissions of CO2 and CH4 relies on continuously improving our scientific understanding of the exchange processes between the atmosphere and land at regional scales. In order to enhance our understanding in emission processes and atmospheric transports, an integrated framework that addresses individual natural and anthropogenic factors in a complementary way proves to be invaluable. This study presents an example of top-down inverse modeling that utilizes high precision measurement data collected at a Canadian greenhouse gas monitoring site. The measurements include multiple tracers encompassing standard greenhouse gas species, stable isotopes of CO2, and combustion-related species. The potential for the proposed analysis framework is demonstrated using Stochastic Time-Inverted Lagrangian Transport (STILT) model runs to yield a unique regional-scale constraint that can be used to relate the observed changes of tracer concentrations to the processes in their upwind source regions. The uncertainties in emission estimates are assessed using different transport fields and background concentrations coupled with the STILT model. Also, methods to further reduce uncertainties in the retrieved emissions by incorporating additional constraints including tracer-to-tracer correlations and satellite measurements are briefly discussed. The inversion approach both reproduces source areas in a spatially explicit way through sophisticated Lagrangian transport modeling and infers emission processes that leave imprints on atmospheric tracers. The results indicate that the changes in greenhouse gas concentration are strongly influenced by regional sources, including significant contributions from fossil fuel emissions, and that the integrated approach can be used for regulatory regimes to verify reported emissions of the greenhouse gas from oil sands developments.
Hauswald, Lorenz
For every physics measurement it is essential to precisely know its uncertainty. In the search for neutral MSSM Higgs bosons, the decay channel to two tau leptons plays a major role. For this channel the influence of the uncertainty of Monte Carlo generator parameters on the acceptance of the analyses is studied in this thesis. The corresponding uncertainties are found to be in the range of $0$ to $30\,\%$. Physics with tau leptons is exceptionally challenging, because they can decay into hadrons, which, at the Large Hadron Collider, are hardly distinguishable from the overwhelming QCD background. Sophisticated multivariate algorithms aim to identify them correctly. The impact of experimental uncertainties on the outcome of the identification algorithms is the subject of the second part of the thesis. The uncertainties obtained are below $7\,\%$.
Directory of Open Access Journals (Sweden)
Chao Yang
2017-06-01
Full Text Available In this paper, we examined travelers’ dynamic mode choice behavior under travel time variability. We found travelers’ inconsistent risk attitudes through a binary mode choice experiment. Although the results deviated from the traditional utility maximization theory and could not be explained by the payoff variability effect, they could be well captured in a cumulative prospect theory (CPT framework. After considering the imperfect memory effect, we found that the prediction ability of the cumulative prospect theory learning (CPTL model could be significantly improved. The experimental results were also compared with the CPTL model and the reinforcement learning (REL model. This study empirically showed the potential of alternative theories to better capture travelers’ day-to-day mode choice behavior under uncertainty. A new definition of willingness to pay (WTP in a CPT framework was provided to explicitly consider travelers’ perceived value increases in travel time.
Directory of Open Access Journals (Sweden)
M. P. Mittermaier
2008-05-01
Full Text Available A simple measure of the uncertainty associated with using radar-derived rainfall estimates as "truth" has been introduced to the Numerical Weather Prediction (NWP verification process to assess the effect on forecast skill and errors. Deterministic precipitation forecasts from the mesoscale version of the UK Met Office Unified Model for a two-day high-impact event and for a month were verified at the daily and six-hourly time scale using a spatially-based intensity-scale method and various traditional skill scores such as the Equitable Threat Score (ETS and log-odds ratio. Radar-rainfall accumulations from the UK Nimrod radar-composite were used.
The results show that the inclusion of uncertainty has some effect, shifting the forecast errors and skill. The study also allowed for the comparison of results from the intensity-scale method and traditional skill scores. It showed that the two methods complement each other, one detailing the scale and rainfall accumulation thresholds where the errors occur, the other showing how skillful the forecast is. It was also found that for the six-hourly forecasts the error distributions remain similar with forecast lead time but skill decreases. This highlights the difference between forecast error and forecast skill, and that they are not necessarily the same.
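The Equitable Threat Score used in the verification above is a standard function of a 2x2 forecast contingency table; the counts below are made up for illustration:

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS (Gilbert skill score): threat score corrected for the number of
    hits expected by random chance, given the observed and forecast rates."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Hypothetical daily rain/no-rain contingency counts at one threshold.
ets = equitable_threat_score(50, 20, 30, 900)
print(round(ets, 3))
```

ETS ranges from -1/3 to 1, with 0 meaning no skill beyond chance, which is why it pairs naturally with scale-aware measures that say *where* the errors occur.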
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
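The Box-Cox route to handling heteroscedasticity can be sketched with synthetic flows whose observation error grows with magnitude; the data and the choice λ = 0.2 are arbitrary illustrations, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(2)

def boxcox(y, lam):
    """Box-Cox transform; lam = 0 reduces to the log transform."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

# Synthetic flows with ~10 % relative observation error, i.e. strongly
# heteroscedastic residuals in the untransformed space.
flow = rng.uniform(10, 1000, size=5000)
obs = flow * (1 + 0.1 * rng.standard_normal(flow.size))

resid_raw = obs - flow
resid_bc = boxcox(obs, 0.2) - boxcox(flow, 0.2)

# Residual spread on high vs low flows: a ratio near 1 means homoscedastic.
lo_mask, hi_mask = flow < 100, flow > 900
print(resid_raw[hi_mask].std() / resid_raw[lo_mask].std())
print(resid_bc[hi_mask].std() / resid_bc[lo_mask].std())
```

The residual spread ratio drops sharply after transformation, which is what lets a single error distribution (such as the SEP) describe both low and high flows.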
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik
2008-01-01
within the context of Monte Carlo (MC) analysis coupled with Bayesian estimation and propagation of uncertainty. Because of its flexibility, ease of implementation and its suitability for parallel implementation on distributed computer systems, the GLUE method has been used in a wide variety...... that require significant computational time to run and produce the desired output. In this paper we improve the computational efficiency of GLUE by sampling the prior parameter space using an adaptive Markov Chain Monte Carlo scheme (the Shuffled Complex Evolution Metropolis (SCEM-UA) algorithm). Moreover, we......In the last few decades hydrologists have made tremendous progress in using dynamic simulation models for the analysis and understanding of hydrologic systems. However, predictions with these models are often deterministic and as such they focus on the most probable forecast, without an explicit...
DEFF Research Database (Denmark)
Wang, Weizhi; Wu, Minghao; Palm, Johannes
2018-01-01
The wave loads and the resulting motions of floating wave energy converters are traditionally computed using linear radiation–diffraction methods. Yet for certain cases such as survival conditions, phase control and wave energy converters operating in the resonance region, more complete...... mathematical models such as computational fluid dynamics are preferred and over the last 5 years, computational fluid dynamics has become more frequently used in the wave energy field. However, rigorous estimation of numerical errors, convergence rates and uncertainties associated with computational fluid...... dynamics simulations have largely been overlooked in the wave energy sector. In this article, we apply formal verification and validation techniques to computational fluid dynamics simulations of a passively controlled point absorber. The phase control causes the motion response to be highly nonlinear even...
Directory of Open Access Journals (Sweden)
Igor Stubelj
2014-03-01
Full Text Available The paper deals with the estimation of weighted average cost of capital (WACC) for regulated industries in developing financial markets from the perspective of the current financial-economic crisis. In the current financial market situation some evident changes have occurred: risk-free rates in solid and developed financial markets (e.g. USA, Germany) have fallen, but due to increased market volatility, the risk premiums have increased. The latter is especially evident in transition economies where the amplitude of market volatility is extremely high. In such circumstances, there is a question of how to calculate WACC properly. WACC is an important measure in financial management decisions and in our case, business regulation. We argue in the paper that the most accurate method for calculating WACC is the estimation of the long-term WACC, which takes into consideration a long-term stable yield of capital and not the current market conditions. Following this, we propose some solutions that could be used for calculating WACC for regulated industries on the developing financial markets in times of market uncertainty. As an example, we present an estimation of the capital cost for a selected Slovenian company, which operates in the regulated industry of electric distribution.
Uncertainty analysis of an optical method for pressure estimation in fluid flows
Gomit, Guillaume; Acher, Gwenael; Chatellier, Ludovic; David, Laurent
2018-02-01
The analysis of the error propagation from the velocity field to the pressure field using the pressure estimation method proposed by Jeon et al (2015, 11th Int. Symp. on Particle Image Velocimetry, PIV15) is achieved. The accuracy of the method is assessed based on numerical data. The flow around a rigid profile (NACA0015) with a free tip is considered. From the numerical simulation data, tomographic-PIV (TPIV)-like data are generated. Two types of error are used to distort the data: a Gaussian noise and a pixel-locking effect are modelled. Propagation of both types of error during the pressure estimation process and the effect of the TPIV resolution are evaluated. Results highlight the importance of the resolution to accurately estimate the pressure in the presence of small structures but also to limit the propagation of error from the velocity to the pressure. The study of the sensitivity of the method for the two models of errors, Gaussian or pixel-locking, shows different trends. This also reveals the importance of the error model for the analysis of uncertainties in PIV-based pressure estimation.
Energy Technology Data Exchange (ETDEWEB)
Tanaka, Yohei; Momma, Akihiko; Kato, Ken; Negishi, Akira; Takano, Kiyonami; Nozaki, Ken; Kato, Tohru [Fuel Cell System Group, Energy Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), AIST Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan)
2009-03-15
Uncertainty of electrical efficiency measurement was investigated for a 10 kW-class SOFC system using town gas. Uncertainty of the heating value measured by the gas chromatography method on a mole basis was estimated as ±0.12% at the 95% level of confidence. Micro-gas chromatography with/without CH₄ quantification may be able to reduce uncertainty of measurement. Calibration and uncertainty estimation methods are proposed for flow-rate measurement of town gas with thermal mass-flow meters or controllers. By adequate calibrations for flowmeters, flow rate of town gas or natural gas at 35 standard litres per minute can be measured within relative uncertainty ±1.0% at the 95% level of confidence. Uncertainty of power measurement can be as low as ±0.14% when a precise wattmeter is used and calibrated properly. It is clarified that electrical efficiency for non-pressurized 10 kW-class SOFC systems can be measured within ±1.0% relative uncertainty at the 95% level of confidence with the developed techniques when the SOFC systems are operated relatively stably. (author)
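The quoted component uncertainties combine in quadrature in the efficiency. The sketch below assumes the three dominant contributions are uncorrelated and converts the 95 % expanded values to standard uncertainties with k = 2:

```python
import math

# Relative standard uncertainties (the abstract's 95 % expanded values
# divided by k = 2) for the quantities entering electrical efficiency,
# eta = P_electric / (flow_rate * heating_value).
u_heating = 0.0012 / 2   # ±0.12 % expanded
u_flow = 0.010 / 2       # ±1.0 % expanded
u_power = 0.0014 / 2     # ±0.14 % expanded

# Uncorrelated relative uncertainties of a product/quotient combine in
# quadrature; re-expand with k = 2 for the 95 % level of confidence.
u_eta = math.sqrt(u_heating**2 + u_flow**2 + u_power**2)
U_eta = 2 * u_eta
print(f"{U_eta:.2%}")
```

The result is dominated by the flow-rate term, consistent with the abstract's overall ±1.0% figure; any correlation between components would require covariance terms per the GUM.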
Nakayachi, Kazuya; B Johnson, Branden; Koketsu, Kazuki
2017-08-29
We test here the risk communication proposition that explicit expert acknowledgment of uncertainty in risk estimates can enhance trust and other reactions. We manipulated such a scientific uncertainty message, accompanied by probabilities (20%, 70%, implicit ["will occur"] 100%) and time periods (10 or 30 years) in major (≥magnitude 8) earthquake risk estimates to test potential effects on residents potentially affected by seismic activity on the San Andreas fault in the San Francisco Bay Area (n = 750). The uncertainty acknowledgment increased belief that these specific experts were more honest and open, and led to statistically (but not substantively) significant increases in trust in seismic experts generally only for the 20% probability (vs. certainty) and shorter versus longer time period. The acknowledgment did not change judged risk, preparedness intentions, or mitigation policy support. Probability effects independent of the explicit admission of expert uncertainty were also insignificant except for judged risk, which rose or fell slightly depending upon the measure of judged risk used. Overall, both qualitative expressions of uncertainty and quantitative probabilities had limited effects on public reaction. These results imply that both theoretical arguments for positive effects, and practitioners' potential concerns for negative effects, of uncertainty expression may have been overblown. There may be good reasons to still acknowledge experts' uncertainties, but those merit separate justification and their own empirical tests. © 2017 Society for Risk Analysis.
Directory of Open Access Journals (Sweden)
Gurkan eSin
2015-02-01
Full Text Available Capital investment, next to the product demand, sales and production costs, is one of the key metrics commonly used for project evaluation and feasibility assessment. Estimating the investment costs of a new product/process alternative during early stage design is a challenging task. This is especially important in biorefinery research, where available information and experiences with new technologies are limited. A systematic methodology for uncertainty analysis of cost data is proposed that employs (a) bootstrapping as a regression method when cost data is available and (b) the Monte Carlo technique as an error propagation method based on expert input when cost data is not available. Four well-known models for early stage cost estimation are reviewed and analyzed using the methodology. The significance of uncertainties of cost data for early stage process design is highlighted using the synthesis and design of a biorefinery as a case study. The impact of uncertainties in cost estimation on the identification of optimal processing paths is found to be profound. To tackle this challenge, a comprehensive techno-economic risk analysis framework is presented to enable robust decision making under uncertainties. One of the results using an order-of-magnitude estimate shows that the production of diethyl ether and 1,3-butadiene are the most promising with economic risks of 0.24 MM$/a and 4.6 MM$/a due to uncertainties in cost estimations, respectively.
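The bootstrapping option (a) can be sketched on a hypothetical capacity-cost data set following a power-law scaling rule; the data, exponent, and prefactor are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical equipment-cost data following a power-law scaling rule
# (exponent 0.6, in the spirit of the "six-tenths rule").
capacity = np.array([10., 15., 25., 40., 60., 100., 150., 250., 400., 600., 800., 1000.])
cost = 2.5 * capacity**0.6 * np.exp(0.1 * rng.standard_normal(capacity.size))

x, y = np.log(capacity), np.log(cost)

# Bootstrap the log-log regression: resample (x, y) pairs with replacement
# and refit the scaling exponent each time.
exponents = []
for _ in range(2000):
    idx = rng.integers(0, x.size, x.size)
    slope, _ = np.polyfit(x[idx], y[idx], 1)
    exponents.append(slope)

lo, hi = np.percentile(exponents, [2.5, 97.5])
print(f"scaling exponent 95% CI: [{lo:.2f}, {hi:.2f}]")
```

The same resampled fits can then be pushed through the cost model to turn parameter uncertainty into an economic-risk distribution for each processing path.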
Assouline, Dan; Mohajeri, Nahid; Scartezzini, Jean-Louis
2017-04-01
Solar energy is clean, widely available, and arguably the most promising renewable energy resource. Taking full advantage of solar power, however, requires a deep understanding of its patterns and dependencies in space and time. The recent advances in Machine Learning brought powerful algorithms to estimate the spatio-temporal variations of solar irradiance (the power per unit area received from the Sun, W/m2), using local weather and terrain information. Such algorithms include Deep Learning (e.g. Artificial Neural Networks), or kernel methods (e.g. Support Vector Machines). However, most of these methods have some disadvantages, as they: (i) are complex to tune, (ii) are mainly used as a black box and offering no interpretation on the variables contributions, (iii) often do not provide uncertainty predictions (Assouline et al., 2016). To provide a reasonable solar mapping with good accuracy, these gaps would ideally need to be filled. We present here simple steps using one ensemble learning algorithm, namely Random Forests (Breiman, 2001), to (i) estimate monthly solar potential with good accuracy, (ii) provide information on the contribution of each feature in the estimation, and (iii) offer prediction intervals for each point estimate. We have selected Switzerland as an example. Using a Digital Elevation Model (DEM) along with monthly solar irradiance time series and weather data, we build monthly solar maps for Global Horizontal Irradiance (GHI), Diffuse Horizontal Irradiance (DHI), and Extraterrestrial Irradiance (EI). The weather data include monthly values for temperature, precipitation, sunshine duration, and cloud cover. In order to explain the impact of each feature on the solar irradiance of each point estimate, we extend the contribution method (Kuz'min et al., 2011) to a regression setting. Contribution maps for all features can then be computed for each solar map. This provides precious information on the spatial variation of the features impact all
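A hand-rolled bagged ensemble (a crude stand-in for Random Forests, not the study's model) illustrates how the spread of member predictions yields a prediction interval alongside each point estimate; the irradiance-elevation data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical monthly irradiance (W/m2) vs. elevation (m) with noise.
elev = rng.uniform(200, 3000, size=300)
ghi = 120 + 0.02 * elev + 8 * rng.standard_normal(elev.size)

# Minimal bagging: fit each ensemble member on a bootstrap sample and
# collect its prediction at a new point.
x_new = 1500.0
preds = []
for _ in range(500):
    idx = rng.integers(0, elev.size, elev.size)
    slope, intercept = np.polyfit(elev[idx], ghi[idx], 1)
    # add a residual draw so the interval reflects noise, not only model spread
    resid = ghi[idx] - (slope * elev[idx] + intercept)
    preds.append(slope * x_new + intercept + rng.choice(resid))

mean_pred = np.mean(preds)
lo, hi = np.percentile(preds, [5, 95])
print(round(mean_pred, 1), round(lo, 1), round(hi, 1))
```

In a real Random Forest the members are decision trees and the per-tree prediction quantiles (as in quantile regression forests) play the role of the percentiles here.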
Energy Technology Data Exchange (ETDEWEB)
Garcia-Herranz, N.; Cabellos, O. [Madrid Polytechnic Univ., Dept. of Nuclear Engineering (Spain); Cabellos, O.; Sanz, J. [Madrid Polytechnic Univ., Instituto de Fusion Nuclear (Spain); Sanz, J. [Univ. Nacional Educacion a Distancia, Dept. of Power Engineering, Madrid (Spain)
2005-07-01
We present a new code system which combines the Monte Carlo neutron transport code MCNP-4C and the inventory code ACAB as a suitable tool for high burnup calculations. Our main goal is to show that the system, by means of ACAB's capabilities, enables us to assess the impact of neutron cross-section uncertainties on the inventory and other inventory-related responses in high burnup applications. The potential impact of nuclear data uncertainties on some response parameters may be large, but only very few codes exist which can treat this effect. In fact, some of the most widely reported code systems for dealing with high burnup problems, such as CASMO-4, MCODE and MONTEBURNS, lack this capability. As a first step, the potential of our system, setting aside the uncertainty capability, has been compared with that of those code systems using a well-referenced high burnup pin-cell benchmark exercise. It is shown that the inclusion of ACAB in the system allows us to obtain results at least as reliable as those obtained using other inventory codes, such as ORIGEN2. The uncertainty analysis methodology implemented in ACAB, including both the sensitivity-uncertainty method and uncertainty analysis by the Monte Carlo technique, is then applied to this benchmark problem. We estimate the errors due to activation cross-section uncertainties in the prediction of the isotopic content up to the high-burnup spent fuel regime. The most relevant uncertainties are highlighted, and some of the cross sections contributing most to those uncertainties are identified. For instance, the most critical reaction for Am{sup 242m} is Am{sup 241}(n,{gamma}-m). At 100 MWd/kg, the cross-section uncertainty of this reaction induces an error of 6.63% on the Am{sup 242m} concentration. The uncertainties in the inventory of fission products reach up to 30%.
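The Monte Carlo uncertainty technique mentioned above amounts to sampling cross sections from their uncertainty distribution and re-evaluating the inventory for each sample. A toy sketch of the idea, not the ACAB implementation, using a single one-step activation channel with assumed flux, irradiation time, and a 10% cross-section uncertainty:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy one-step activation: product inventory N(t) = N0 * (1 - exp(-sigma*phi*t))
N0 = 1.0          # initial parent atoms (arbitrary units)
phi = 1e14        # neutron flux, n/cm^2/s (assumed)
t = 3.15e7        # ~1 year of irradiation, s
sigma0 = 1e-24    # nominal capture cross section, cm^2 (1 barn, assumed)
rel_unc = 0.10    # 10% relative cross-section uncertainty (assumed)

# Monte Carlo: sample the cross section, propagate each sample to the inventory
sigmas = sigma0 * rng.lognormal(mean=0.0, sigma=rel_unc, size=10000)
inventory = N0 * (1.0 - np.exp(-sigmas * phi * t))

# Relative spread of the output is the propagated inventory uncertainty
rel_err = inventory.std() / inventory.mean()
```

In this weakly-activated regime the response is nearly linear in the cross section, so the output's relative error tracks the input's; for strongly nonlinear responses the sampled distribution is exactly what the Monte Carlo approach is needed for.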
Ziemann, Astrid; Starke, Manuela; Schütze, Claudia
2017-11-01
An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks, although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is thus a crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool, as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated combination of acoustic and optical remote sensing methods. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper gives an overview of the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m-2 s-1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for the CO2 concentration, accounting for environmental parameters, instrumental characteristics, and the retrieval procedure, was estimated at a total of approximately 30 % for a single
Directory of Open Access Journals (Sweden)
A. Ziemann
2017-11-01
Full Text Available An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks, although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is thus a crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool, as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated combination of acoustic and optical remote sensing methods. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper gives an overview of the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m−2 s−1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for the CO2 concentration, accounting for environmental parameters, instrumental characteristics, and the retrieval procedure, was estimated at a total of approximately
Directory of Open Access Journals (Sweden)
Douglas A. Fynan
2016-06-01
Full Text Available The Gaussian process model (GPM) is a flexible surrogate model that can be used for nonparametric regression on multivariate problems. A unique feature of the GPM is that a prediction variance is automatically provided with the regression function. In this paper, we estimate the safety margin of a nuclear power plant by performing regression on the output of best-estimate simulations of a large-break loss-of-coolant accident, with sampling of safety system configuration, sequence timing, technical specifications, and thermal-hydraulic parameter uncertainties. The key aspect of our approach is that the GPM regression is performed only on the dominant input variables, the safety injection flow rate and the delay time for AC-powered pumps to start (representing sequence timing uncertainty), providing a predictive model for the peak clad temperature during the reflood phase. Other uncertainties are interpreted as contributors to the measurement noise of the code output and are implicitly treated in the GPM through the noise variance term, providing local uncertainty bounds for the peak clad temperature. We discuss the applicability of the foregoing method to reduce the use of conservative assumptions in best estimate plus uncertainty (BEPU) analyses and Level 1 probabilistic safety assessment (PSA) success criteria definitions while dealing with a large number of uncertainties.
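The two ingredients the abstract relies on, a GP regression surface plus a white-noise term absorbing non-dominant uncertainties, can be sketched with scikit-learn. The one-dimensional toy response below stands in for code output versus a single dominant input; the function and noise level are assumptions for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Toy stand-in for best-estimate code output vs. one dominant input variable
X = rng.uniform(0, 10, size=(40, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 40)  # scatter from non-dominant inputs

# WhiteKernel plays the role of the noise variance term in the abstract:
# it absorbs variability not explained by the dominant input
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_test = np.array([[5.0]])
mean, std = gp.predict(X_test, return_std=True)  # prediction + local uncertainty
```

The returned `std` is the automatic prediction variance the abstract highlights; in the paper's setting it supplies local uncertainty bounds on the peak clad temperature.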
Dumedah, Gift; Walker, Jeffrey P.
2017-03-01
The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or observation system. The impact of input forcing data on model estimation accuracy has been collectively acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model to obtain a competitive estimation remain mostly unknown. A better understanding is needed of how models respond to input forcing data and what changes in these forcing variables can be accommodated without deteriorating the optimal estimation of the model. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south-eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation method. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for soil moisture at 5 cm depth and 15% in RMSE at 15 cm depth. This specific quantification is crucial to illustrate the significance of input forcing data spread. The acceptable uncertainty determined based on the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper
Elsworth, D.
2013-12-01
Significant uncertainties remain and influence the recovery of energy from the subsurface. These uncertainties include the fate and transport of long-lived radioactive wastes that result from the generation of nuclear power, which have been the focus of an active network of international underground research laboratories dating back at least 35 years. However, other nascent carbon-free energy technologies, including conventional and EGS geothermal methods, carbon-neutral methods such as carbon capture and sequestration, and the utilization of reduced-carbon resources such as unconventional gas reservoirs, offer significant challenges in their effective deployment. We illustrate the important role that in situ experiments may play in resolving behaviors at extended length- and time-scales for issues related to chemical-mechanical interactions. Significantly, these include the evolution of transport and mechanical characteristics of stress-sensitive fractured media and their influence on the long-term behavior of the system. Importantly, these interests typically relate to either creating reservoirs (hydroshearing in EGS reservoirs, artificial fractures in shales and coals) or maintaining seals at depth where the permeating fluids may include mixed brines, CO2, methane and other hydrocarbons. Critical questions relate to the interaction of these various fluid mixtures and compositions with the fractured substrate. Important needs are in understanding the roles of key processes (transmission, dissolution, precipitation, sorption and dynamic stressing) in the modification of effective stresses and their influence on the evolution of permeability, strength and induced seismicity, and on the resulting development of either wanted or unwanted fluid pathways. In situ experimentation has already contributed to addressing some crucial issues of these complex interactions at field scale. Important contributions are noted in understanding the fate and transport of long-lived wastes
It Pays to Compare: An Experimental Study on Computational Estimation
Star, Jon R.; Rittle-Johnson, Bethany
2009-01-01
Comparing and contrasting examples is a core cognitive process that supports learning in children and adults across a variety of topics. In this experimental study, we evaluated the benefits of supporting comparison in a classroom context for children learning about computational estimation. Fifth- and sixth-grade students (N = 157) learned about…
Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.
2016-01-01
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
New Measurement Method and Uncertainty Estimation for Plate Dimensions and Surface Quality
Directory of Open Access Journals (Sweden)
Salah H. R. Ali
2013-01-01
Full Text Available Dimensional and surface quality control for plate production faces difficult engineering challenges. One of these challenges is that plates in large-scale mass production contain geometrically uneven surfaces. A traditional measurement method is used to assess tile plate dimensions and surface quality based on standard specifications: ISO-10545-2: 1995, EOS-3168-2: 2007, and TIS 2398-2: 2008. A new measurement method for the dimensions and surface quality of ceramic oblong large-scale tile plates has been developed and compared to the traditional method. The strategy of the new method is based on a CMM straightness measurement strategy instead of the centre-point strategy of the traditional method. Expanded uncertainty budgets for the measurements of each method have been estimated in detail. Accurate estimation of actual results for the centre of curvature (CC), centre of edge (CE), warpage (W), and edge crack defect parameters has been achieved according to the standards. Moreover, the obtained results not only showed the new method to be more accurate but also significantly improved the quality of plate products.
Souverijns, N.; Gossart, A.; Lhermitte, S.; Gorodetskaya, I. V.; Kneifel, S.; Maahn, M.; Bliven, F. L.; van Lipzig, N. P. M.
2017-11-01
Snowfall rate (SR) estimates over Antarctica are sparse and characterised by large uncertainties. Yet, observations by precipitation radar offer the potential to gain better insight into Antarctic SR. Relations between radar reflectivity (Ze) and snowfall rate (Ze-SR relations) are, however, not available over Antarctica. Here, we analyse observations from the first Micro Rain Radar (MRR) in Antarctica together with an optical disdrometer (Precipitation Imaging Package; PIP), deployed at the Princess Elisabeth station. The relation Ze = A·SR^B was derived using PIP observations and its uncertainty was quantified using a bootstrapping approach, randomly sampling within the range of uncertainty. This uncertainty was used to assess the uncertainty in snowfall rates derived by the MRR. We find A = 18 [11-43] and B = 1.10 [0.97-1.17]. The uncertainty in MRR snowfall rates based on the Ze-SR relation is limited to 40%, due to the propagation of uncertainty in both Ze and SR, resulting in some compensation. The prefactor (A) of the Ze-SR relation is sensitive to the median diameter of the snow particles. Larger particles, typically found closer to the coast, lead to an increase of the value of the prefactor (A = 44). Smaller particles, typical of more inland locations, yield lower values of the prefactor (A = 7). The exponent (B) of the Ze-SR relation is insensitive to the median diameter of the snow particles. In contrast with previous studies for various locations, shape uncertainty is not the main source of uncertainty in the Ze-SR relation. Parameter uncertainty is found to be the dominant term, mainly driven by the uncertainty in the mass-size relation of different snow particles. Uncertainties in the snow particle size distribution are negligible in this study as it is directly measured. Future research aiming at reducing the uncertainty of Ze-SR relations should therefore focus on obtaining reliable estimates of the mass-size relations of
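The bootstrapping of the power-law fit described above can be sketched in a few lines: fit Ze = A·SR^B in log-log space, then refit on resampled data to get ranges on A and B. The synthetic sample below (A = 18, B = 1.1 with lognormal scatter) is an assumption standing in for the PIP observations:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic disdrometer-like sample: Ze = A * SR^B with multiplicative scatter
SR = rng.uniform(0.1, 5.0, 300)                   # mm/h (assumed range)
Ze = 18.0 * SR**1.1 * rng.lognormal(0.0, 0.3, 300)

def fit_power_law(sr, ze):
    # Linear least squares in log-log space: log Ze = log A + B log SR
    B, logA = np.polyfit(np.log(sr), np.log(ze), 1)
    return np.exp(logA), B

A_hat, B_hat = fit_power_law(SR, Ze)

# Bootstrap: refit on resampled data to quantify parameter uncertainty
boot = [fit_power_law(SR[idx], Ze[idx])
        for idx in (rng.integers(0, 300, 300) for _ in range(1000))]
A_lo, A_hi = np.percentile([a for a, _ in boot], [2.5, 97.5])
B_lo, B_hi = np.percentile([b for _, b in boot], [2.5, 97.5])
```

The bracketed intervals reported in the abstract (A = 18 [11-43], B = 1.10 [0.97-1.17]) are exactly this kind of bootstrap percentile range, propagated onward to the MRR-derived snowfall rates.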
Experimental Bayesian Quantum Phase Estimation on a Silicon Photonic Chip.
Paesani, S; Gentile, A A; Santagati, R; Wang, J; Wiebe, N; Tew, D P; O'Brien, J L; Thompson, M G
2017-03-10
Quantum phase estimation is a fundamental subroutine in many quantum algorithms, including Shor's factorization algorithm and quantum simulation. However, so far results have cast doubt on its practicability for near-term, non-fault-tolerant quantum devices. Here we report experimental results demonstrating that this intuition need not be true. We implement a recently proposed adaptive Bayesian approach to quantum phase estimation and use it to simulate molecular energies on a silicon quantum photonic device. The approach is verified to be well suited for pre-threshold quantum processors by investigating its superior robustness to noise and decoherence compared to the iterative phase estimation algorithm. This shows a promising route to unlocking the power of quantum phase estimation much sooner than previously believed.
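The core of Bayesian phase estimation is a simple posterior update: each binary measurement outcome, with likelihood depending on the unknown phase, reweights a prior over that phase. A grid-based toy sketch (not the paper's adaptive experiment-design scheme; the likelihood model and settings are standard textbook assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
true_phase = 1.234  # unknown phase to be estimated (assumed for simulation)

# Discretised prior over the phase
phases = np.linspace(0, 2 * np.pi, 1000)
prior = np.ones_like(phases) / len(phases)

# Simulated experiments: outcome 0 occurs with prob (1 + cos(M*phi + theta)) / 2
for _ in range(50):
    M = rng.integers(1, 5)               # number of applications of the unitary
    theta = rng.uniform(0, 2 * np.pi)    # controlled rotation angle
    p0 = 0.5 * (1 + np.cos(M * true_phase + theta))
    outcome_zero = rng.uniform() < p0

    like = 0.5 * (1 + np.cos(M * phases + theta))
    prior = prior * (like if outcome_zero else (1 - like))
    prior /= prior.sum()                 # Bayes update on the grid

estimate = phases[np.argmax(prior)]      # posterior mode as phase estimate
```

The adaptive element in the paper chooses M and theta to maximise information gain each round; here they are drawn at random, which still concentrates the posterior, just less efficiently.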
Souverijns, Niels; Gossart, Alexandra; Lhermitte, Stef; Gorodetskaya, Irina; Kneifel, Stefan; Maahn, Maximilian; Bliven, Francis; van Lipzig, Nicole
2017-04-01
The Antarctic Ice Sheet (AIS) is the largest ice body on Earth, with a volume equivalent to 58.3 m of global mean sea level rise. Precipitation is the dominant source term in the surface mass balance of the AIS. However, this quantity is not well constrained in either models or observations. Direct observations over the AIS are also not coherent, as they are sparse in space and time and acquisition techniques differ. As a result, precipitation observations remain mostly limited to continent-wide averages based on satellite radar observations. Snowfall rate (SR) at high temporal resolution can be derived from the ground-based radar effective reflectivity factor (Z) using information about snow particle size and shape. Here we present reflectivity-snowfall rate relations (Z = a·SR^b) for the East Antarctic escarpment region using the measurements at the Princess Elisabeth (PE) station and an overview of their uncertainties. A novel technique is developed by combining an optical disdrometer (NASA's Precipitation Imaging Package; PIP) and a vertically pointing 24 GHz FMCW micro rain radar (Metek's MRR) in order to reduce the uncertainty in SR estimates. PIP is used to obtain information about snow particle characteristics and to get an estimate of Z, SR and the Z-SR relation. For PE, located 173 km inland, the relation equals Z = 18·SR^1.1. The prefactor (a) of the relation is sensitive to the median diameter of the particles. Larger particles, found closer to the coast, lead to an increase of the value of the prefactor. More inland locations, where smaller snow particles are found, obtain lower values for the prefactor. The exponent of the Z-SR relation (b) is insensitive to the median diameter of the snow particles. This dependence of the prefactor of the Z-SR relation on the particle size needs to be taken into account when converting radar reflectivities to snowfall rates over Antarctica. The uncertainty on the Z-SR relations is quantified using a bootstrapping approach
Langbein, John O.
2012-01-01
Recent studies have documented that global positioning system (GPS) time series of position estimates have temporal correlations, which have been modeled as a combination of power-law and white noise processes. When estimating quantities such as a constant rate from GPS time series data, the estimated uncertainties on these quantities are more realistic when using a noise model that includes temporal correlations than when simply assuming temporally uncorrelated noise. However, the choice of the specific representation of correlated noise can affect the estimate of uncertainty. For many GPS time series, the background noise can be represented either (1) as a sum of flicker and random-walk noise or (2) as a power-law noise model that represents an average of the flicker and random-walk noise. For instance, if the underlying noise model is a combination of flicker and random-walk noise, then incorrectly choosing the power-law model could underestimate the rate uncertainty by a factor of two. Distinguishing between the two alternative noise models is difficult, since the flicker component can dominate the assessment of the noise properties because it is spread over a significant portion of the measurable frequency band. But, although not necessarily detectable, the random-walk component can be a major constituent of the estimated rate uncertainty. Nonetheless, it is possible to determine the upper bound on the random-walk noise.
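The underestimation of rate uncertainty under an incorrect noise model is easy to demonstrate numerically: fit a linear rate to pure random-walk series and compare the empirical scatter of the fitted rates with the formal uncertainty that a white-noise assumption would report. The series length and noise amplitude below are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 1000, 1.0  # daily position samples, step amplitude (assumed scale)
t = np.arange(n)

# Empirical scatter of OLS-estimated rates over random-walk realizations
rates = []
for _ in range(500):
    walk = np.cumsum(rng.normal(0, sigma, n))  # random-walk noise, zero true rate
    rates.append(np.polyfit(t, walk, 1)[0])    # fitted slope of each realization
empirical = np.std(rates)

# Formal rate uncertainty if the same series were temporally uncorrelated
white_formal = sigma / np.sqrt(np.sum((t - t.mean())**2))

ratio = empirical / white_formal  # >> 1: white-noise model underestimates badly
```

For random-walk noise the true rate uncertainty scales like n^-1/2 while the white-noise formula scales like n^-3/2, so the underestimation grows with series length; this is the mechanism behind the factor-of-two case quoted in the abstract.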
Experimental Research Examining How People Can Cope with Uncertainty Through Soft Haptic Sensations.
van Horen, Femke; Mussweiler, Thomas
2015-09-16
Human beings are constantly surrounded by uncertainty and change. The question arises how people cope with such uncertainty. To date, most research has focused on the cognitive strategies people adopt to deal with uncertainty. However, especially when uncertainty is due to unpredictable societal events (e.g., economic crises, political revolutions, terrorism threats) whose impact on one's future life one is unable to judge, cognitive strategies (like seeking additional information) are likely to fail to combat uncertainty. Instead, the current paper discusses a method demonstrating that people might deal with uncertainty experientially through soft haptic sensations. More specifically, because touching something soft creates a feeling of comfort and security, people prefer objects with softer as compared to harder properties when feeling uncertain. Seeking softness is a highly efficient and effective tool for dealing with uncertainty, as our hands are available at all times. This protocol describes a set of methods demonstrating 1) how environmental (un)certainty can be situationally activated with an experiential priming procedure, 2) that the quality of the softness experience (what type of softness and how it is experienced) matters, and 3) how uncertainty can be reduced using different methods.
Meszaros, Lorinc; El Serafy, Ghada
2017-04-01
Phytoplankton blooms in coastal ecosystems such as the Wadden Sea may cause mortality of mussels and other benthic organisms. Furthermore, algal primary production is the base of the food web and therefore greatly influences fisheries and aquacultures. Consequently, accurate phytoplankton concentration prediction offers ecological and economic benefits. Numerical ecosystem models are powerful tools to compute water quality variables, including the phytoplankton concentration. Nevertheless, their accuracy ultimately depends on the uncertainty stemming from the external forcings, which further propagates through, and is compounded by, the non-linear ecological processes incorporated in the model. The Wadden Sea is a shallow, dynamically varying ecosystem with high turbidity, and therefore uncertainty in the Suspended Particulate Matter (SPM) concentration field greatly influences the prediction of water quality variables. Considering the high level of uncertainty in the modelling process, it is advised that an uncertainty estimate be provided together with a single-valued deterministic model output. Through the use of an ensemble prediction system in the Dutch coastal waters, the uncertainty in the modelled chlorophyll-a concentration has been estimated. The input ensemble is generated from perturbed model process parameters and external forcings through Latin hypercube sampling with dependence (LHSD). The simulation is carried out using the Delft3D Generic Ecological Model (GEM) with the advanced algal speciation module BLOOM, which is sufficiently well validated for primary production simulation in the southern North Sea. The output ensemble is post-processed to obtain the uncertainty estimate, and the results are validated against in-situ measurements and Remote Sensing (RS) data. The spatial uncertainty of the chlorophyll-a concentration was derived using the produced ensemble spread maps. *This work has received funding from the European Union's Horizon
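The ensemble-generation step can be sketched with plain Latin hypercube sampling via SciPy (the abstract's LHSD variant additionally imposes dependence between inputs, which is not reproduced here). The two perturbed inputs, their ranges, and the toy response are assumptions for illustration:

```python
import numpy as np
from scipy.stats import qmc

# Stratified samples for two hypothetical perturbed inputs on the unit square
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=100)  # one sample per 1/100 stratum in each dimension

# Scale to assumed plausible ranges: (algal growth factor, SPM load mg/L)
lower, upper = [0.5, 10.0], [1.5, 50.0]
ensemble_inputs = qmc.scale(unit, lower, upper)

# Toy stand-in "model": chlorophyll-a responds nonlinearly to both inputs
chl = 20.0 * ensemble_inputs[:, 0] / (1.0 + 0.02 * ensemble_inputs[:, 1])

# Ensemble mean and spread: the uncertainty estimate accompanying the output
chl_mean, chl_spread = chl.mean(), chl.std()
```

In the paper's workflow each ensemble member is a full Delft3D-GEM run, and the per-pixel spread of the output ensemble yields the spatial uncertainty maps.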
J. Florian Wellmann
2013-01-01
The quantification and analysis of uncertainties is important in all cases where maps and models of uncertain properties are the basis for further decisions. Once these uncertainties are identified, the logical next step is to determine how they can be reduced. Information theory provides a framework for the analysis of spatial uncertainties when different subregions are considered as random variables. In the work presented here, joint entropy, conditional entropy, and mutual information are ...
Directory of Open Access Journals (Sweden)
J. Florian Wellmann
2013-04-01
Full Text Available The quantification and analysis of uncertainties is important in all cases where maps and models of uncertain properties are the basis for further decisions. Once these uncertainties are identified, the logical next step is to determine how they can be reduced. Information theory provides a framework for the analysis of spatial uncertainties when different subregions are considered as random variables. In the work presented here, joint entropy, conditional entropy, and mutual information are applied for a detailed analysis of spatial uncertainty correlations. The aim is to determine (i) which areas in a spatial analysis share information, and (ii) where, and by how much, additional information would reduce uncertainties. As an illustration, a typical geological example is evaluated: the case of a subsurface layer with uncertain depth, shape and thickness. Mutual information and multivariate conditional entropies are determined based on multiple simulated model realisations. Even for this simple case, the measures not only provide a clear picture of uncertainties and their correlations but also give detailed insights into the potential reduction of uncertainties at each position, given additional information at a different location. The methods are directly applicable to other types of spatial uncertainty evaluations, especially where multiple realisations of a model simulation are analysed. In summary, the application of information theoretic measures opens up the path to a better understanding of spatial uncertainties, and their relationship to information and prior knowledge, for cases where uncertain property distributions are spatially analysed and visualised in maps and models.
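The entropy measures described above can be computed directly from an ensemble of realisations by discretising the uncertain property at each location and counting states. A minimal sketch with two correlated "locations" (the shared-component construction is an assumption standing in for geological realisations):

```python
import numpy as np

rng = np.random.default_rng(5)

# 1000 simulated realisations of layer depth class at two locations;
# a shared component makes location B partially informative about A
shared = rng.normal(0, 1, 1000)
depth_a = np.digitize(shared + rng.normal(0, 0.5, 1000), bins=[-1, 0, 1])
depth_b = np.digitize(shared + rng.normal(0, 0.5, 1000), bins=[-1, 0, 1])

def entropy(labels):
    """Shannon entropy (bits) of a discrete label sample."""
    p = np.bincount(labels).astype(float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

H_a = entropy(depth_a)
H_b = entropy(depth_b)
H_joint = entropy(depth_a * 4 + depth_b)   # encode the joint state as one label
mutual_info = H_a + H_b - H_joint          # bits shared between the locations
cond_H = H_a - mutual_info                 # uncertainty remaining at A given B
```

`cond_H` quantifies the paper's question (ii): how much uncertainty would remain at one position if a measurement were taken at the other.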
Pairing in neutron matter: New uncertainty estimates and three-body forces
Drischler, C.; Krüger, T.; Hebeler, K.; Schwenk, A.
2017-02-01
We present solutions of the BCS gap equation in the ^1S_0 and ^3P_2-^3F_2 channels in neutron matter based on nuclear interactions derived within chiral effective field theory (EFT). Our studies are based on a representative set of nonlocal nucleon-nucleon (NN) plus three-nucleon (3N) interactions up to next-to-next-to-next-to-leading order (N3LO) as well as local and semilocal chiral NN interactions up to N2LO and N4LO, respectively. In particular, we investigate for the first time the impact of subleading 3N forces at N3LO on pairing gaps and also derive uncertainty estimates by taking into account results for pairing gaps at different orders in the chiral expansion. Finally, we discuss different methods for obtaining self-consistent solutions of the gap equation. Besides the widely used quasilinear method by Khodel et al., we demonstrate that the modified Broyden method is well applicable and exhibits robust convergence behavior. In contrast to Khodel's method, it is based on a direct iteration of the gap equation without imposing an auxiliary potential and is straightforward to implement.
Adaptive multiscale MCMC algorithm for uncertainty quantification in seismic parameter estimation
Tan, Xiaosi
2014-08-05
Formulating an inverse problem in a Bayesian framework has several major advantages (Sen and Stoffa, 1996). It allows finding multiple solutions subject to flexible a priori information and performing uncertainty quantification in the inverse problem. In this paper, we consider Bayesian inversion for parameter estimation in seismic wave propagation. Bayes' theorem allows writing the posterior distribution via the likelihood function and the prior distribution, where the latter represents our prior knowledge about physical properties. One of the popular algorithms for sampling this posterior distribution is Markov chain Monte Carlo (MCMC), which involves making proposals and calculating their acceptance probabilities. However, for large-scale problems, MCMC is prohibitively expensive, as it requires many forward runs. In this paper, we propose a multilevel MCMC algorithm that employs multilevel forward simulations. The multilevel forward simulations are derived using the Generalized Multiscale Finite Element Methods that we have proposed earlier (Efendiev et al., 2013a; Chung et al., 2013). Our overall Bayesian inversion approach provides a substantial speed-up both in the sampling process, via preconditioning using approximate posteriors, and in the computation of the forward problems for different proposals, by exploiting the adaptive nature of multiscale methods. These aspects of the method are discussed in the paper. This paper is motivated by earlier work of M. Sen and his collaborators (Hong and Sen, 2007; Hong, 2008), who proposed the development of efficient MCMC techniques for seismic applications. In the paper, we present some preliminary numerical results.
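The propose-and-accept loop at the heart of MCMC can be sketched with single-level random-walk Metropolis on a one-parameter toy posterior (the multilevel preconditioning is the paper's contribution and is not reproduced; the "seismic velocity" parameter, likelihood, and prior bounds below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy setup: one "seismic velocity" parameter, Gaussian measurement noise
data = rng.normal(2.5, 0.3, 20)          # observations around true value 2.5

def log_posterior(v):
    if not (0.0 < v < 10.0):             # uniform prior bounds (assumed)
        return -np.inf
    # Gaussian log-likelihood with known noise sigma = 0.3 (flat prior inside bounds)
    return -0.5 * np.sum((data - v)**2) / 0.3**2

# Random-walk Metropolis: propose, accept with prob min(1, posterior ratio)
chain, v = [], 1.0
for _ in range(5000):
    prop = v + rng.normal(0, 0.2)        # each proposal costs one "forward run"
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(v):
        v = prop
    chain.append(v)

posterior_mean = np.mean(chain[1000:])   # discard burn-in before summarising
```

Every iteration requires evaluating the likelihood, i.e. one forward simulation in the seismic setting; the multilevel scheme screens proposals with cheap coarse-scale solves so that the expensive fine-scale solve runs only for promising candidates.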
Experimental investigation of transverse flow estimation using transverse oscillation
DEFF Research Database (Denmark)
Udesen, Jesper; Jensen, Jørgen Arendt
2003-01-01
Conventional ultrasound scanners can only display the blood velocity component parallel to the ultrasound beam. Introducing a laterally oscillating field gives signals from which the transverse velocity component can be estimated using 2:1 parallel receive beamformers. To yield the performance...... with a mean relative bias of 6.3% and a mean relative standard deviation of 5.4% over the entire vessel length. With the experimental ultrasound scanner RASMUS, the simulations are reproduced in an experimental flow phantom using a linear array transducer and vessel characteristics as in the simulations....... The flow is generated with the Compuflow 1000 programmable flow pump, giving a parabolic velocity profile of the blood-mimicking fluid in the flow phantom. The profiles are estimated for 310 trials, each consisting of 32 data vectors. The relative mean bias over the entire blood vessel is found to be 10
Abichou, Tarek; Clark, Jeremy; Tan, Sze; Chanton, Jeffery; Hater, Gary; Green, Roger; Goldsmith, Doug; Barlaz, Morton A; Swan, Nathan
2010-04-01
Landfills represent a distributed emission source over an irregular and heterogeneous surface. In the method termed "Other Test Method-10" (OTM-10), the U.S. Environmental Protection Agency (EPA) has proposed quantifying emissions from such sources by the use of vertical radial plume mapping (VRPM) techniques combined with measurement of wind speed to determine the average emission flux per unit area per time from nonpoint sources. In this application, the VRPM is used as a tool to estimate the mass of the gas of interest crossing a vertical plane. This estimation is done by fitting the field-measured spatial concentration data to a Gaussian or some other distribution to define a plume crossing the vertical plane. When this technique is applied to landfill surfaces, the VRPM plane may be within the emitting source area itself. The objective of this study was to investigate uncertainties associated with using OTM-10 for landfills. The spatial variability of emissions in the emitting domain can lead to uncertainties of -34 to 190% in the measured flux value when idealized scenarios were simulated. The level of uncertainty may be higher when the number and locations of emitting sources are not known (typical field conditions). The level of uncertainty can be reduced by improving the layout of the VRPM plane in the field in accordance with an initial survey of the emission patterns. A change in wind direction during an OTM-10 testing setup can introduce an uncertainty of 20% of the measured flux value. This study also provides estimates of the area contributing to flux (ACF) to be used in conjunction with OTM-10 procedures. The estimate of ACF is a function of the atmospheric stability class and has an uncertainty of 10-30%.
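The fit-then-integrate step of the plume-mapping idea can be sketched with a one-dimensional Gaussian fit: fit the measured concentrations on the plane, then multiply the analytic integral of the fitted plume by the wind speed to get a mass-flux proxy. This is a simplified 1-D stand-in for the 2-D VRPM reconstruction, and all numbers (scan geometry, peak, noise, wind) are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)

# Simulated crosswind concentration scan along the vertical plane (assumed)
x = np.linspace(-50, 50, 21)                      # m, positions on the plane
true_amp, true_mu, true_sig = 80.0, 5.0, 12.0     # peak, offset, width (assumed)
conc = true_amp * np.exp(-0.5 * ((x - true_mu) / true_sig)**2) \
       + rng.normal(0, 2, 21)                     # measurement noise

def gauss(x, amp, mu, sig):
    return amp * np.exp(-0.5 * ((x - mu) / sig)**2)

popt, _ = curve_fit(gauss, x, conc, p0=[50, 0, 10])
amp, mu, sig = popt

# Crosswind-integrated concentration times wind speed ~ mass flux through plane
wind = 3.0                                        # m/s, assumed
flux = amp * abs(sig) * np.sqrt(2 * np.pi) * wind  # analytic integral of the fit
```

Errors in the fitted plume shape propagate directly into the flux, which is why source placement relative to the measurement plane drives the large uncertainty ranges quoted in the abstract.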
Scaling Factor Estimation Using Optimized Mass Change Strategy, Part 2: Experimental Results
DEFF Research Database (Denmark)
Fernández, Pelayo Fernández; Aenlle, Manuel López; Garcia, Luis M. Villa
2007-01-01
The mass change method is used to estimate the scaling factors; the uncertainty is reduced when, for each mode, the frequency shift is maximized and the changes in the mode shapes are minimized, which in turn depends on the mass change strategy chosen to modify the dynamic behavior of the structure. On the other hand, the aforementioned objectives are difficult to achieve for all modes simultaneously. Thus, a study of the number, magnitude and location of the masses must be performed prior to the modal tests. In this paper, the mass change method was applied to estimate the scaling factors of a steel cantilever beam. The effect of the mass change strategy was experimentally studied by performing several modal tests in which the magnitude, the location and the number of the attached masses were changed.
David M. Bell; Eric J. Ward; A. Christopher Oishi; Ram Oren; Paul G. Flikkema; James S. Clark; David Whitehead
2015-01-01
Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as...
This paper presents a new method based on a statistical approach of estimating the uncertainty in simulating the transport and dispersion of atmospheric pollutants. The application of the method has been demonstrated by using observations and modeling results from a tracer experi...
DEFF Research Database (Denmark)
Cheali, Peam; Gernaey, Krist; Sin, Gürkan
2015-01-01
robust decision-making under uncertainties. One of the results using order-of-magnitude estimates shows that the production of diethyl ether and 1,3-butadiene are the most promising with the lowest economic risks (among the alternatives considered) of 0.24 MM$/a and 4.6 MM$/a, respectively....
Lopez Lopez, P.; Verkade, J.S.; Weerts, A.H.; Solomatine, D.P.
2014-01-01
The present study comprises an intercomparison of different configurations of a statistical post-processor that is used to estimate predictive hydrological uncertainty. It builds on earlier work by Weerts, Winsemius and Verkade (2011; hereafter referred to as WWV2011), who used the quantile
Saikawa, Eri; Trail, Marcus; Zhong, Min; Wu, Qianru; Young, Cindy L.; Janssens-Maenhout, Greet; Klimont, Zbigniew; Wagner, Fabian; Kurokawa, Jun-ichi; Singh Nagpure, Ajay; Ram Gurjar, Bhola
2017-05-01
Greenhouse gas and air pollutant precursor emissions have been increasing rapidly in India. Large uncertainties exist in emissions inventories and quantification of their uncertainties is essential for better understanding of the linkages among emissions and air quality, climate, and health. We use Monte Carlo methods to assess the uncertainties of the existing carbon dioxide (CO2), carbon monoxide (CO), sulfur dioxide (SO2), nitrogen oxides (NOx), and particulate matter (PM) emission estimates from four source sectors for India. We also assess differences in the existing emissions estimates within the nine subnational regions. We find large uncertainties, higher than the current estimates for all species other than CO, when all the existing emissions estimates are combined. We further assess the impact of these differences in emissions on air quality using a chemical transport model. More efforts are needed to constrain emissions, especially in the Indo-Gangetic Plain, where not only the emissions differences are high but also the simulated concentrations using different inventories. Our study highlights the importance of constraining SO2, NOx, and NH3 emissions for secondary PM concentrations.
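A Monte Carlo propagation of this kind can be sketched as follows. The lognormal emission-factor model, the function name, and all numbers are illustrative assumptions, not the study's actual inventory configuration.

```python
import numpy as np

def mc_emissions(activity, ef_median, ef_gsd, n=100_000, seed=0):
    """Propagate emission-factor uncertainty by Monte Carlo.

    Emissions = activity * emission factor, with the factor drawn
    from a lognormal distribution of given median and geometric
    standard deviation (a common inventory assumption). Returns the
    2.5th, 50th and 97.5th percentiles of total emissions.
    """
    rng = np.random.default_rng(seed)
    factors = ef_median * np.exp(rng.normal(0.0, np.log(ef_gsd), n))
    emissions = activity * factors
    return np.percentile(emissions, [2.5, 50.0, 97.5])

# hypothetical sector: 1e6 t fuel burned, EF median 2 kg/t, GSD 1.5
lo, med, hi = mc_emissions(activity=1.0e6, ef_median=2.0, ef_gsd=1.5)
```

The 2.5-97.5% spread gives the kind of asymmetric uncertainty range that inventory intercomparisons report.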
Scholefield, P. A.; Arnscheidt, J.; Jordan, P.; Beven, K.; Heathwaite, L.
2007-12-01
The uncertainties associated with stream nutrient transport estimates are frequently overlooked, and the sampling strategy is rarely if ever investigated. Indeed, the impact of sampling strategy and estimation method on the bias and precision of stream phosphorus (P) transport calculations is little understood, despite the use of such values in the calibration and testing of models of phosphorus transport. The objectives of this research were to investigate the variability and uncertainty in the estimates of total phosphorus transfers at an intensively monitored agricultural catchment. The Oona Water, which is located in the Irish border region, is part of a long-term monitoring program focusing on water quality. The Oona Water is a rural river catchment with grassland agriculture and scattered dwelling houses and has been monitored for total phosphorus (TP) at 10 min resolution for several years (Jordan et al., 2007). Concurrent sensitive measurements of discharge are also collected. The water quality and discharge data were provided at 1 h resolution (averaged), which meant that a robust estimate of the annual flow-weighted concentration could be obtained by simple interpolation between points. A two-strata approach (Kronvang and Bruhn, 1996) was used to estimate flow-weighted concentrations using randomly sampled storm events from the 400 identified within the time series, together with base-flow concentrations. Using a random stratified sampling approach for the selection of events, series ranging from 10 through to the full 400 events were used, each time generating a flow-weighted mean using a load-discharge relationship identified through log-log regression and Monte Carlo simulation. These values were then compared to the observed total phosphorus concentration for the catchment. Analysis of these results shows the impact of sampling strategy, the inherent bias in any estimate of phosphorus concentrations, and the uncertainty associated with such estimates.
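The flow-weighted concentration and the two-strata (base flow plus storm flow) split described above reduce to a few lines. This is a minimal sketch with hypothetical function names; the study's actual estimator additionally uses a log-log load-discharge regression and Monte Carlo resampling of storm events.

```python
import numpy as np

def flow_weighted_conc(flow, conc):
    """Flow-weighted mean concentration: sum(Q*C) / sum(Q)."""
    flow, conc = np.asarray(flow, float), np.asarray(conc, float)
    return float(np.sum(flow * conc) / np.sum(flow))

def two_strata_load(base_q, base_c, storm_q, storm_c):
    """Two-strata load (after Kronvang and Bruhn, 1996): base-flow
    and storm-flow loads computed separately, then summed.
    Load units are flow x concentration per time step."""
    base = np.sum(np.asarray(base_q, float) * np.asarray(base_c, float))
    storm = np.sum(np.asarray(storm_q, float) * np.asarray(storm_c, float))
    return float(base + storm)
```

For example, discharges of 1 and 3 m3/s with TP concentrations of 2 and 4 mg/l give a flow-weighted mean of 3.5 mg/l, higher than the arithmetic mean of 3.0 because the high-flow sample carries more weight.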
Gosset, Marielle; Casse, Claire; Peugeot, christophe; boone, aaron; pedinotti, vanessa
2015-04-01
Global measurement of rainfall offers new opportunities for hydrological monitoring, especially for some of the largest tropical rivers, where the rain gauge network is sparse and radar is not available. A member of the GPM constellation, the new French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget in the tropical atmosphere, contributes to better monitoring of rainfall in the intertropical zone. As part of this mission, research has been developed on the use of satellite rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performance and rain-product uncertainties are resolution dependent. This paper analyses the potential of satellite rainfall products combined with hydrological modeling to monitor the Niger river floods in the city of Niamey, Niger. A dramatic increase in these floods has been observed in recent decades. The study focuses on the 125,000 km2 area in the vicinity of Niamey, where local runoff is responsible for the most extreme floods recorded in recent years. Several rainfall products are tested as forcing for the SURFEX-TRIP hydrological simulations. Differences in terms of rainfall amount, number of rainy days, spatial extension of the rainfall events, and frequency distribution of the rain rates are found among the products. Their impacts on the simulated outflow are analyzed. The simulations based on the real-time estimates produce an excess in the discharge. For flood prediction, the problem can be overcome by a prior adjustment of the products - as done here with probability matching - or by analysing the simulated discharge in terms of percentile or anomaly. All tested products exhibit some...
Uncertainty in age-specific harvest estimates and consequences for white-tailed deer management
Collier, B.A.; Krementz, D.G.
2007-01-01
Age structure proportions (proportion of harvested individuals within each age class) are commonly used as support for regulatory restrictions and as input for deer population models. Such use requires critical evaluation when harvest regulations force hunters to selectively harvest specific age classes, due to the impact on the underlying population age structure. We used a stochastic population simulation model to evaluate the impact of using harvest proportions to evaluate changes in population age structure under a selective harvest management program at two scales. Using harvest proportions to parameterize the age-specific harvest segment of the model for the local scale showed that predictions of post-harvest age structure did not vary depending on whether selective harvest criteria were in use or not. At the county scale, yearling frequency in the post-harvest population increased, but model predictions indicated that the post-harvest population size of 2.5-year-old males would decline below levels found before implementation of the antler restriction, reducing the number of individuals recruited into older age classes. Across the range of age-specific harvest rates modeled, our simulation predicted that underestimation of age-specific harvest rates has considerable influence on predictions of post-harvest population age structure. We found that the consequence of uncertainty in harvest rates corresponds to uncertainty in predictions of residual population structure, and this correspondence is proportional to scale. Our simulations also indicate that regardless of the use of harvest proportions or harvest rates, at either the local or county scale the modeled selective harvest criteria had a high probability (>0.60 and >0.75, respectively) of eliminating recruitment into >2.5-year-old age classes. Although frequently used to increase population age structure, our modeling indicated that selective harvest criteria can decrease or eliminate the number of white-tailed deer recruited into older...
Verhulst, Kristal R.; Karion, Anna; Kim, Jooil; Salameh, Peter K.; Keeling, Ralph F.; Newman, Sally; Miller, John; Sloop, Christopher; Pongetti, Thomas; Rao, Preeti; Wong, Clare; Hopkins, Francesca M.; Yadav, Vineet; Weiss, Ray F.; Duren, Riley M.; Miller, Charles E.
2017-07-01
We report continuous surface observations of carbon dioxide (CO2) and methane (CH4) from the Los Angeles (LA) Megacity Carbon Project during 2015. We devised a calibration strategy, methods for selection of background air masses, calculation of urban enhancements, and a detailed algorithm for estimating uncertainties in urban-scale CO2 and CH4 measurements. These methods are essential for understanding carbon fluxes from the LA megacity and other complex urban environments globally. We estimate background mole fractions entering LA using observations from four extra-urban sites including two marine sites located south of LA in La Jolla (LJO) and offshore on San Clemente Island (SCI), one continental site located in Victorville (VIC), in the high desert northeast of LA, and one continental/mid-troposphere site located on Mount Wilson (MWO) in the San Gabriel Mountains. We find that a local marine background can be established to within ˜ 1 ppm CO2 and ˜ 10 ppb CH4 using these local measurement sites. Overall, atmospheric carbon dioxide and methane levels are highly variable across Los Angeles. Urban and suburban sites show moderate to large CO2 and CH4 enhancements relative to a marine background estimate. The USC (University of Southern California) site near downtown LA exhibits median hourly enhancements of ˜ 20 ppm CO2 and ˜ 150 ppb CH4 during 2015 as well as ˜ 15 ppm CO2 and ˜ 80 ppb CH4 during mid-afternoon hours (12:00-16:00 LT, local time), which is the typical period of focus for flux inversions. The estimated measurement uncertainty is typically better than 0.1 ppm CO2 and 1 ppb CH4 based on the repeated standard gas measurements from the LA sites during the last 2 years, similar to Andrews et al. (2014). The largest component of the measurement uncertainty is due to the single-point calibration method; however, the uncertainty in the background mole fraction is much larger than the measurement uncertainty. The background uncertainty for the marine
Pulles, M.P.J.; Kok, H.; Quass, U.
2006-01-01
This study uses an improved emission inventory model to assess the uncertainties in emissions of dioxins and furans associated with both knowledge on the exact technologies and processes used, and with the uncertainties of both activity data and emission factors. The annual total emissions for the
Addressing uncertainties in estimates of recoverable gas for underexplored Shale gas Basins
Heege, J.H. ter; Zijp, M.H.A.A.; Bruin, G. de; Veen, J.H. ten
2014-01-01
Uncertainties in upfront predictions of hydraulic fracturing and gas production of underexplored shale gas targets are important as often large potential resources are deduced based on limited available data. In this paper, uncertainties are quantified by using normal distributions of different
Comparison of uncertainties in carbon sequestration estimates for a tropical and a temperate forest
Nabuurs, G.J.; Putten, van B.; Knippers, T.S.; Mohren, G.M.J.
2008-01-01
We compare uncertainty through sensitivity and uncertainty analyses of the modelling framework CO2FIX V.2. We apply the analyses to a Central European managed Norway spruce stand and a secondary tropical forest in Central America. Based on literature and experience we use three standard groups to
Kenneth E. Skog; Kim Pingoud; James E. Smith
2004-01-01
A method is suggested for estimating additions to carbon stored in harvested wood products (HWP) and for evaluating uncertainty. The method uses data on HWP production and trade from several decades and tracks annual additions to pools of HWP in use, removals from use, additions to solid waste disposal sites (SWDS), and decay from SWDS. The method is consistent with...
AUTHOR|(INSPIRE)INSPIRE-00534683; The ATLAS collaboration
2016-01-01
The jet energy scale (JES) uncertainty is estimated using different methods in different pT ranges. In situ techniques exploiting the pT balance between a jet and a reference object (e.g. Z or gamma) are used at lower pT, but at very high pT (> 2.5 TeV) there are not enough statistics for in situ techniques. The JES uncertainty at high pT is important in several searches for new phenomena, e.g. the dijet resonance and angular searches. In the highest pT range, the JES uncertainty is estimated using the calorimeter response to single hadrons. In this method, jets are treated as a superposition of energy depositions of single particles. An uncertainty is applied to each energy deposition belonging to the particles within the jet and propagated to the final jet energy scale. This poster presents the JES uncertainty found with this method at sqrt(s) = 8 TeV and its developments.
Energy Technology Data Exchange (ETDEWEB)
Ohnishi, S., E-mail: ohnishi@nmri.go.jp [National Maritime Research Institute, 6-38-1, Shinkawa, Mitaka, Tokyo 181-0004 (Japan); Thornton, B. [Institute of Industrial Science, The University of Tokyo, 4-6-1, Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Kamada, S.; Hirao, Y.; Ura, T.; Odano, N. [National Maritime Research Institute, 6-38-1, Shinkawa, Mitaka, Tokyo 181-0004 (Japan)
2016-05-21
Factors to convert the count rate of a NaI(Tl) scintillation detector to the concentration of radioactive cesium in marine sediments are estimated for a towed gamma-ray detector system. The response of the detector to a unit concentration of radioactive cesium is calculated by Monte Carlo radiation transport simulation, considering the vertical profile of radioactive material measured in core samples. The conversion factors are acquired by integrating the contribution of each layer and are normalized by the concentration in the surface sediment layer. At the same time, the uncertainty of the conversion factors is formulated and estimated. The combined standard uncertainty of the radioactive cesium concentration by the towed gamma-ray detector is around 25 percent. The values of uncertainty, often referred to as relative root mean square errors in other works, between sediment core sampling measurements and towed detector measurements were 16 percent in the investigation made near the Abukuma River mouth and 5.2 percent in Sendai Bay, respectively. Most of the uncertainty is due to interpolation of the conversion factors between core samples and uncertainty in the detector's burial depth. The results of the towed measurements agree well with laboratory-analysed sediment samples. Also, the concentrations of radioactive cesium at the intersections of the survey lines are consistent. The consistency with sampling results and between different lines' transects demonstrates the applicability and reproducibility of the towed gamma-ray detector system.
Added Value of uncertainty Estimates of SOurce term and Meteorology (AVESOME)
DEFF Research Database (Denmark)
Sørensen, Jens Havskov; Schönfeldt, Fredrik; Sigg, Robert
In the recent NKS-B projects MUD, FAUNA and MESO, the implications of meteorological uncertainties for nuclear emergency preparedness and management have been studied, and means for operational real-time assessment of the uncertainties in a nuclear DSS have been described and demonstrated. In AVESOME, we address the uncertainty of the radionuclide source term, i.e. the amounts of radionuclides released and the temporal evolution of the release. Furthermore, the combined uncertainty in atmospheric dispersion model forecasting stemming from both the source term and the meteorological data is examined. Ways to implement the uncertainties of forecasting in DSSs, and the impacts on real-time emergency management, are described. The proposed methodology allows for efficient real-time ... e.g. at national meteorological services, the proposed methodology is feasible for real-time use, thereby adding value to decision support.
Directory of Open Access Journals (Sweden)
B. Ford
2016-03-01
The negative impacts of fine particulate matter (PM2.5) exposure on human health are a primary motivator for air quality research. However, estimates of the air pollution health burden vary considerably and strongly depend on the data sets and methodology. Satellite observations of aerosol optical depth (AOD) have been widely used to overcome limited coverage from surface monitoring and to assess the global population exposure to PM2.5 and the associated premature mortality. Here we quantify the uncertainty in determining the burden of disease using this approach, discuss different methods and data sets, and explain sources of discrepancies among values in the literature. For this purpose we primarily use the MODIS satellite observations in concert with the GEOS-Chem chemical transport model. We contrast results in the United States and China for the years 2004–2011. Using the Burnett et al. (2014) integrated exposure response function, we estimate that in the United States, exposure to PM2.5 accounts for approximately 2 % of total deaths, compared to 14 % in China (using satellite-based exposure), which falls within the range of previous estimates. The difference in estimated mortality burden based solely on a global model vs. that derived from satellite is approximately 14 % for the US and 2 % for China on a nationwide basis, although regionally the differences can be much greater. This difference is overshadowed by the uncertainty in the methodology for deriving the PM2.5 burden from satellite observations, which we quantify to be on the order of 20 % due to uncertainties in the AOD-to-surface-PM2.5 relationship, 10 % due to the satellite observational uncertainty, and 30 % or greater uncertainty associated with the application of concentration response functions to estimated exposure.
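The mortality-burden arithmetic behind such estimates can be sketched with a simplified population attributable fraction. The function below is an illustration only: it assumes a single relative risk value and omits the concentration-dependent shape of the full Burnett et al. integrated exposure-response function.

```python
def attributable_deaths(population, baseline_mortality_rate, relative_risk):
    """Premature deaths attributable to exposure, using the
    population attributable fraction PAF = (RR - 1) / RR applied to
    baseline deaths. In practice RR comes from an exposure-response
    function evaluated at the estimated PM2.5 concentration."""
    paf = (relative_risk - 1.0) / relative_risk
    return population * baseline_mortality_rate * paf

# hypothetical city: 1 million people, 0.8% baseline mortality rate,
# relative risk of 1.25 at the estimated exposure
excess = attributable_deaths(1.0e6, 0.008, 1.25)
```

Because the estimate is multiplicative, fractional uncertainty in exposure, RR, or baseline mortality propagates almost directly into the death count.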
Dead time effect on the Brewer measurements: correction and estimated uncertainties
Fountoulakis, Ilias; Redondas, Alberto; Bais, Alkiviadis F.; José Rodriguez-Franco, Juan; Fragkos, Konstantinos; Cede, Alexander
2016-04-01
Brewer spectrophotometers are widely used instruments which perform spectral measurements of the direct, the scattered and the global solar UV irradiance. By processing these measurements a variety of secondary products can be derived, such as the total columns of ozone (TOC), sulfur dioxide and nitrogen dioxide and aerosol optical properties. Estimating and limiting the uncertainties of the final products is of critical importance. High-quality data have many applications and can provide accurate estimations of trends. The dead time is specific for each instrument, and improper correction of the raw data for its effect may lead to important errors in the final products. The dead time value may change with time and, with the currently used methodology, it cannot always be determined accurately. For specific cases, such as for low ozone slant columns and high intensities of the direct solar irradiance, the error in the retrieved TOC, due to a 10 ns change in the dead time from its value in use, is found to be up to 5 %. The error in the calculation of UV irradiance can be as high as 12 % near the maximum operational limit of light intensities. While in the existing documentation it is indicated that the dead time effects are important when the error in the used value is greater than 2 ns, we found that for single-monochromator Brewers a 2 ns error in the dead time may lead to errors above the limit of 1 % in the calculation of TOC; thus the tolerance limit should be lowered. A new routine for the determination of the dead time from direct solar irradiance measurements has been created and tested, and a validation of the operational algorithm has been performed. Additionally, new methods for the estimation and the validation of the dead time have been developed and are analytically described. Therefore, the present study, in addition to highlighting the importance of the dead time for the processing of Brewer data sets, also provides useful information for their...
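Dead-time correction of photon counts is commonly modeled with the paralyzable counting equation N_obs = N_true * exp(-N_true * tau). Treating the Brewer counts this way is an assumption of this sketch, and the fixed-point inversion below is illustrative rather than the instrument's operational routine.

```python
import math

def correct_dead_time(observed_rate, tau_s, iterations=25):
    """Invert N_obs = N_true * exp(-N_true * tau) by fixed-point
    iteration N_true <- N_obs * exp(N_true * tau); this converges
    when N_true * tau < 1, which holds at normal count rates."""
    n_true = observed_rate
    for _ in range(iterations):
        n_true = observed_rate * math.exp(n_true * tau_s)
    return n_true

# hypothetical: 1e6 counts/s observed with a 30 ns dead time
n_true = correct_dead_time(1.0e6, 30e-9)
```

A few ns error in tau shifts the corrected rate, and hence every product derived from it, which is the sensitivity the study quantifies.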
Hagen, S. C.; Braswell, B. H.; Linder, E.; Frolking, S.; Richardson, A. D.; Hollinger, D. Y.
2006-04-01
We present an uncertainty analysis of gross ecosystem carbon exchange (GEE) estimates derived from 7 years of continuous eddy covariance measurements of forest-atmosphere CO2 fluxes at Howland Forest, Maine, USA. These data, which have high temporal resolution, can be used to validate process modeling analyses, remote sensing assessments, and field surveys. However, separation of tower-based net ecosystem exchange (NEE) into its components (respiration losses and photosynthetic uptake) requires at least one application of a model, which is usually a regression model fitted to nighttime data and extrapolated for all daytime intervals. In addition, the existence of a significant amount of missing data in eddy flux time series requires a model for daytime NEE as well. Statistical approaches for analytically specifying prediction intervals associated with a regression require, among other things, constant variance of the data, normally distributed residuals, and linearizable regression models. Because the NEE data do not conform to these criteria, we used a Monte Carlo approach (bootstrapping) to quantify the statistical uncertainty of GEE estimates and present this uncertainty in the form of 90% prediction limits. We explore two examples of regression models for modeling respiration and daytime NEE: (1) a simple, physiologically based model from the literature and (2) a nonlinear regression model based on an artificial neural network. We find that uncertainty at the half-hourly timescale is generally on the order of the observations themselves (i.e., ˜100%) but is much less at annual timescales (˜10%). On the other hand, this small absolute uncertainty is commensurate with the interannual variability in estimated GEE. The largest uncertainty is associated with choice of model type, which raises basic questions about the relative roles of models and data.
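The bootstrapping idea above, refitting the flux-partitioning model on resampled data and reading prediction limits off the distribution of refitted predictions, can be sketched generically. The linear respiration stand-in model and all names here are assumptions, not the study's physiological or neural-network models.

```python
import numpy as np

def bootstrap_interval(x, y, fit, predict, x_new, n_boot=500, seed=0):
    """Bootstrap 90% prediction limits: resample (x, y) pairs with
    replacement, refit the model each time, and take the 5th and
    95th percentiles of the resulting predictions. `fit` and
    `predict` are user-supplied callables (hypothetical names)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample pairs
        params = fit(x[idx], y[idx])
        preds.append(predict(params, x_new))
    return np.percentile(preds, [5.0, 95.0], axis=0)

# stand-in model: linear respiration R = a + b*T with small noise
x = np.linspace(0.0, 20.0, 50)
y = 2.0 + 0.1 * x + np.random.default_rng(1).normal(0.0, 0.05, 50)
fit = lambda xs, ys: np.polyfit(xs, ys, 1)
predict = lambda p, xs: np.polyval(p, xs)
lo, hi = bootstrap_interval(x, y, fit, predict, np.array([10.0]))
```

Because the same machinery works for any `fit`/`predict` pair, it also exposes the model-choice uncertainty the authors highlight: swapping the regression model changes the interval.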
Shen, Mingxi; Chen, Jie; Zhuan, Meijia; Chen, Hua; Xu, Chong-Yu; Xiong, Lihua
2018-01-01
Uncertainty estimation of climate change impacts on hydrology has received much attention in the research community. The choice of a global climate model (GCM) is usually considered as the largest contributor to the uncertainty of climate change impacts. The temporal variation of GCM uncertainty needs to be investigated for making long-term decisions to deal with climate change. Accordingly, this study investigated the temporal variation (mainly long-term) of uncertainty related to the choice of a GCM in predicting climate change impacts on hydrology by using multi-GCMs over multiple continuous future periods. Specifically, twenty CMIP5 GCMs under RCP4.5 and RCP8.5 emission scenarios were adapted to adequately represent this uncertainty envelope, fifty-one 30-year future periods moving from 2021 to 2100 with 1-year interval were produced to express the temporal variation. Future climatic and hydrological regimes over all future periods were compared to those in the reference period (1971-2000) using a set of metrics, including mean and extremes. The periodicity of climatic and hydrological changes and their uncertainty were analyzed using wavelet analysis, while the trend was analyzed using Mann-Kendall trend test and regression analysis. The results showed that both future climate change (precipitation and temperature) and hydrological response predicted by the twenty GCMs were highly uncertain, and the uncertainty increased significantly over time. For example, the change of mean annual precipitation increased from 1.4% in 2021-2050 to 6.5% in 2071-2100 for RCP4.5 in terms of the median value of multi-models, but the projected uncertainty reached 21.7% in 2021-2050 and 25.1% in 2071-2100 for RCP4.5. The uncertainty under a high emission scenario (RCP8.5) was much larger than that under a relatively low emission scenario (RCP4.5). Almost all climatic and hydrological regimes and their uncertainty did not show significant periodicity at the P = .05 significance
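The Mann-Kendall trend test used in the study is a short, standard computation; below is a minimal implementation without the tie correction.

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: returns the S statistic and the
    continuity-corrected normal-approximation Z score (no tie term)."""
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

s, z = mann_kendall(list(range(10)))  # strictly increasing series
```

|Z| > 1.96 rejects the no-trend hypothesis at the P = 0.05 level used in the study.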
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2015-04-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among reference ET is far more important than model parametric uncertainty introduced by crop coefficients. These crop coefficients are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
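Reliability ensemble averaging weights each model by its credibility instead of averaging equally. The one-criterion form below (weights from inverse absolute bias against a reference) is a simplification assumed here; the full REA method of Giorgi and Mearns also includes a model-convergence criterion.

```python
import numpy as np

def rea_average(model_values, reference, eps=1e-9):
    """REA-style weighted mean: weight = 1 / |model - reference|,
    so models close to the reference dominate the ensemble
    (eps guards against division by zero for a perfect match)."""
    vals = np.asarray(model_values, dtype=float)
    weights = 1.0 / (np.abs(vals - reference) + eps)
    return float(np.sum(weights * vals) / np.sum(weights))

# three hypothetical ET model outputs; the middle one matches the
# reference, so the weighted mean is pulled toward it
avg = rea_average([1.0, 2.0, 10.0], reference=2.0)
```

Down-weighting outlying models is what narrows the ensemble spread relative to the equally weighted average, as reported above.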
Energy Technology Data Exchange (ETDEWEB)
Habte, Aron M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sengupta, Manajit [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-12-19
It is essential to apply a traceable and standard approach to determine the uncertainty of solar resource data. Solar resource data are used for all phases of solar energy conversion projects, from the conceptual phase to routine solar power plant operation, and to determine performance guarantees of solar energy conversion systems. These guarantees are based on the available solar resource derived from a measurement station or a modeled data set such as the National Solar Radiation Database (NSRDB). Therefore, quantifying the uncertainty of these data sets provides confidence to financiers, developers, and site operators of solar energy conversion systems and ultimately reduces deployment costs. In this study, we implemented the Guide to the Expression of Uncertainty in Measurement (GUM) to quantify the overall uncertainty of the NSRDB data. First, we start with quantifying measurement uncertainty; then we determine each uncertainty statistic of the NSRDB data, and we combine them using the root-sum-of-the-squares method. The statistics were derived by comparing the NSRDB data to the seven measurement stations from the National Oceanic and Atmospheric Administration's Surface Radiation Budget Network, the National Renewable Energy Laboratory's Solar Radiation Research Laboratory, and the Atmospheric Radiation Measurement program's Southern Great Plains Central Facility, in Billings, Oklahoma. The evaluation was conducted for hourly values, daily totals, monthly mean daily totals, and annual mean monthly mean daily totals. Varying time averages assist in capturing the temporal uncertainty of the specific modeled solar resource data required for each phase of a solar energy project; some phases require higher temporal resolution than others. Overall, by including the uncertainty of measurements of solar radiation made at ground stations, bias, and root mean square error, the NSRDB data demonstrated an expanded uncertainty of 17 percent - 29 percent on hourly...
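The root-sum-of-the-squares combination prescribed by the GUM is shown below; the component values are hypothetical, and k = 2 gives roughly 95 % coverage.

```python
import math

def combined_uncertainty(components):
    """GUM combined standard uncertainty for independent components:
    u_c = sqrt(u1^2 + u2^2 + ...)."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(components, k=2.0):
    """Expanded uncertainty U = k * u_c (coverage factor k)."""
    return k * combined_uncertainty(components)

# hypothetical components: 3 W/m^2 measurement, 4 W/m^2 model bias
u_c = combined_uncertainty([3.0, 4.0])       # -> 5.0
U = expanded_uncertainty([3.0, 4.0], k=2.0)  # -> 10.0
```

Correlated components would instead require covariance terms, which the simple quadrature sum omits.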
Zunino, Andrea; Mosegaard, Klaus
2017-04-01
Sought-after reservoir properties are linked only indirectly to the observable geophysical data recorded at the earth's surface. In this framework, seismic data represent one of the most reliable tools to study the structure and properties of the subsurface for natural resources. Nonetheless, seismic analysis is not an end in itself, as physical properties such as porosity are often of more interest for reservoir characterization. As such, inference of those properties implies also taking into account rock physics models linking porosity and other physical properties to elastic parameters. In the framework of seismic reflection data, we address this challenge for a reservoir target zone employing a probabilistic method characterized by a multi-step, complex nonlinear forward modeling that combines: 1) a rock physics model with 2) the solution of the full Zoeppritz equations and 3) a convolutional seismic forward modeling. The target property of this work is porosity, which is inferred using a Monte Carlo approach where porosity models, i.e., solutions to the inverse problem, are directly sampled from the posterior distribution. From a theoretical point of view, the Monte Carlo strategy can be particularly useful in the presence of nonlinear forward models, which is often the case when employing sophisticated rock physics models and full Zoeppritz equations, and to estimate the related uncertainty. However, the resulting computational challenge is huge. We propose to alleviate this computational burden by assuming some smoothness of the subsurface parameters and consequently parameterizing the model in terms of spline bases. This allows a certain flexibility, in that the number of spline bases, and hence the resolution in each spatial direction, can be controlled. The method is tested on a 3-D synthetic case and on a 2-D real data set.
Uncertainty in global groundwater storage estimates in a Total Groundwater Stress framework
Richey, Alexandra S.; Thomas, Brian F.; Lo, Min‐Hui; Swenson, Sean; Rodell, Matthew
2015-01-01
Abstract Groundwater is a finite resource under continuous external pressures. Current unsustainable groundwater use threatens the resilience of aquifer systems and their ability to provide a long‐term water source. Groundwater storage is considered to be a factor of groundwater resilience, although the extent to which resilience can be maintained has yet to be explored in depth. In this study, we assess the limit of groundwater resilience in the world's largest groundwater systems with remote sensing observations. The Total Groundwater Stress (TGS) ratio, defined as the ratio of total storage to the groundwater depletion rate, is used to explore the timescales to depletion in the world's largest aquifer systems and associated groundwater buffer capacity. We find that the current state of knowledge of large‐scale groundwater storage has uncertainty ranges across orders of magnitude that severely limit the characterization of resilience in the study aquifers. Additionally, we show that groundwater availability, traditionally defined as recharge and redefined in this study as total storage, can alter the systems that are considered to be stressed versus unstressed. We find that remote sensing observations from NASA's Gravity Recovery and Climate Experiment can assist in providing such information at the scale of a whole aquifer. For example, we demonstrate that a groundwater depletion rate in the Northwest Sahara Aquifer System of 2.69 ± 0.8 km3/yr would result in the aquifer being depleted to 90% of its total storage in as few as 50 years given an initial storage estimate of 70 km3. PMID:26900184
Energy Technology Data Exchange (ETDEWEB)
Kim, Joo Yeon; Lee, Seung Hyun; Park, Tai Jin [Korean Association for Radiation Application, Seoul (Korea, Republic of)
2016-06-15
Any real application of Bayesian inference must acknowledge that both the prior distribution and the likelihood function have only been specified as more or less convenient approximations to whatever the analyzer's true belief might be. If the inferences from the Bayesian analysis are to be trusted, it is important to determine that they are robust to such variations of prior and likelihood as might also be consistent with the analyzer's stated beliefs. Robust Bayesian inference was applied to atmospheric dispersion assessment using a Gaussian plume model. The scope of contamination was specified in terms of uncertainty in the distribution type and parametric variability. The probability distributions of the model parameters were assumed to be contaminated by symmetric unimodal and unimodal distributions. The distribution of the sector-averaged relative concentrations was then calculated by applying the contaminated priors to the model parameters. The sector-averaged concentrations for each stability class were compared by applying the symmetric unimodal and unimodal priors, respectively, as the contaminating ones, based on the class of ε-contamination. Although ε was assumed to be 10%, the medians reflecting the symmetric unimodal priors agreed within 10% with those reflecting the plausible priors. However, the medians reflecting the unimodal priors agreed only within 20% with those reflecting the plausible priors for a few downwind distances. Robustness was assessed by estimating how robust the results of the Bayesian inferences are to reasonable variations of the plausible priors. From these robust inferences, it is reasonable to apply the symmetric unimodal priors when analyzing the robustness of the Bayesian inferences.
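The ε-contamination class used above mixes a plausible prior with a contaminating one. A minimal sketch (the samplers and the ε value are illustrative assumptions, not the paper's priors):

```python
import random

def sample_contaminated_prior(base_sampler, contam_sampler, eps=0.10, rng=random):
    """Draw from an ε-contaminated prior: with probability (1 - ε) sample
    the plausible (base) prior, with probability ε the contaminating prior."""
    return contam_sampler() if rng.random() < eps else base_sampler()
```

Propagating such draws through the plume model yields the distribution of sector-averaged concentrations under each contamination class.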
Kim, Hojin; Chen, Josephine; Phillips, Justin; Pukala, Jason; Yom, Sue S; Kirby, Neil
2017-01-01
Deformable image registration is a powerful tool for mapping information, such as radiation therapy dose calculations, from one computed tomography image to another. However, deformable image registration is susceptible to mapping errors. Recently, an automated deformable image registration evaluation of confidence tool was proposed to predict voxel-specific deformable image registration dose mapping errors on a patient-by-patient basis. The purpose of this work is to conduct an extensive analysis of the automated deformable image registration evaluation of confidence tool to show its effectiveness in estimating dose mapping errors. The proposed format of the automated deformable image registration evaluation of confidence tool utilizes 4 simulated patient deformations (3 B-spline-based deformations and 1 rigid transformation) to predict the uncertainty in a deformable image registration algorithm's performance. This workflow is validated for 2 DIR algorithms (B-spline multipass from Velocity and Plastimatch) with 1 physical and 11 virtual phantoms, which have known ground-truth deformations, and with 3 pairs of real patient lung images, which have several hundred identified landmarks. The true dose mapping error distributions closely followed the Student t distributions predicted by the automated deformable image registration evaluation of confidence tool for the validation tests: on average, the tool's confidence levels of 50%, 68%, and 95% contained 48.8%, 66.3%, and 93.8% and 50.1%, 67.6%, and 93.8% of the actual errors from Velocity and Plastimatch, respectively. Despite the sparsity of landmark points, the observed error distribution from the 3 lung patient data sets also followed the expected error distribution. The dose error distributions from the automated deformable image registration evaluation of confidence tool also closely resemble the true dose error distributions. Automated
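The validation above amounts to checking empirical coverage: the fraction of actual errors that fall inside each predicted confidence interval. A generic helper (a sketch, not the tool's actual code):

```python
def empirical_coverage(errors, interval):
    """Fraction of observed errors falling inside a predicted
    confidence interval (lo, hi)."""
    lo, hi = interval
    return sum(1 for e in errors if lo <= e <= hi) / len(errors)
```

A well-calibrated 95% interval should yield an empirical coverage near 0.95.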
Cummins, P. R.; Benavente, R. F.; Dettmer, J.; Williamson, A.
2016-12-01
Rapid estimation of the slip distribution for large earthquakes can be useful for the early phases of emergency response, rapid impact assessment, and tsunami early warning. Model parameter uncertainties can be crucial for meaningful interpretation of such slip models, but they are often ignored. However, estimation of uncertainty in linear finite fault inversion is difficult because of the positivity constraints that are almost always applied. We have shown in previous work that positivity can be realized by imposing a prior such that the log of each subfault scalar moment is smoothly distributed on the fault surface; each scalar moment is then intrinsically non-negative while the posterior PDF can still be approximated as Gaussian. The inversion is nonlinear, but we showed that the most probable solution can be found by iterative methods that are not computationally demanding. In addition, the posterior covariance matrix (which provides uncertainties) can be estimated from the most probable solution, using an analytic expression for the Hessian of the cost function. We have studied this approach previously for synthetic W-phase data and showed that a first-order estimate of the uncertainty in the slip model can be obtained. Here we apply this method to seismic W-phase data recorded following the 2015 Mw 8.3 Illapel earthquake. Our results show a slip distribution with maximum slip near the subduction zone trench axis, with uncertainties that scale roughly with the slip value. We also consider application of this method to multiple data types: seismic W-phase, geodetic, and tsunami.
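The log-parameterization that enforces positivity can be illustrated simply: if the log of a subfault scalar moment is Gaussian, the moment itself is lognormal and therefore non-negative (the parameter values below are illustrative assumptions):

```python
import math
import random

def sample_positive_moment(mu_log, sigma_log, n=1000, seed=0):
    """Sample subfault scalar moments under a Gaussian prior on their logs;
    exponentiating guarantees every sample is strictly positive."""
    rng = random.Random(seed)
    return [math.exp(rng.gauss(mu_log, sigma_log)) for _ in range(n)]
```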
Ershadi, Ali
2013-05-01
The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model. The Bayesian approach allows for an explicit quantification of the uncertainties in input variables: a source of error generally ignored in surface heat flux estimation. An application using field measurements from the Soil Moisture Experiment 2002 is presented. The spatial variability of selected input meteorological variables in a multitower site is used to formulate the prior estimates for the sampling uncertainties, and the likelihood function is formulated assuming Gaussian errors in the SEBS model. Land surface temperature, air temperature, and wind speed were estimated by sampling their posterior distribution using a Markov chain Monte Carlo algorithm. Results verify that Bayesian-inferred air temperature and wind speed were generally consistent with those observed at the towers, suggesting that local observations of these variables were spatially representative. Uncertainties in the land surface temperature appear to have the strongest effect on the estimated sensible heat flux, with Bayesian-inferred values differing by up to ±5°C from the observed data. These differences suggest that the footprint of the in situ measured land surface temperature is not representative of the larger-scale variability. As such, these measurements should be used with caution in the calculation of surface heat fluxes and highlight the importance of capturing the spatial variability in the land surface temperature: particularly, for remote sensing retrieval algorithms that use this variable for flux estimation.
Li, Aihua; Dhakal, Shital; Glenn, Nancy F.; Spaete, Luke P.; Shinneman, Douglas; Pilliod, David; Arkle, Robert; McIlroy, Susan
2017-01-01
Our study objectives were to model the aboveground biomass in a xeric shrub-steppe landscape with airborne light detection and ranging (Lidar) and explore the uncertainty associated with the models we created. We incorporated vegetation vertical structure information obtained from Lidar with ground-measured biomass data, allowing us to scale shrub biomass from small field sites (1 m subplots and 1 ha plots) to a larger landscape. A series of airborne Lidar-derived vegetation metrics were trained and linked with the field-measured biomass in Random Forests (RF) regression models. A Stepwise Multiple Regression (SMR) model was also explored as a comparison. Our results demonstrated that the important predictors from Lidar-derived metrics had a strong correlation with field-measured biomass in the RF regression models with a pseudo R2 of 0.76 and RMSE of 125 g/m2 for shrub biomass and a pseudo R2 of 0.74 and RMSE of 141 g/m2 for total biomass, and a weak correlation with field-measured herbaceous biomass. The SMR results were similar but slightly better than RF, explaining 77–79% of the variance, with RMSE ranging from 120 to 129 g/m2 for shrub and total biomass, respectively. We further explored the computational efficiency and relative accuracies of using point cloud and raster Lidar metrics at different resolutions (1 m to 1 ha). Metrics derived from the Lidar point cloud processing led to improved biomass estimates at nearly all resolutions in comparison to raster-derived Lidar metrics. Only at 1 m were the results from the point cloud and raster products nearly equivalent. The best Lidar prediction models of biomass at the plot-level (1 ha) were achieved when Lidar metrics were derived from an average of fine resolution (1 m) metrics to minimize boundary effects and to smooth variability. Overall, both RF and SMR methods explained more than 74% of the variance in biomass, with the most important Lidar variables being associated with vegetation structure
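The fit statistics reported above (pseudo R² and RMSE) can be computed from paired observations and predictions; this generic helper is a sketch, not the study's code:

```python
import math

def pseudo_r2_and_rmse(obs, pred):
    """Goodness-of-fit summaries: pseudo R^2 (1 - SSE/SST) and the
    root-mean-square error of predictions against observations."""
    n = len(obs)
    mean_obs = sum(obs) / n
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst, math.sqrt(sse / n)
```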
Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix
2017-04-01
It is important to emphasize the role of uncertainty, particularly when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of predictive uncertainty in hydrological modeling. The first approach is Bayesian joint inference of the hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, in general, both approaches provide similar predictive performance. However, differences between them can arise in basins with complex hydrology (e.g., ephemeral basins), because the results obtained with Bayesian joint inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results of the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a proper combination of both methodologies, which could be useful for achieving less biased hydrological parameter estimation. In this approach, the predictive distribution is first obtained through the Model Conditional Processor. This predictive distribution is then used to derive the corresponding additive error model, which is employed for the hydrological parameter
National Aeronautics and Space Administration — This article discusses several aspects of uncertainty represen- tation and management for model-based prognostics method- ologies based on our experience with Kalman...
Directory of Open Access Journals (Sweden)
Akeem O. Arinkoola
2015-01-01
Full Text Available The purpose of this paper is to examine various DoE methods for uncertainty quantification of production forecasts during reservoir management. Considering all uncertainties in the analysis can be time consuming and expensive. Uncertainty screening using experimental design methods helps reduce the number of parameters to a manageable size. However, the choice among methods is more often based on the experimenter's discretion or company practices, with little or no attention paid to the risks associated with decisions that emanate from the exercise. The consequence is underperformance of the project when compared with its actual value. This study presents an analysis of the three families of designs used for screening and four DoE methods used for response surface modeling during uncertainty analysis. The screening methods (one-factor-at-a-time sensitivity, fractional factorial experiment, and Plackett-Burman design) were critically examined and analyzed using numerical flow simulation. The modeling methods (Box-Behnken, central composite, D-optimal, and full factorial) were programmed and analyzed for their ability to reproduce actual forecast figures. The best method was selected for the case study, and recommendations were made as to best practice in selecting DoE methods for similar applications.
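As an illustration of one of the screening designs discussed, an 8-run Plackett-Burman design for up to 7 two-level factors can be built by cyclic shifts of the standard generator row:

```python
def plackett_burman_8():
    """8-run Plackett-Burman design: cyclic shifts of the generator
    row (+ + + - + - -) followed by a row of all -1. Columns are
    balanced and mutually orthogonal."""
    gen = [1, 1, 1, -1, 1, -1, -1]
    rows = [[gen[(j - k) % 7] for j in range(7)] for k in range(7)]
    rows.append([-1] * 7)
    return rows
```

Each row is one simulation run; each column assigns a low/high level to one uncertain parameter, so 7 factors are screened with only 8 runs.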
Directory of Open Access Journals (Sweden)
Jalid Abdelilah
2016-01-01
Full Text Available In the engineering industry, control of manufactured parts is usually done on a coordinate measuring machine (CMM): a sensor mounted at the end of the machine probes a set of points on the surface to be inspected. Data processing is performed subsequently using software, and the result of this measurement process either confirms or rejects the conformity of the part. Measurement uncertainty is a crucial parameter for making the right decisions, and failing to take this parameter into account can therefore sometimes lead to aberrant decisions. The determination of measurement uncertainty on a CMM is a complex task because of the variety of influencing factors. Through this study, we aim to check whether the uncertainty propagation model developed according to the Guide to the Expression of Uncertainty in Measurement (GUM) approach is valid; we present here a comparison of the GUM and Monte Carlo methods. This comparison is made to estimate the flatness deviation of a surface belonging to an industrial part and the uncertainty associated with the measurement result.
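The Monte Carlo side of such a comparison can be sketched as follows: perturb each probed point with the probe's standard uncertainty and observe the spread of the resulting flatness deviations. The noise model and values are assumptions for illustration, not the paper's data:

```python
import random
import statistics

def mc_flatness_uncertainty(z_nominal, sigma_probe, n_trials=20000, seed=1):
    """GUM-S1-style Monte Carlo propagation: perturb each probed point
    with Gaussian probe noise and report the mean and standard deviation
    of the resulting flatness deviations (max - min)."""
    rng = random.Random(seed)
    flats = []
    for _ in range(n_trials):
        z = [zi + rng.gauss(0.0, sigma_probe) for zi in z_nominal]
        flats.append(max(z) - min(z))
    return statistics.mean(flats), statistics.stdev(flats)
```

Because flatness (max - min) is a nonlinear function of the inputs, this propagated distribution can differ from the first-order GUM result, which is exactly what the comparison probes.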
Energy Technology Data Exchange (ETDEWEB)
Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
2016-08-31
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
Directory of Open Access Journals (Sweden)
Harry Budiman
2017-10-01
Full Text Available In this study, method validation and uncertainty estimation for the measurement of trace amounts of gas impurities such as carbon monoxide (CO), methane (CH4), and carbon dioxide (CO2) using a gas chromatography flame ionization detector with methanizer (GC-FID-methanizer) are reported. The method validation was performed by investigating the following performance parameters: selectivity, limit of detection (LOD), limit of quantification (LOQ), precision, linearity, accuracy, and robustness. The measurement uncertainty, which indicates the degree of confidence in the analytical results, was estimated using a bottom-up approach. The results reveal that the method possesses good repeatability (relative standard deviation, RSD < 1%) and intermediate precision (RSD < 5%) for the measurement of trace-level CO, CH4, and CO2. No bias was found for the validated method. The linearity of the method was remarkable, with correlation coefficients (R2) higher than 0.995 for all target analytes. In addition, the measurement uncertainties of CO and CO2 in a high-purity helium (He) gas sample measured using the validated method were found to be 0.08 µmol·mol-1 and 0.11 µmol·mol-1, respectively, at the 95% confidence level. No measurement uncertainty was obtained for CH4 in the high-purity gas sample because its concentration was below the GC-FID-methanizer detection level. In conclusion, the GC-FID-methanizer under the experimental conditions of this study is reliable and fit for the measurement of trace levels of CO, CH4, and CO2 in high-purity gas samples.
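LOD and LOQ in such validations are commonly computed from the standard deviation of the response and the calibration slope; the 3.3 and 10 multipliers below follow common practice and are an assumption here, not necessarily the authors' exact formula:

```python
def lod_loq(sigma, slope):
    """Limit of detection and limit of quantification from the standard
    deviation of the response (sigma) and the calibration slope:
    LOD = 3.3 * sigma / slope, LOQ = 10 * sigma / slope."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope
```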
Yen, H.; Arabi, M.; Records, R.
2012-12-01
The structural complexity of comprehensive watershed models continues to increase in order to incorporate inputs at finer spatial and temporal resolutions and simulate a larger number of hydrologic and water quality responses. Hence, computational methods for parameter estimation and uncertainty analysis of complex models have gained increasing popularity. This study aims to evaluate the performance and applicability of a range of algorithms, from computationally frugal approaches to formal implementations of Bayesian statistics using Markov chain Monte Carlo (MCMC) techniques. The evaluation procedure hinges on the appraisal of (i) the quality of the final parameter solution in terms of the minimum value of the objective function corresponding to weighted errors; (ii) the algorithmic efficiency in reaching the final solution; (iii) the marginal posterior distributions of model parameters; (iv) the overall identifiability of the model structure; and (v) the effectiveness in drawing samples that can be classified as behavior-giving solutions. The proposed procedure recognizes an important and often neglected issue in watershed modeling: that solutions with minimum objective function values may not necessarily reflect the behavior of the system. The general behavior of a system is often characterized by the analysts according to the goals of the study using various error statistics such as percent bias or the Nash-Sutcliffe efficiency coefficient. Two case studies are carried out to examine the efficiency and effectiveness of four Bayesian approaches, including Metropolis-Hastings sampling (MHA), Gibbs sampling (GSA), uniform covering by probabilistic rejection (UCPR), and differential evolution adaptive Metropolis (DREAM); a greedy optimization algorithm dubbed dynamically dimensioned search (DDS); and shuffled complex evolution (SCE-UA), a widely implemented evolutionary heuristic optimization algorithm. The Soil and Water Assessment Tool (SWAT) is used to simulate hydrologic and
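Of the MCMC samplers compared, Metropolis-Hastings is the simplest; a generic random-walk variant (not tied to SWAT or the study's implementation) can be sketched in a few lines:

```python
import math
import random

def metropolis_hastings(log_post, x0, step, n, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        # Accept in log space; the tiny offset guards against log(0).
        if math.log(rng.random() + 1e-300) < log_post(xp) - log_post(x):
            x = xp
        chain.append(x)
    return chain
```

The retained chain approximates the marginal posterior of the parameter, which is exactly the quantity appraised in criterion (iii) above.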
Briggs, Andrew H; Weinstein, Milton C; Fenwick, Elisabeth A L; Karnon, Jonathan; Sculpher, Mark J; Paltiel, A David
2012-01-01
A model's purpose is to inform medical decisions and health care resource allocation. Modelers employ quantitative methods to structure the clinical, epidemiological, and economic evidence base and gain qualitative insight to assist decision makers in making better decisions. From a policy perspective, the value of a model-based analysis lies not simply in its ability to generate a precise point estimate for a specific outcome but also in the systematic examination and responsible reporting of uncertainty surrounding this outcome and the ultimate decision being addressed. Different concepts relating to uncertainty in decision modeling are explored. Stochastic (first-order) uncertainty is distinguished from both parameter (second-order) uncertainty and heterogeneity, with structural uncertainty relating to the model itself forming another level of uncertainty to consider. The article argues that the estimation of point estimates and uncertainty in parameters is part of a single process and explores the link from parameter uncertainty through to decision uncertainty and the relationship to value-of-information analysis. The article also makes extensive recommendations regarding the reporting of uncertainty, in terms of both deterministic sensitivity analysis techniques and probabilistic methods. Expected value of perfect information is argued to be the most appropriate presentational technique, alongside cost-effectiveness acceptability curves, for representing decision uncertainty from probabilistic analysis.
Characterization of uncertainty in Bayesian estimation using sequential Monte Carlo methods
Aoki, E.H.
2013-01-01
In estimation problems, accuracy of the estimates of the quantities of interest cannot be taken for granted. This means that estimation errors are expected, and a good estimation algorithm should be able not only to compute estimates that are optimal in some sense, but also provide meaningful
Cui, Ming; Xu, Lili; Wang, Huimin; Ju, Shaoqing; Xu, Shuizhu; Jing, Rongrong
2017-12-01
Measurement uncertainty (MU) is a metrological concept which can be used to objectively estimate the quality of test results in medical laboratories. The Nordtest guide recommends an approach that uses both internal quality control (IQC) and external quality assessment (EQA) data to evaluate the MU. Bootstrap resampling is employed to simulate the unknown distribution based on mathematical statistics using an existing small sample of data, where the aim is to transform the small sample into a large sample. However, there have been no reports of the utilization of this method in medical laboratories. Thus, this study applied the Nordtest guide approach based on bootstrap resampling for estimating the MU. We estimated the MU for the white blood cell (WBC) count, red blood cell (RBC) count, hemoglobin (Hb), and platelets (Plt). First, we used 6 months of IQC data and 12 months of EQA data to calculate the MU according to the Nordtest method. Second, we combined the Nordtest method and bootstrap resampling with the quality control data and calculated the MU using MATLAB software. We then compared the MU results obtained using the two approaches. The expanded uncertainties determined for WBC, RBC, Hb, and Plt using the bootstrap resampling method were 4.39%, 2.43%, 3.04%, and 5.92%, respectively, versus 4.38%, 2.42%, 3.02%, and 6.00% with the existing quality control data (U, k=2). For WBC, RBC, Hb, and Plt, the differences between the results obtained using the two methods were lower than 1.33%. The expanded uncertainty values were all less than the target uncertainties. The bootstrap resampling method allows the statistical analysis of the MU. Combining the Nordtest method and bootstrap resampling is considered a suitable alternative method for estimating the MU. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
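The bootstrap step can be sketched generically: resample the IQC results with replacement and summarize the spread of the recomputed %CV. This is illustrative only; the study used MATLAB and its own laboratory data:

```python
import random
import statistics

def bootstrap_cv(values, n_boot=2000, seed=42):
    """Bootstrap the coefficient of variation (%CV) of IQC results:
    resample with replacement and summarize the recomputed CVs."""
    rng = random.Random(seed)
    cvs = []
    for _ in range(n_boot):
        sample = [rng.choice(values) for _ in values]
        cvs.append(100.0 * statistics.stdev(sample) / statistics.mean(sample))
    return statistics.mean(cvs), statistics.stdev(cvs)
```

The bootstrapped within-laboratory reproducibility is then combined with the EQA-derived bias component following the Nordtest formula.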
Obaton, A.-F.; Lebenberg, J.; Fischer, N.; Guimier, S.; Dubard, J.
2007-04-01
The measurement uncertainty of the spectral irradiance of an UV lamp is computed by using the law of propagation of uncertainty (LPU) as described in the 'Guide to the Expression of Uncertainty in Measurement' (GUM), considering only a first-order Taylor series approximation. Since the spectral irradiance model displays a non-linear feature and since an asymmetric probability density function (PDF) is assigned to some input quantities, the usage of another process was required to validate the LPU method. The propagation of distributions using Monte Carlo (MC) simulations, as depicted in the supplement of the GUM (GUM-S1), was found to be a relevant alternative solution. The validation of the LPU method by the MC method is discussed with regard to PDF choices, and the benefit of the MC method over the LPU method is illustrated.
Directory of Open Access Journals (Sweden)
Jaewook Lee
2015-06-01
Full Text Available This paper presents an efficient method for estimating capacity-fade uncertainty in lithium-ion batteries (LIBs) in order to integrate them into the battery-management system (BMS) of electric vehicles, which requires simple and inexpensive computation for successful application. The study uses the pseudo-two-dimensional (P2D) electrochemical model, which simulates the battery state by solving a system of coupled nonlinear partial differential equations (PDEs). The model parameters that are responsible for electrode degradation are identified and estimated, based on battery data obtained from the charge cycles. The Bayesian approach, with parameters estimated as probability distributions, is employed to account for uncertainties arising in the model and battery data. The Markov chain Monte Carlo (MCMC) technique is used to draw samples from the distributions. The complex computations that solve a PDE system for each sample are avoided by employing a polynomial-based metamodel. As a result, the computational cost is reduced from 5.5 h to a few seconds, enabling the integration of the method into the vehicle BMS. Using this approach, a conservative bound on capacity fade can be determined for the vehicle in service, which represents a safety margin reflecting the uncertainty.
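The metamodel idea, replacing the expensive PDE solve with a cheap polynomial, can be illustrated with a toy quadratic (Lagrange) surrogate built from three model evaluations; this is a sketch of the general technique, not the paper's metamodel:

```python
def quadratic_surrogate(f, x0, x1, x2):
    """Build a cheap quadratic (Lagrange) surrogate of an expensive
    model f from three evaluations at distinct nodes x0, x1, x2."""
    y0, y1, y2 = f(x0), f(x1), f(x2)

    def surrogate(x):
        # Lagrange interpolation through the three sampled points.
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

    return surrogate
```

Once built, the surrogate is evaluated for each MCMC sample instead of the PDE system, which is the source of the reported speedup.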
Experimental FSO network availability estimation using interactive fog condition monitoring
Turán, Ján.; Ovseník, Łuboš
2016-12-01
Free Space Optics (FSO) is a license-free line-of-sight (LOS) telecommunication technology which offers full duplex connectivity. FSO uses infrared beams of light to provide optical broadband connections, and it can be installed literally in a few hours. Data rates range from several hundred Mb/s to several Gb/s, and link distances range from several hundred meters up to several kilometers. FSO link advantages: easy connection establishment, license-free communication, no excavation needed, highly secure and safe, allows through-window connectivity and single-customer service, and complements fiber by accelerating the first and last mile. FSO link disadvantages: the transmission medium is air, weather and climate dependence, attenuation due to rain, snow, and fog, scattering of the laser beam, absorption of the laser beam, building motion, and air pollution. In this paper, FSO availability evaluation is based on long-term measured data from a fog sensor developed and installed on the TUKE experimental FSO network in the TUKE campus, Košice, Slovakia. Our FSO experimental network has three links with different physical distances between the FSO heads. Weather conditions have a tremendous impact on FSO operation in terms of availability. FSO link availability is the percentage of time over a year that the FSO link will be operational. It is necessary to evaluate the climate and weather at the actual geographical location where the FSO link is going to be mounted, and to determine the impact of light scattering, absorption, turbulence, and received optical power on the particular FSO link. Visibility has one of the most critical influences on the quality of an FSO optical transmission channel. FSO link availability is usually estimated using visibility information collected from nearby airport weather stations. Raw data from the fog sensor (fog density, relative humidity, and temperature, measured every millisecond) are collected and processed by the FSO Simulator software package developed at our department. Based
NIS method for uncertainty estimation of airborne sound insulation measurement in field
Directory of Open Access Journals (Sweden)
El-Basheer Tarek M.
2017-01-01
Full Text Available In buildings, airborne sound insulation is used to characterize the acoustic quality of barriers between rooms. However, the assessment of the sound insulation index is sometimes difficult or even questionable, both in field and laboratory measurements, despite the standardized measurement procedures specified in the ISO 140 series of standards. There are issues with the reproducibility and repeatability of the measurement results. Some difficulties may be caused by non-diffuse acoustic fields, non-uniform reverberation time, or errors in the reverberation time measurements. Some minor issues are also posed by flanking transmission. This paper investigates the uncertainties of the above-mentioned measurement components and their impact on the combined uncertainty in 1/3-octave frequency bands. The total measurement uncertainty model comprises several partial uncertainties, which are evaluated by Type A or Type B methods. The determination of the sound reduction index according to ISO 140-4 has also been performed.
Accounting for respondent uncertainty to improve willingness-to-pay estimates
Rebecca Moore; Richard C. Bishop; Bill Provencher; Patricia A. Champ
2010-01-01
In this paper, we develop an econometric model of willingness to pay (WTP) that integrates data on respondent uncertainty regarding their own WTP. The integration is utility consistent, there is no recoding of variables, and no need to calibrate the contingent responses to actual payment data, so the approach can "stand alone." In an application to a...
Optimized clustering estimators for BAO measurements accounting for significant redshift uncertainty
Ross, Ashley J.; Banik, Nilanjan; Avila, Santiago; Percival, Will J.; Dodelson, Scott; Garcia-Bellido, Juan; Crocce, Martin; Elvin-Poole, Jack; Giannantonio, Tommaso; Manera, Marc; Sevilla-Noarbe, Ignacio
2017-12-01
We determine an optimized clustering statistic to be used for galaxy samples with significant redshift uncertainty, such as those that rely on photometric redshifts. To do so, we study the baryon acoustic oscillation (BAO) information content as a function of the orientation of galaxy clustering modes with respect to their angle to the line of sight (LOS). The clustering along the LOS, as observed in a redshift-space with significant redshift uncertainty, has contributions from clustering modes with a range of orientations with respect to the true LOS. For redshift uncertainty σz ≥ 0.02(1 + z), we find that while the BAO information is confined to transverse clustering modes in the true space, it is spread nearly evenly in the observed space. Thus, measuring clustering in terms of the projected separation (regardless of the LOS) is an efficient and nearly lossless compression of the signal for σz ≥ 0.02(1 + z). For reduced redshift uncertainty, a more careful consideration is required. We then use more than 1700 realizations (combining two separate sets) of galaxy simulations mimicking the Dark Energy Survey Year 1 (DES Y1) sample to validate our analytic results and optimized analysis procedure. We find that using the correlation function binned in projected separation, we can achieve uncertainties that are within 10 per cent of those predicted by Fisher matrix forecasts. We predict that DES Y1 should achieve a 5 per cent distance measurement using our optimized methods. We expect the results presented here to be important for any future BAO measurements made using photometric redshift data.
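The paper's key practical point, binning clustering pairs by projected (transverse) separation regardless of the LOS, rests on simple pair geometry. A minimal sketch under an assumed midpoint-LOS convention (the function name and convention are illustrative, not the authors' pipeline):

```python
import numpy as np

def projected_separation(p1, p2):
    """Pair separation transverse to the line of sight,
    s_perp = |s| * sqrt(1 - mu^2), where mu is the cosine of the angle
    between the pair separation vector and the LOS to the pair midpoint
    (observer at the origin of comoving coordinates)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    s = p2 - p1
    los = 0.5 * (p1 + p2)
    mu = np.dot(s, los) / (np.linalg.norm(s) * np.linalg.norm(los))
    return np.linalg.norm(s) * np.sqrt(max(0.0, 1.0 - mu**2))
```

A pair aligned with the LOS contributes zero projected separation, while a purely transverse pair keeps its full separation, which is why this binning discards the (redshift-smeared) radial information.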
Optimized Clustering Estimators for BAO Measurements Accounting for Significant Redshift Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Ross, Ashley J. [Portsmouth U., ICG; Banik, Nilanjan [Fermilab; Avila, Santiago [Madrid, IFT; Percival, Will J. [Portsmouth U., ICG; Dodelson, Scott [Fermilab; Garcia-Bellido, Juan [Madrid, IFT; Crocce, Martin [ICE, Bellaterra; Elvin-Poole, Jack [Jodrell Bank; Giannantonio, Tommaso [Cambridge U., KICC; Manera, Marc [Cambridge U., DAMTP; Sevilla-Noarbe, Ignacio [Madrid, CIEMAT
2017-05-15
We determine an optimized clustering statistic to be used for galaxy samples with significant redshift uncertainty, such as those that rely on photometric redshifts. To do so, we study the BAO information content as a function of the orientation of galaxy clustering modes with respect to their angle to the line-of-sight (LOS). The clustering along the LOS, as observed in a redshift-space with significant redshift uncertainty, has contributions from clustering modes with a range of orientations with respect to the true LOS. For redshift uncertainty $\sigma_z \geq 0.02(1+z)$ we find that while the BAO information is confined to transverse clustering modes in the true space, it is spread nearly evenly in the observed space. Thus, measuring clustering in terms of the projected separation (regardless of the LOS) is an efficient and nearly lossless compression of the signal for $\sigma_z \geq 0.02(1+z)$. For reduced redshift uncertainty, a more careful consideration is required. We then use more than 1700 realizations of galaxy simulations mimicking the Dark Energy Survey Year 1 sample to validate our analytic results and optimized analysis procedure. We find that using the correlation function binned in projected separation, we can achieve uncertainties that are within 10 per cent of those predicted by Fisher matrix forecasts. We predict that DES Y1 should achieve a 5 per cent distance measurement using our optimized methods. We expect the results presented here to be important for any future BAO measurements made using photometric redshift data.
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal
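The conventional three-point beta fit described here (mean (a + 4m + b)/6, standard deviation assumed to be one sixth of the range, then method of moments for the shape parameters) can be sketched as follows. This is the standard PERT variant, not necessarily the NASA Glenn in-house method the abstract goes on to describe, and the function name is illustrative:

```python
def beta_from_three_points(a, m, b):
    """Fit a beta distribution to (minimum, most likely, maximum) under the
    common PERT assumptions: mean = (a + 4m + b)/6, sigma = (b - a)/6."""
    mean = (a + 4.0 * m + b) / 6.0
    sigma = (b - a) / 6.0
    mu = (mean - a) / (b - a)            # mean rescaled to [0, 1]
    var = sigma**2 / (b - a) ** 2        # variance rescaled to [0, 1]
    nu = mu * (1.0 - mu) / var - 1.0     # method-of-moments "sample size"
    return mu * nu, (1.0 - mu) * nu, mean, sigma   # alpha, beta, mean, sigma
```

For a symmetric case such as (0, 0.5, 1) this yields the symmetric shape parameters alpha = beta = 4, illustrating how the extra assumption about sigma pins down the four-parameter distribution from three inputs.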
Energy Technology Data Exchange (ETDEWEB)
Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia (Sandia National Laboratories, Livermore, CA); Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Eddy, John P.
2011-12-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
Vander Haegen, M; Etienne, A M; Piette, C
2017-03-01
Studies in pediatric oncology describe a relatively good quality of life in child cancer survivors. However, few studies have focused on the parents of a child cancer survivor. 61 parents were recruited in Belgian hospitals. Three groups of parents were formed: parents whose child was in the 4th year of survivorship (group 1), the 5th year (group 2), and the 6th year (group 3). Clinical scales and an emotional Stroop task were administered. Parents in all three groups present a low tolerance of uncertainty, have excessive worries about the evolution of their child's health, and suffer from anxious symptoms. The emotional Stroop task reveals an attentional bias in favour of threatening stimuli. The study highlights the importance of detecting parents who are intolerant of uncertainty at the cancer diagnosis stage and of their continuous psychological follow-up once the treatments have ended.
A linear programming approach to characterizing norm bounded uncertainty from experimental data
Scheid, R. E.; Bayard, D. S.; Yam, Y.
1991-01-01
The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).
Bell, David M; Ward, Eric J; Oishi, A Christopher; Oren, Ram; Flikkema, Paul G; Clark, James S
2015-07-01
Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as canopy conductance and transpiration. To address this need, we developed a hierarchical Bayesian State-Space Canopy Conductance (StaCC) model linking canopy conductance and transpiration to tree sap flux density from a 4-year experiment in the North Carolina Piedmont, USA. Our model builds on existing ecophysiological knowledge, but explicitly incorporates uncertainty in canopy conductance, internal tree hydraulics and observation error to improve estimation of canopy conductance responses to atmospheric drought (i.e., vapor pressure deficit), soil drought (i.e., soil moisture) and above canopy light. Our statistical framework not only predicted sap flux observations well, but it also allowed us to simultaneously gap-fill missing data as we made inference on canopy processes, marking a substantial advance over traditional methods. The predicted and observed sap flux data were highly correlated (mean sensor-level Pearson correlation coefficient = 0.88). Variations in canopy conductance and transpiration associated with environmental variation across days to years were many times greater than the variation associated with model uncertainties. Because some variables, such as vapor pressure deficit and soil moisture, were correlated at the scale of days to weeks, canopy conductance responses to individual environmental variables were difficult to interpret in isolation. Still, our results highlight the importance of accounting for uncertainty in models of ecophysiological and ecosystem function where the process of interest, canopy conductance in this case, is not observed directly. The StaCC modeling
Estimates of Uncertainties in Analysis of Positron Lifetime Spectra for Metals
DEFF Research Database (Denmark)
Eldrup, Morten Mostgaard; Huang, Y. M.; McKee, B. T. A.
1978-01-01
The effects of uncertainties and errors in various constraints used in the analysis of multi-component lifetime spectra of positrons annihilating in metals containing defects have been investigated in detail using computer-simulated decay spectra and subsequent analysis. It is found that the errors in the fitted values of the main components' lifetimes and intensities introduced by incorrect values of the instrumental resolution function and of the source-surface components can easily exceed the statistical uncertainties. The effect of an incorrect resolution function may be reduced by excluding the peak regions of the spectra from the analysis. The influence of using incorrect source-surface components in the analysis may, on the other hand, be reduced by including the peak regions of the spectra. A main conclusion of the work is that extreme caution should be exercised to avoid ...
Barth, Timothy J.
2014-01-01
Simulation codes often utilize finite-dimensional approximations, resulting in numerical error. Some examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.
Kaminski, Thomas; Pinty, Bernard; Voßbeck, Michael; Lopatka, Maciej; Gobron, Nadine; Robustelli, Monica
2017-05-01
Earth observation (EO) land surface products have been demonstrated to provide a constraint on the terrestrial carbon cycle that is complementary to the record of atmospheric carbon dioxide. We present the Joint Research Centre Two-stream Inversion Package (JRC-TIP) for retrieval of variables characterising the state of the vegetation-soil system. The system provides a set of land surface variables that satisfy all requirements for assimilation into the land component of climate and numerical weather prediction models. Being based on a 1-D representation of the radiative transfer within the canopy-soil system, such as those used in the land surface components of advanced global models, the JRC-TIP products are not only physically consistent internally, but they also achieve a high degree of consistency with these global models. Furthermore, the products are provided with full uncertainty information. We describe how these uncertainties are derived in a fully traceable manner without any hidden assumptions from the input observations, which are typically broadband white sky albedo products. Our discussion of the product uncertainty ranges, including the uncertainty reduction, highlights the central role of the leaf area index, which describes the density of the canopy. We explain the generation of products aggregated to coarser spatial resolution than that of the native albedo input and describe various approaches to the validation of JRC-TIP products, including the comparison against in situ observations. We present a JRC-TIP processing system that satisfies all operational requirements and explain how it delivers stable climate data records. Since many aspects of JRC-TIP are generic, the package can serve as an example of a state-of-the-art system for retrieval of EO products, and this contribution can help the user to understand advantages and limitations of such products.
Till, John E; Beck, Harold L; Aanenson, Jill W; Grogan, Helen A; Mohler, H Justin; Mohler, S Shawn; Voillequé, Paul G
2014-05-01
Methods were developed to calculate individual estimates of exposure and dose with associated uncertainties for a sub-cohort (1,857) of 115,329 military veterans who participated in at least one of seven series of atmospheric nuclear weapons tests or the TRINITY shot carried out by the United States. The tests were conducted at the Pacific Proving Grounds and the Nevada Test Site. Dose estimates to specific organs will be used in an epidemiological study to investigate leukemia and male breast cancer. Previous doses had been estimated for the purpose of compensation and were generally high-sided to favor the veteran's claim for compensation in accordance with public law. Recent efforts by the U.S. Department of Defense (DOD) to digitize the historical records supporting the veterans' compensation assessments make it possible to calculate doses and associated uncertainties. Our approach builds upon available film badge dosimetry and other measurement data recorded at the time of the tests and incorporates detailed scenarios of exposure for each veteran based on personal, unit, and other available historical records. Film badge results were available for approximately 25% of the individuals, and these results assisted greatly in reconstructing doses to unbadged persons and in developing distributions of dose among military units. This article presents the methodology developed to estimate doses for selected cancer cases and a 1% random sample of the total cohort of veterans under study.
Shyu, Conrad; Ytreberg, F Marty
2009-11-15
This report presents the application of polynomial regression for estimating free energy differences using thermodynamic integration data, i.e., the slope of the free energy with respect to the switching variable lambda. We employ linear regression to construct a polynomial that optimally fits the thermodynamic integration data, and thus reduces the bias and uncertainty of the resulting free energy estimate. Two test systems with analytical solutions were used to verify the accuracy and precision of the approach. Our results suggest that use of regression with high-degree polynomials provides the most accurate free energy difference estimates, but often with slightly larger uncertainty, compared to commonly used quadrature techniques. High-degree polynomials possess the flexibility to closely fit the thermodynamic integration data but are often sensitive to small changes in the data points. Thus, we also used Chebyshev nodes to guide the selection of nonequidistant lambda values for use in thermodynamic integration. We conclude that polynomial regression with nonequidistant lambda values delivers the most accurate and precise free energy estimates for thermodynamic integration data for the systems considered here. Software and documentation are available at http://www.phys.uidaho.edu/ytreberg/software.
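A minimal sketch of the approach described above: Chebyshev-spaced lambda values, a polynomial least-squares fit to the dU/dlambda data, and analytic integration of the fit over [0, 1]. The helper names are illustrative, not the authors' released software:

```python
import numpy as np

def chebyshev_lambdas(n):
    """Chebyshev nodes mapped from (-1, 1) to (0, 1), giving nonequidistant
    lambda values for thermodynamic integration."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))
    return np.sort(0.5 * (1.0 - x))

def ti_free_energy(lambdas, dudl, degree):
    """Least-squares polynomial fit to <dU/dlambda> samples, then analytic
    integration of the fitted polynomial over [0, 1]."""
    coeffs = np.polynomial.polynomial.polyfit(lambdas, dudl, degree)
    antideriv = np.polynomial.Polynomial(coeffs).integ()
    return antideriv(1.0) - antideriv(0.0)

# Analytic test integrand dU/dlambda = 3*lambda**2, whose exact integral is 1.
lam = chebyshev_lambdas(8)
dF = ti_free_energy(lam, 3.0 * lam**2, degree=4)
```

With noisy simulation data the fitted polynomial smooths the integrand before integration, which is where the reduction in bias relative to quadrature on raw points comes from.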
Xia, Youlong; Sen, Mrinal K.; Jackson, Charles S.; Stoffa, Paul L.
2004-10-01
This study evaluates the ability of Bayesian stochastic inversion (BSI) and multicriteria (MC) methods to search for the optimal parameter sets of the Chameleon Surface Model (CHASM) using prescribed forcing to simulate observed sensible and latent heat fluxes from seven measurement sites representative of six biomes including temperate coniferous forests, tropical forests, temperate and tropical grasslands, temperate crops, and semiarid grasslands. Calibration results with the BSI and MC show that estimated optimal values are very similar for the important parameters that are specific to the CHASM model. The model simulations based on estimated optimal parameter sets perform much better than the default parameter sets. Cross-validations for two tropical forest sites show that the calibrated parameters for one site can be transferred to another site within the same biome. The uncertainties of optimal parameters are obtained through BSI, which estimates a multidimensional posterior probability density function (PPD). Marginal PPD analyses show that nonoptimal choices of stomatal resistance would contribute most to model simulation errors at all sites, followed by ground and vegetation roughness length at six of seven sites. The impact of initial root-zone soil moisture and nonmosaic approach on estimation of optimal parameters and their uncertainties is discussed.
Directory of Open Access Journals (Sweden)
Joseph Leedale
2016-03-01
Full Text Available The effect of climate change on the spatiotemporal dynamics of malaria transmission is studied using an unprecedented ensemble of climate projections, employing three diverse bias correction and downscaling techniques, in order to partially account for uncertainty in climate-driven malaria projections. These large climate ensembles drive two dynamical and spatially explicit epidemiological malaria models to provide future hazard projections for the focus region of eastern Africa. While the two malaria models produce very distinct transmission patterns for the recent climate, their response to future climate change is similar in terms of sign and spatial distribution, with malaria transmission moving to higher altitudes in the East African Community (EAC) region, while transmission reduces in lowland, marginal transmission zones such as South Sudan. The climate model ensemble generally projects warmer and wetter conditions over EAC. The simulated malaria response appears to be driven by temperature rather than precipitation effects. This reduces the uncertainty due to the climate models, as precipitation trends in tropical regions are very diverse, projecting both drier and wetter conditions with the current state-of-the-art climate model ensemble. The magnitude of the projected changes differed considerably between the two dynamical malaria models, with one much more sensitive to climate change, highlighting that uncertainty in the malaria projections is also associated with the disease modelling approach.
Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech
2012-12-01
To predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry, we have conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable to dynamically assess method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitates the introduction of more advanced analytical technologies during the method lifecycle.
Uncertainty estimation in one-dimensional heat transport model for heterogeneous porous medium.
Chang, Ching-Min; Yeh, Hund-Der
2014-01-01
In many practical applications, rates of groundwater recharge and discharge are determined based on the analytical solution developed by Bredehoeft and Papadopulos (1965) to the one-dimensional steady-state heat transport equation. Groundwater flow processes are affected by the heterogeneity of subsurface systems, the details of which cannot be anticipated precisely. A great deal of uncertainty (variability) is therefore associated with applying Bredehoeft and Papadopulos' (1965) solution to field-scale heat transport problems. However, the quantification of the uncertainty involved in such applications has so far not been addressed, which is the objective of this work. In addition, the influence of the statistical properties of the log hydraulic conductivity field on the variability of the temperature field in a heterogeneous aquifer is also investigated. The results of the analysis demonstrate that the variability (or uncertainty) in the temperature field increases with the correlation scale of the log hydraulic conductivity covariance function, and that the variability of the temperature field also depends positively on position.
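As a hedged illustration of the Bredehoeft and Papadopulos (1965) solution the abstract builds on: the dimensionless steady-state temperature profile, and a simple inversion of one temperature observation for the Peclet number (which is proportional to the vertical recharge/discharge rate). The function names and the bisection approach are mine, not the authors':

```python
import math

def bp_profile(zeta, beta):
    """Bredehoeft-Papadopulos (1965) dimensionless steady-state temperature
    profile (T(z) - T0)/(TL - T0) at depth fraction zeta = z/L, for Peclet
    number beta (proportional to the vertical water flux)."""
    if abs(beta) < 1e-9:              # conduction-only limit: linear profile
        return zeta
    return (math.exp(beta * zeta) - 1.0) / (math.exp(beta) - 1.0)

def invert_beta(zeta, theta, lo=-10.0, hi=10.0, tol=1e-12):
    """Recover the Peclet number from one measured dimensionless temperature
    theta at depth fraction zeta, by bisection on the monotone profile."""
    f = lambda b: bp_profile(zeta, b) - theta
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

The sign of the recovered beta distinguishes downward (recharge) from upward (discharge) flow; the uncertainty analysis in the paper concerns how aquifer heterogeneity perturbs this idealized profile.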
Leedale, Joseph; Tompkins, Adrian M; Caminade, Cyril; Jones, Anne E; Nikulin, Grigory; Morse, Andrew P
2016-03-31
The effect of climate change on the spatiotemporal dynamics of malaria transmission is studied using an unprecedented ensemble of climate projections, employing three diverse bias correction and downscaling techniques, in order to partially account for uncertainty in climate-driven malaria projections. These large climate ensembles drive two dynamical and spatially explicit epidemiological malaria models to provide future hazard projections for the focus region of eastern Africa. While the two malaria models produce very distinct transmission patterns for the recent climate, their response to future climate change is similar in terms of sign and spatial distribution, with malaria transmission moving to higher altitudes in the East African Community (EAC) region, while transmission reduces in lowland, marginal transmission zones such as South Sudan. The climate model ensemble generally projects warmer and wetter conditions over EAC. The simulated malaria response appears to be driven by temperature rather than precipitation effects. This reduces the uncertainty due to the climate models, as precipitation trends in tropical regions are very diverse, projecting both drier and wetter conditions with the current state-of-the-art climate model ensemble. The magnitude of the projected changes differed considerably between the two dynamical malaria models, with one much more sensitive to climate change, highlighting that uncertainty in the malaria projections is also associated with the disease modelling approach.
Directory of Open Access Journals (Sweden)
N. N. Nekrasova
2016-01-01
Full Text Available Summary. This article proposes estimating the technological parameters of the mining and metallurgical industry (iron-ore reserves) whose values are given as fuzzy sets under conditions of uncertainty, using the balance and industrial methods of ore-reserve calculation. Because modelling of ore-extraction processes involves equation parameters that contain variables with different kinds of uncertainty, it is preferable to express all the information in the single formal language of fuzzy set theory. Thus, a model is proposed for calculating and evaluating iron-ore reserves by different methods under uncertainty of the geological information, on the basis of fuzzy set theory. In this case the undefined values are interpreted as intentionally "fuzzy", since this approach corresponds to the real industrial situation more closely than treating such quantities as random. It is taken into account that the probabilistic approach identifies uncertainty with randomness, whereas in practice the basic nature of the uncertainty in calculating iron-ore reserves is vagueness. Under the proposed approach, each fuzzy parameter has a corresponding membership function, which is determined using a general algorithm, as the result of algebraic operations on arbitrary membership functions by the inverse numerical method. Because many models describe the same production process by different methods (for example, the balance model or the industrial model) and under different assumptions, it is proposed to reconcile such models on the basis of a model for aggregating heterogeneous information. For matching this kind of information, generalizing it and adjusting the outcome parameters, it is expedient to use the apparatus of fuzzy set theory, which allows one to obtain quantitative characteristics of imprecisely specified parameters and make the
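A sketch of the fuzzy-arithmetic idea underlying such models: reserve-calculation inputs represented as triangular fuzzy numbers and propagated through a product via alpha-cut interval arithmetic. The numbers and names below are purely illustrative, not taken from the article:

```python
def alpha_cut(tri, alpha):
    """Alpha-cut interval of a triangular fuzzy number (low, modal, high)."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_mul(x, y, alpha):
    """Product of two positive triangular fuzzy numbers at one alpha level,
    by interval arithmetic on their alpha-cuts."""
    (xl, xu), (yl, yu) = alpha_cut(x, alpha), alpha_cut(y, alpha)
    return (xl * yl, xu * yu)

# Hypothetical inputs: ore-body volume (million m^3) and ore density (t/m^3).
volume = (9.0, 10.0, 12.0)
density = (3.0, 3.2, 3.3)
reserves_core = fuzzy_mul(volume, density, alpha=1.0)  # most plausible value
reserves_supp = fuzzy_mul(volume, density, alpha=0.0)  # full support interval
```

Sweeping alpha from 0 to 1 recovers the full membership function of the reserve estimate, which is the quantity that the balance and industrial models would then have to agree on under the aggregation scheme the article describes.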
Xiong, Wei; Skalský, Rastislav; Porter, Cheryl H.; Balkovič, Juraj; Jones, James W.; Yang, Di
2016-09-01
Understanding the interactions between agricultural production and climate is necessary for sound decision-making in climate policy. Gridded, high-resolution crop simulation has emerged as a useful tool for building this understanding, but large uncertainty in its use obstructs its capacity as a tool for devising adaptation strategies. Increasing attention has been given to the uncertainties arising from climate scenarios, input data, and model structure, but uncertainties due to model parameters and calibration are still largely unknown. Here, we use publicly available geographical data sets as input to the Environmental Policy Integrated Climate model (EPIC) for simulating global gridded maize yield. Impacts of climate change are assessed up to the year 2099 under a climate scenario generated by HadGEM2-ES under RCP 8.5. We apply five calibration strategies, each shifting one specific parameter per simulation, to understand the effects of calibration. Regionalizing crop phenology or harvest index appears effective for calibrating the model globally, but using different phenology values generates pronounced differences in the estimated climate impact. However, projected impacts of climate change on global maize production are consistently negative regardless of the parameter being adjusted. Different model-parameter values result in a modest uncertainty at the global level, with the difference in global yield change less than 30% by the 2080s. This uncertainty decreases if model calibration or input-data quality control is applied. Calibration has a larger effect at local scales, indicating the possible types and locations of adaptation.
Miñarro, Marta Doval; Castell-Balaguer, Nuria; Téllez, Laura; Mantilla, Enrique
2012-10-01
Observation-based methods are useful tools to explore the sensitivity of ozone concentrations to precursor controls. With the aim of assessing the ozone precursor sensitivity at two locations, Paterna (suburban) and Villar del Arzobispo (rural), in the Turia river basin in the east of Spain, the photochemical indicator O3/NOy and the Extent-of-Reaction (EOR) parameter have been calculated from field measurements. In Paterna, the O3/NOy ratio varied from 0 to 13 with an average value of 5.1 (SD 3.2), whereas the average value of the EOR was 0.43 (SD 0.14). In Villar del Arzobispo, the O3/NOy ratio ranged from 5 to 30 with a mean value of 13.6 (SD 4.7), and the EOR gave an average value of 0.72 (SD 0.11). The results show two different patterns of ozone production as a function of location. The suburban area shows a VOC-sensitive regime whereas the rural one shows a transition regime close to NOx-sensitive conditions. No seasonal differences in these regimes were observed along the monitoring campaigns. Finally, an analysis of the influence of the measurement quality of NOy, NOx and O3 on the uncertainty of the O3/NOy ratio and the EOR was performed, showing that the uncertainty of O3/NOy does not depend on either its value or the individual values of O3 and NOy but only on the quality of the O3 and NOy measurements. The maximum uncertainty is 26% as long as the combined uncertainties of O3 and NOy remain below 7.5%. The case of the EOR is different: its uncertainty depends on both the value of the EOR parameter and the individual concentration values of NOy and NOx. The uncertainty of the EOR estimation can be very high (>200%) if the combined uncertainties of both NOy and NOx are high (>7.5%), or, especially, if u(NOy) and u(NOx) differ considerably from each other (>3.5%).
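The contrast the abstract draws between the two indicators follows from standard first-order (GUM) uncertainty propagation. A generic sketch, assuming uncorrelated inputs and not reproducing the paper's exact figures:

```python
import math

def ratio_uncertainty(o3, u_o3, noy, u_noy):
    """First-order propagation for the indicator R = O3/NOy: the relative
    uncertainty of R is the root sum of squares of the relative
    uncertainties of O3 and NOy, independent of the values themselves."""
    r = o3 / noy
    u = r * math.hypot(u_o3 / o3, u_noy / noy)
    return r, u

def eor_uncertainty(noy, u_noy, nox, u_nox):
    """First-order propagation for EOR = (NOy - NOx)/NOy; here the result
    depends on the concentration values, not only the measurement quality."""
    eor = (noy - nox) / noy
    u = math.hypot(u_nox / noy, nox * u_noy / noy**2)
    return eor, u
```

The EOR expression shows why its uncertainty blows up when NOy and NOx are close (the difference in the numerator is then small relative to its propagated error), consistent with the behaviour reported above.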
van Berkel, M.; Zwart, Heiko J.; Hogeweij, G.M.D.; van der Steen, G.; van den Brand, H.; de Baar, M.R.
2014-01-01
In this paper, the estimation of the thermal diffusivity from perturbative experiments in fusion plasmas is discussed. The measurements used to estimate the thermal diffusivity suffer from stochastic noise. Accurate estimation of the thermal diffusivity should take this into account. It will be
Koch, Michael
Measurement uncertainty is one of the key issues in quality assurance. It has become increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose, and it also supports the decision whether a specification limit is exceeded. Estimating measurement uncertainty is often not trivial; several strategies have been developed for this purpose and are briefly described in this chapter. In addition, the different possibilities for taking uncertainty into account in compliance assessment are explained.
Schwarz, Lisa K; Villegas-Amtmann, Stella; Beltran, Roxanne S; Costa, Daniel P; Goetsch, Chandra; Hückstädt, Luis; Maresh, Jennifer L; Peterson, Sarah H
2015-01-01
Fat mass and body condition are important metrics in bioenergetics and physiological studies. They can also link foraging success with demographic rates, making them key components of models that predict population-level outcomes of environmental change. Therefore, it is important to incorporate uncertainty in physiological indicators if results will lead to species management decisions. Maternal fat mass in elephant seals (Mirounga spp.) can predict reproductive rate and pup survival, but no one has quantified or identified the sources of uncertainty for the two fat mass estimation techniques (labeled-water and truncated cones). The current cones method can provide estimates of proportion adipose tissue in adult females and proportion fat of juveniles in northern elephant seals (M. angustirostris) comparable to labeled-water methods, but it does not work for all cases or species. We reviewed components and assumptions of the technique via measurements of seven early-molt and seven late-molt adult females. We show that seals are elliptical on land, rather than the assumed circular shape, and skin may account for a high proportion of what is often defined as blubber. Also, blubber extends past the neck-to-pelvis region, and comparisons of new and old ultrasound instrumentation indicate previous measurements of sculp thickness may be biased low. Accounting for such differences, and incorporating new measurements of blubber density and proportion of fat in blubber, we propose a modified cones method that can isolate blubber from non-blubber adipose tissue and separate fat into skin, blubber, and core compartments. Lastly, we found that adipose tissue and fat estimates using tritiated water may be biased high during the early molt. Both the tritiated water and modified cones methods had high, but reducible, uncertainty. The improved cones method for estimating body condition allows for more accurate quantification of the various tissue masses and may also be
Skou, Peter B; Berg, Thilo A; Aunsbjerg, Stina D; Thaysen, Dorrit; Rasmussen, Morten A; van den Berg, Frans
2017-03-01
Reuse of process water in dairy ingredient production, and in food processing in general, opens the possibility for sustainable water regimes. Membrane filtration processes are an attractive source of recovered process water since the technology is already utilized in the dairy industry and its use is expected to grow considerably. At Arla Foods Ingredients (AFI), permeate from a reverse osmosis polisher filtration unit is sought to be reused as process water, replacing the intake of potable water. However, as for all dairy and food producers, the process water quality must be monitored continuously to ensure food safety. In the present investigation we found urea to be the main organic compound, which could potentially represent a microbiological risk. Near infrared spectroscopy (NIRS) in combination with multivariate modeling has a long-standing reputation as a real-time measurement technology in quality assurance. Urea was quantified using NIRS and partial least squares regression (PLS) in the concentration range 50-200 ppm (RMSEP = 12 ppm, R² = 0.88) in laboratory settings, with potential for on-line application. A drawback of using NIRS together with PLS is that uncertainty estimates are seldom reported, yet they are essential to establishing real-time risk assessment. In a multivariate regression setting, sample-specific prediction errors are needed, which complicates the uncertainty estimation. We give a straightforward strategy for implementing an already developed, but seldom used, method for estimating sample-specific prediction uncertainty, and we suggest an improvement. Comparing independent reference analyses with the sample-specific prediction error estimates showed that the method worked on industrial samples when the model was appropriate and unbiased, and was simple to implement.
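The abstract does not spell out which sample-specific error estimator is used. One standard approach for regression models is leverage-based: a sample far from the calibration data gets a larger predicted error. A minimal sketch with ordinary least squares standing in for PLS (all data and names are synthetic illustrations, not the paper's NIR data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration set (hypothetical stand-in for NIR predictor scores)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([100.0, 25.0])
y = X @ beta_true + rng.normal(scale=5.0, size=n)

# Fit by least squares and compute the calibration error (RMSEC)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n - X.shape[1]
rmsec = np.sqrt(resid @ resid / dof)

# Sample-specific prediction std via the leverage h = x (X'X)^-1 x'
XtX_inv = np.linalg.inv(X.T @ X)
def prediction_std(x_new):
    h = x_new @ XtX_inv @ x_new
    return rmsec * np.sqrt(1.0 + h)

x_center = np.array([1.0, 0.0])    # sample near the calibration centre
x_extreme = np.array([1.0, 3.0])   # sample extrapolating beyond it
print(prediction_std(x_center), prediction_std(x_extreme))
```

As expected, the extrapolating sample receives a larger sample-specific error estimate than one near the centre of the calibration data.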
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
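REPTool's core mechanism, Latin Hypercube Sampling of input errors propagated through a model, can be sketched without ArcGIS. The sampler below and the toy "raster model" are illustrative assumptions, not REPTool's actual implementation:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """LHS design on (0,1): one point per equal-probability stratum
    in each dimension, with strata shuffled independently per column."""
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_vars):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

rng = np.random.default_rng(42)
n = 1000
u = latin_hypercube(n, 2, rng)

# Hypothetical model for one raster cell: output = coeff * (value + error)
err = -5.0 + 10.0 * u[:, 0]      # spatial-data error ~ uniform(-5, 5)
coeff = 0.9 + 0.2 * u[:, 1]      # model coefficient  ~ uniform(0.9, 1.1)
output = coeff * (250.0 + err)   # distribution of model output for the cell

print(output.mean(), output.std())  # summarizes the cell's output uncertainty
```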
Directory of Open Access Journals (Sweden)
P. Stier
2013-03-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in the model components relevant for forcing calculations, and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host-model uncertainties in aerosol forcing experiments through prescription of identical aerosol radiative properties in twelve participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with a globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is −4.47 Wm−2 and the inter-model standard deviation is 0.55 Wm−2, corresponding to a relative standard deviation of 12%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.04 Wm−2, and the standard deviation increases to 1.01 Wm−2, corresponding to a significant relative standard deviation of 97%. However, the top-of-atmosphere forcing variability owing to absorption (subtracting the scattering case from the case with scattering and absorption) is low, with absolute (relative) standard deviations of 0.45 Wm−2 (8%) clear-sky and 0.62 Wm−2 (11%) all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host-model uncertainties could explain about 36% of the overall sulfate forcing diversity of 0.11 Wm−2 in the AeroCom Direct Radiative Effect experiment. Host-model errors in aerosol radiative forcing are largest in regions of uncertain host model
Energy Technology Data Exchange (ETDEWEB)
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
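The claim that a complete adjoint sensitivity analysis of Ax = b needs only one extra linear solve can be made concrete. For a response J = cᵀx, solving the adjoint system Aᵀλ = c gives all sensitivities at once, since dJ/db = λ. A minimal sketch (the matrices here are random illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)  # well-conditioned system matrix
b = rng.normal(size=4)
c = rng.normal(size=4)                          # response functional J = c.x

x = np.linalg.solve(A, b)
J = c @ x

# Adjoint: one solve of A^T lam = c yields dJ/db_i = lam_i for every i
lam = np.linalg.solve(A.T, c)

# Check one component against a finite difference; since J is linear in b,
# the finite difference matches the adjoint sensitivity up to rounding
i, eps = 2, 1e-6
b_pert = b.copy()
b_pert[i] += eps
fd = (c @ np.linalg.solve(A, b_pert) - J) / eps
print(lam[i], fd)
```

One adjoint solve replaces one forward solve per input parameter, which is the efficiency advantage the report attributes to the adjoint method for programs with many inputs.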
Stochastic long term modelling of a drainage system with estimation of return period uncertainty
DEFF Research Database (Denmark)
Thorndahl, Søren
2009-01-01
Long term prediction of maximum water levels and combined sewer overflow (CSO) in drainage systems is associated with large uncertainties, especially concerning rainfall inputs, parameters, and the assessment of return periods. Traditionally, return periods of drainage system predictions are based on ranking, but this paper proposes a new, Monte Carlo based methodology for stochastic prediction and assessment of return periods. Based on statistics of characteristic rainfall parameters and correlation with drainage system predictions, it is possible to predict return periods more reliably, and with smaller confidence bands compared to the traditional methodology.
Stochastic Long Term Modelling of a Drainage System with Estimation of Return Period Uncertainty
DEFF Research Database (Denmark)
Thorndahl, Søren
2008-01-01
Long term prediction of maximum water levels and combined sewer overflow (CSO) in drainage systems is associated with large uncertainties, especially concerning rainfall inputs, parameters, and the assessment of return periods. Traditionally, return periods of drainage system predictions are based on ranking, but this paper proposes a new, Monte Carlo based methodology for stochastic prediction and assessment of return periods. Based on statistics of characteristic rainfall parameters and correlation with drainage system predictions, it is possible to predict return periods more reliably, and with smaller confidence bands compared to the traditional methodology.
Prudencio, Ernesto E.
2012-01-01
QUESO is a collection of statistical algorithms and programming constructs supporting research into the uncertainty quantification (UQ) of models and their predictions. It has been designed with three objectives: it should (a) be sufficiently abstract in order to handle a large spectrum of models, (b) be algorithmically extensible, allowing an easy insertion of new and improved algorithms, and (c) take advantage of parallel computing, in order to handle realistic models. Such objectives demand a combination of an object-oriented design with robust software engineering practices. QUESO is written in C++, uses MPI, and leverages libraries already available to the scientific community. We describe some UQ concepts, present QUESO, and list planned enhancements.
Random Forests as a tool for estimating uncertainty at pixel-level in SAR image classification
DEFF Research Database (Denmark)
Loosvelt, Lien; Peters, Jan; Skriver, Henning
2012-01-01
It is widely acknowledged that model inputs can cause considerable errors in the model output. Since land cover maps obtained from the classification of remote sensing data are frequently used as input to spatially explicit environmental models, it is important to provide information regarding the uncertainty of the classification. In this study, we introduce Random Forests for the probabilistic mapping of vegetation from high-dimensional remote sensing data and present a comprehensive methodology to assess and analyze classification uncertainty based on the local probabilities of class membership. We apply this method to SAR image data and show that classification uncertainty at the pixel level can be easily assessed when using the Random Forests algorithm.
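The abstract does not state the exact uncertainty measure derived from the local class-membership probabilities. One common scalar summary of such per-pixel probability vectors (which Random Forests produce as class vote fractions) is the normalized Shannon entropy; the sketch below assumes that choice and uses a hypothetical probability array:

```python
import numpy as np

def pixel_uncertainty(probs):
    """Normalized Shannon entropy of per-pixel class-membership probabilities.

    probs: array of shape (n_pixels, n_classes), rows summing to 1.
    Returns values in [0, 1]: 0 = one class certain, 1 = all classes equal.
    """
    p = np.clip(probs, 1e-12, 1.0)
    ent = -(p * np.log(p)).sum(axis=1)
    return ent / np.log(probs.shape[1])

probs = np.array([
    [0.97, 0.02, 0.01],   # confident pixel: low uncertainty
    [0.40, 0.35, 0.25],   # ambiguous pixel: high uncertainty
])
print(pixel_uncertainty(probs))
```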
Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J
2017-06-01
In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, providing the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries, and the associated system risk, by incorporating possibility and necessity measures, which are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results not only facilitate identification of optimal effluent-trading schemes, but also provide insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk affects the decision alternatives on the trading scheme as well as the system benefit. Compared with conventional optimization methods, BESMA is advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties existing in nutrient transport behaviors to improve the accuracy of water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision
Directory of Open Access Journals (Sweden)
Vesna Režić Dereani
2010-09-01
The aim of this research is to describe quality control procedures, validation procedures, and measurement uncertainty (MU) determination as important elements of quality assurance in a food microbiology laboratory, for both qualitative and quantitative types of analysis. Accreditation is conducted according to ISO 17025:2007, General requirements for the competence of testing and calibration laboratories, which guarantees compliance with standard operating procedures and the technical competence of the staff involved in the tests; it has recently been widely introduced in food microbiology laboratories in Croatia. Besides the introduction of a quality manual and many general documents, some of the most demanding procedures in routine microbiology laboratories are the measurement uncertainty (MU) procedures and the design of validation experiments. These procedures are not yet standardized even at the international level, and they require practical microbiological knowledge together with statistical competence. Differences between validation experiment designs for quantitative and qualitative food microbiology analyses are discussed in this research, and practical solutions are briefly described. MU for quantitative determinations is a more demanding issue than qualitative MU calculation. MU calculations are based on external proficiency testing data and internal validation data. In this paper, practical schematic descriptions for both procedures are shown.
Directory of Open Access Journals (Sweden)
Yunpeng Song
2015-03-01
Measurement of force on a micro- or nano-Newton scale is important when exploring the mechanical properties of materials in the biophysics and nanomechanical fields. The atomic force microscope (AFM) is widely used in microforce measurement. The cantilever probe works as an AFM force sensor, and the spring constant of the cantilever is of great significance to the accuracy of the measurement results. This paper presents a normal spring constant calibration method with the combined use of an electromagnetic balance and a homemade AFM head. When the cantilever presses the balance, its deflection is detected through an optical lever integrated in the AFM head. Meanwhile, the corresponding bending force is recorded by the balance. Then the spring constant can be simply calculated using Hooke’s law. During the calibration, a feedback loop is applied to control the deflection of the cantilever. Errors that may affect the stability of the cantilever could be compensated rapidly. Five types of commercial cantilevers with different shapes, stiffness, and operating modes were chosen to evaluate the performance of our system. Based on the uncertainty analysis, the expanded relative standard uncertainties of the normal spring constant of most measured cantilevers are believed to be better than 2%.
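The calibration principle described above, Hooke's law F = k·x with force from the balance and deflection from the optical lever, amounts to fitting a line through force-deflection pairs; the slope is the spring constant and the fit covariance gives its uncertainty. A minimal sketch with invented illustrative readings (not the paper's data):

```python
import numpy as np

# Hypothetical calibration record: cantilever deflection (nm) vs. balance force (nN)
deflection_nm = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
force_nN = np.array([0.1, 8.2, 16.1, 23.9, 32.2, 40.0])

# Hooke's law F = k * x: the slope of the force-deflection line is k.
# Units: nN/nm is numerically equal to N/m.
(k, offset), cov = np.polyfit(deflection_nm, force_nN, 1, cov=True)
k_std = np.sqrt(cov[0, 0])  # standard uncertainty of the slope from the fit
print(f"k = {k:.3f} +/- {k_std:.3f} N/m")
```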
Directory of Open Access Journals (Sweden)
Fezzani Amor
2017-01-01
The performance of photovoltaic (PV) modules is affected by outdoor conditions. Outdoor testing consists of installing a module and collecting electrical performance data and climatic data over a certain period of time; it can also include the study of long-term performance under real operating conditions. Tests were carried out at URAER, located in the desert region of Ghardaïa (Algeria), which is characterized by high irradiation and temperature levels. The degradation of a PV module with temperature and time of exposure to sunlight contributes significantly to the final output of the module, as the output decreases each year. This paper presents a comparative study of different methods to evaluate the degradation of PV modules after long-term exposure of more than 12 years in a desert region, and quantifies the measurement uncertainties. Firstly, the evaluation uses three methods: visual inspection, data given by the Solmetric PVA-600 Analyzer translated to Standard Test Conditions (STC), and the translation equations of IEC 60891. Secondly, the degradation rates are calculated for all methods. Finally, a comparison is made between the degradation rates given by the Solmetric PVA-600 Analyzer, calculated by a simulation model, and calculated by two methods (IEC 60891 procedures 1 and 2). A detailed uncertainty study was carried out in order to improve the procedure and the measurement instrument.
Uncertainty in estimated values of forestry project: a case study of ...
African Journals Online (AJOL)
The values of estimates for direct and taungya plantations at Ago-Owu forest reserve were less sensitive to increases in the costs of inputs (seeds, labour, land and capital). It is recommended that, since the values of estimates are highly sensitive to increases in discount factors, more effort should be channeled by the ...
Critical headway estimation under uncertainty and non-ideal communication conditions
Kester, L.J.H.M.; Willigen, W. van; Jongh, J.F.C.M de
2014-01-01
This article proposes a safety check extension to Adaptive Cruise Control systems where the critical headway time is estimated in real-time. This critical headway time estimate enables automated reaction to crisis circumstances such as when a preceding vehicle performs an emergency brake. We discuss
Over, T.M.; Duncker, J.J.; Gonzalez-Castro, J. A.; ,
2004-01-01
Estimates of uncertainty of discharge at time scales from 5 minutes to 1 year were obtained for two index-velocity gages on the Chicago Sanitary and Ship Canal (CSSC), Ill., instrumented with acoustic velocity meters (AVMs). The velocity measurements obtained from the AVMs are corrected to a mean channel velocity by use of an index velocity rating (IVR). The IVR is a regression-derived relation between the AVM velocity estimates and those obtained using acoustic Doppler current profilers (ADCPs). The uncertainty estimation method is based on the first-order variance method, but the AVM velocity error is estimated from an empirical perspective, using the statistics of the IVR regression. Some uncertainty exists regarding whether to include the standard error of the IVR regression (σ²) in the discharge uncertainty. At the 5-minute time scale when σ² is included, it has the dominant contribution to the discharge uncertainty, and the discharge uncertainty (expressed as the standard deviation of the discharge estimate) is about 5 m³/s at one gage and 8 m³/s at the other, independent of discharge. When σ² is not included, the discharge uncertainty at the 5-minute time scale is much smaller (about 0.5 m³/s) and depends more strongly on discharge. For time scales of one day or greater, and when σ² is not included, the uncertainty of the IVR parameters dominates the discharge uncertainty. The value of the discharge uncertainty is about 0.4 m³/s for one gage and 0.5 m³/s for the other at long time scales.
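The first-order variance method named above propagates input variances through the measurement equation via its partial derivatives: Var(Q) ≈ Σᵢ (∂Q/∂xᵢ)² u(xᵢ)². A generic sketch with numerical derivatives follows; the toy index-velocity discharge model and all coefficient values are illustrative assumptions, not the gages' actual ratings:

```python
import numpy as np

def first_order_variance(f, x, u):
    """Delta-method variance: Var(f) ~ sum_i (df/dx_i)**2 * u_i**2,
    with partial derivatives taken by central finite differences."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    var = 0.0
    for i in range(x.size):
        h = 1e-6 * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        var += ((f(xp) - f(xm)) / (2.0 * h)) ** 2 * u[i] ** 2
    return var

# Hypothetical index-velocity discharge: Q = (a + b * v_avm) * area
Q = lambda p: (p[0] + p[1] * p[2]) * p[3]       # p = [a, b, v_avm, area]
x0 = [0.05, 1.02, 0.5, 300.0]                   # illustrative values
u0 = [0.01, 0.02, 0.03, 2.0]                    # illustrative std uncertainties
u_Q = np.sqrt(first_order_variance(Q, x0, u0))
print(f"u(Q) = {u_Q:.2f} m^3/s")
```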
Directory of Open Access Journals (Sweden)
Herbert Hoi
In many socially monogamous species, both sexes seek copulations outside the pair bond in order to increase their reproductive success. In response, males adopt counter-strategies to combat the risk of losing paternity. However, no study so far has tried to experimentally prove the function of behaviour for paternity assurance. Introducing a potential extra-pair partner during the female fertile period provides a standardised method to examine how pair members respond immediately (e.g. increased mate guarding or copulation frequency) or in the long term (e.g. later parental investment under paternity uncertainty). In this study on a socially monogamous passerine species, we experimentally confronted pairs of reed warblers with a conspecific male (a caged male simulating an intruder) during egg-laying. Our results revealed that the occurrence of an intruder during that period triggered aggression against the intruder, depending on the presence of the female. The male territory owner also attacked his female partner to drive her away from the intruder; thus territory defence in reed warblers also serves to protect paternity. The increase in paternity uncertainty did not affect later paternal investment, and paternal investment was also independent of actual paternity losses. In females, the experiment elicited both immediate and long-term responses. For example, female copulation solicitations during the intruder experiment were only observed for females which later turned out to have extra-pair chicks in their nest. As a long-term response, females faced with an intruder later invested less in offspring feeding and had fewer extra-pair chicks in their nests. Extra-pair paternity also seems to be affected by female quality (body size). In conclusion, female reed warblers seem to seek extra-pair fertilizations, but we could demonstrate that males adopt paternity assurance tactics which appear to efficiently reduce paternity uncertainty.
The Uncertainty of Biomass Estimates from Modeled ICESat-2 Returns Across a Boreal Forest Gradient
Montesano, P. M.; Rosette, J.; Sun, G.; North, P.; Nelson, R. F.; Dubayah, R. O.; Ranson, K. J.; Kharuk, V.
2014-01-01
The Forest Light (FLIGHT) radiative transfer model was used to examine the uncertainty of vegetation structure measurements from NASA's planned ICESat-2 photon-counting light detection and ranging (LiDAR) instrument across a synthetic Larix forest gradient in the taiga-tundra ecotone. The simulations demonstrate how measurements from the planned spaceborne mission, which differ from those of previous LiDAR systems, may perform across a boreal forest-to-non-forest structure gradient in a globally important ecological region of northern Siberia. We used a modified version of FLIGHT to simulate the acquisition parameters of ICESat-2. Modeled returns were analyzed from collections of sequential footprints along LiDAR tracks (link-scales) of lengths ranging from 20 m to 90 m. These link-scales traversed synthetic forest stands that were initialized with parameters drawn from field surveys in Siberian Larix forests. LiDAR returns from vegetation were compiled for 100 simulated LiDAR collections for each 10 Mg ha⁻¹ interval in the 0-100 Mg ha⁻¹ above-ground biomass density (AGB) forest gradient. Canopy height metrics were computed and AGB was inferred from empirical models. The root mean square error (RMSE) and the RMSE uncertainty associated with the distribution of inferred AGB within each AGB interval across the gradient were examined. Simulation results for bright daylight and low vegetation reflectivity conditions with no topographic relief show that 1-2 photons are returned for 79%-88% of LiDAR shots. Signal photons account for approximately 67% of all LiDAR returns, while approximately 50% of shots result in 1 signal photon returned. The proportions of these signal photon returns do not differ significantly (p > 0.05) for AGB intervals greater than 20 Mg ha⁻¹. The 50 m link-scale approximates the finest horizontal resolution (length) at which photon-counting LiDAR collection provides strong model
Energy Technology Data Exchange (ETDEWEB)
Yamaji, Bogdan; Aszodi, Attila [Budapest University of Technology and Economics (Hungary). Inst. of Nuclear Techniques
2016-09-15
In this paper, measurement results from the experimental modelling of a molten salt reactor concept are presented along with a detailed uncertainty analysis of the experimental system. Non-intrusive flow measurements are carried out on the scaled and segmented mock-up of a homogeneous, single-region molten salt fast reactor concept. The uncertainty assessment of the particle image velocimetry (PIV) measurement system applied to the scaled and segmented model is presented in detail. The analysis covers the error sources of the measurement system (laser, recording camera, etc.) and the specific conditions (de-warping of measurement planes) originating in the geometry of the investigated domain. The effect of sample size in the ensemble-averaged PIV measurements is discussed as well. An additional two-loop operation mode is also presented, and the analysis of the measurement results confirms that, without enhancement, nominal and other operating conditions will lead to strong, unfavourable separation in the core flow. This implies that internal flow distribution structures will be necessary for the optimisation of the core coolant flow. Preliminary CFD calculations are presented to help the design of a perforated plate located above the inlet region, whose purpose is to reduce recirculation near the cylindrical wall and enhance the uniformity of the core flow distribution.
Marks, Harry M; Tohamy, Soumaya M; Tsui, Flora
2013-06-01
Because of numerous reported foodborne illness cases due to non-O157:H7 Shiga toxin-producing Escherichia coli (STEC) bacteria in the United States and elsewhere, interest in requiring better control of these pathogens in the food supply has increased. Successfully putting forth regulations depends upon cost-benefit analyses. Policy decisions often depend upon an evaluation of the uncertainty of the estimates used in such an analysis. This article presents an approach for estimating the uncertainties of estimated expected cost per illness and total annual costs of non-O157 STEC-related illnesses due to uncertainties associated with (i) recent FoodNet data and (ii) methodology proposed by Scallan et al. in 2011. The FoodNet data categorize illnesses regarding hospitalization and death. We obtained the illness-category costs from the foodborne illness cost calculator of the U.S. Department of Agriculture, Economic Research Service. Our approach for estimating attendant uncertainties differs from that of Scallan et al. because we used a classical bootstrap procedure for estimating uncertainty of an estimated parameter value (e.g., mean value), reflecting the design of the FoodNet database, whereas the other approach results in an uncertainty distribution that includes an extraneous contribution due to the underlying variability of the distribution of illnesses among different sites. For data covering 2005 through 2010, we estimate that the average cost per illness was about $450, with a 98% credible interval of $230 to $1,000. This estimate and range are based on estimations of about one death and 100 hospitalizations per 34,000 illnesses. Our estimate of the total annual cost is about $51 million, with a 98% credible interval of $19 million to $122 million. The uncertainty distribution for total annual cost is approximated well by a lognormal distribution, with mean and standard deviations for the log-transformed costs of 10.765 and 0.390, respectively.
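The classical bootstrap used above for the uncertainty of a mean can be sketched in a few lines: resample the observed cases with replacement, recompute the statistic, and read an interval off the resampled distribution. The cost data below are synthetic illustrations, not the FoodNet data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-illness costs (right-skewed, as illness-cost data tend to be)
costs = rng.lognormal(mean=5.0, sigma=1.5, size=500)

# Classical bootstrap: resample illnesses with replacement, recompute the mean
boot_means = np.array([
    rng.choice(costs, size=costs.size, replace=True).mean()
    for _ in range(5000)
])

# 98% percentile interval, matching the interval level quoted in the study
lo, hi = np.percentile(boot_means, [1, 99])
print(f"mean cost ~ {costs.mean():.0f}, 98% interval ({lo:.0f}, {hi:.0f})")
```

Note that resampling individual cases is what keeps the interval tied to sampling uncertainty of the estimate, as the authors argue, rather than to variability of illnesses among sites.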
A PC program for estimating measurement uncertainty for aeronautics test instrumentation
Blumenthal, Philip Z.
1995-01-01
A personal computer program was developed which provides aeronautics and operations engineers at Lewis Research Center with a uniform method to quickly provide values for the uncertainty in test measurements and research results. The software package used for performing the calculations is Mathcad 4.0, a Windows version of a program which provides an interactive user interface for entering values directly into equations with immediate display of results. The error contribution from each component of the system is identified individually in terms of the parameter measured. The final result is given in common units, SI units, and percent of full scale range. The program also lists the specifications for all instrumentation and calibration equipment used for the analysis. It provides a presentation-quality printed output which can be used directly for reports and documents.
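The per-component error combination such a program performs typically follows the root-sum-square rule for independent components; a minimal sketch (the channel components and their values are hypothetical, not Lewis instrumentation specifications):

```python
import math

def combined_uncertainty(components):
    """Root-sum-square (RSS) combination of independent error components,
    each expressed in the units of the measured parameter."""
    return math.sqrt(sum(c ** 2 for c in components))

def as_percent_full_scale(uncertainty, full_scale):
    """Express the combined uncertainty as percent of full scale range."""
    return 100.0 * uncertainty / full_scale

# Hypothetical pressure channel: transducer, signal conditioner, A/D converter.
u = combined_uncertainty([0.25, 0.10, 0.05])     # psi
pct = as_percent_full_scale(u, full_scale=50.0)  # percent of full scale
```

Note how the largest component dominates the RSS result, which is why identifying each component individually is worthwhile.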
DEFF Research Database (Denmark)
Jones, Mark Nicholas; Hukkerikar, Amol; Sin, Gürkan
During the design of a chemical process, engineers typically switch from simple (shortcut) calculations to more detailed rigorous models to perform mass and energy balances around unit operations and to design process equipment involved in that process. The choice of the most appropriate thermodynamic and thermo-physical models is critical to obtain a feasible and operable process design, and many guidelines pertaining to this can be found in the literature. But even if appropriate models have been chosen, the user needs to keep in mind that these models contain uncertainties which may propagate ... have shown a significant impact on the reflux ratio of the extractive distillation process. In general, systematic sensitivity analysis should be part of process design efforts and can be expected to contribute to better-informed and reliable design solutions in chemical industries.
DEFF Research Database (Denmark)
Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens
2016-01-01
This study presents new group contribution (GC) models for the prediction of Lower and Upper Flammability Limits (LFL and UFL), Flash Point (FP) and Auto Ignition Temperature (AIT) of organic chemicals applying the Marrero/Gani (MG) method. Advanced methods for parameter estimation using robust regression and outlier treatment have been applied to achieve high accuracy. Furthermore, linear error propagation based on the covariance matrix of estimated parameters was performed. Therefore, every estimated property value of the flammability-related properties is reported together with its corresponding 95... .0% and 0.99 for FP as well as 6.4% and 0.76 for AIT. Moreover, the temperature-dependence of the LFL property was studied. A compound-specific proportionality constant (KLFL) between LFL and temperature is introduced and an MG GC model to estimate KLFL is developed. Overall the ability to predict flammability...
Methods and uncertainty estimations of 3-D structural modelling in crystalline rocks: a case study
Schneeberger, Raphael; de La Varga, Miguel; Egli, Daniel; Berger, Alfons; Kober, Florian; Wellmann, Florian; Herwegh, Marco
2017-09-01
Exhumed basement rocks are often dissected by faults, the latter controlling physical parameters such as rock strength, porosity, or permeability. Knowledge on the three-dimensional (3-D) geometry of the fault pattern and its continuation with depth is therefore of paramount importance for applied geology projects (e.g. tunnelling, nuclear waste disposal) in crystalline bedrock. The central Aar massif (Central Switzerland) serves as a study area where we investigate the 3-D geometry of the Alpine fault pattern by means of both surface (fieldwork and remote sensing) and underground (mapping of the Grimsel Test Site) information. The fault zone pattern consists of planar steep major faults (kilometre scale) interconnected with secondary relay faults (hectometre scale). Starting with surface data, we present a workflow for structural 3-D modelling of the primary faults based on a comparison of three extrapolation approaches based on (a) field data, (b) Delaunay triangulation, and (c) a best-fitting moment of inertia analysis. The quality of these surface-data-based 3-D models is then tested with respect to the fit of the predictions with the underground appearance of faults. All three extrapolation approaches result in a close fit (> 10 %) when compared with underground rock laboratory mapping. Subsequently, we performed a statistical interpolation based on Bayesian inference in order to validate and further constrain the uncertainty of the extrapolation approaches. This comparison indicates that fieldwork at the surface is key for accurately constraining the geometry of the fault pattern and enabling a proper extrapolation of major faults towards depth. Considerable uncertainties, however, persist with respect to smaller-sized secondary structures because of their limited spatial extensions and unknown reoccurrence intervals.
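The "best-fitting moment of inertia" idea of approach (c) amounts to taking the plane normal as the eigenvector of the point cloud's covariance matrix with the smallest eigenvalue; a sketch with synthetic coplanar points (not the Aar massif data):

```python
import numpy as np

def best_fit_plane(points):
    """Moment-of-inertia style plane fit: the plane normal is the
    eigenvector of the points' covariance with smallest eigenvalue."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)      # 3x3 covariance of x, y, z
    w, v = np.linalg.eigh(cov)            # eigenvalues ascending
    normal = v[:, 0]                      # eigenvector of smallest eigenvalue
    return centroid, normal

# Synthetic fault-trace points lying exactly in the plane z = 0.
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (2, 3, 0)]
centroid, normal = best_fit_plane(pts)
```

For noisy field data the smallest eigenvalue is no longer zero; its magnitude then gives a rough measure of how planar the mapped fault really is.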
Essays on Estimation of Technical Efficiency and on Choice Under Uncertainty
Bhattacharyya, Aditi
2009-01-01
In the first two essays of this dissertation, I construct a dynamic stochastic production frontier incorporating the sluggish adjustment of inputs, measure the speed of adjustment of output in the short-run, and compare the technical efficiency estimates from such a dynamic model to those from a conventional static model that is based on the assumption that inputs are instantaneously adjustable in a production system. I provide estimation methods for technical efficiency of production units a...
Haas, Evan; DeLuccia, Frank
2016-01-01
In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to filter out the higher quality measurements of local navigation error for inclusion in statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.
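The translation-for-maximum-correlation step can be illustrated with a brute-force search. This is a toy sketch on synthetic arrays; the actual ABI/Landsat processing upsamples and downsamples the imagery first and works with real sub-images:

```python
import numpy as np

def registration_offset(ref, img, max_shift=3):
    """Find the (row, col) translation of `img` that maximizes correlation
    with `ref`, brute-forcing a +/- max_shift search window."""
    best, best_score = (0, 0), -np.inf
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dr, axis=0), dc, axis=1)
            score = float(np.sum(ref * shifted))
            if score > best_score:
                best, best_score = (dr, dc), score
    return best

rng = np.random.default_rng(0)
truth = rng.random((32, 32))
# Simulate a known "navigation error" of 2 rows and -1 column.
moved = np.roll(np.roll(truth, -2, axis=0), 1, axis=1)
offset = registration_offset(truth, moved)
```

The recovered offset is the simulated navigation error; in practice the per-sub-image offsets are then averaged, as the abstract describes.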
Despax, Aurélien; Perret, Christian; Garçon, Rémy; Hauet, Alexandre; Belleville, Arnaud; Le Coz, Jérôme; Favre, Anne-Catherine
2016-02-01
Streamflow time series provide baseline data for many hydrological investigations. Errors in the data mainly occur through uncertainty in gauging (measurement uncertainty) and uncertainty in the determination of the stage-discharge relationship based on gaugings (rating curve uncertainty). As the velocity-area method is the measurement technique typically used for gaugings, it is fundamental to estimate its level of uncertainty. Different methods are available in the literature (ISO 748, Q+, IVE), all with their own limitations and drawbacks. Among the terms forming the combined relative uncertainty in measured discharge, the component relating to the limited number of verticals often contributes a large part of the relative uncertainty. It should therefore be estimated carefully. In the ISO 748 standard, proposed values of this uncertainty component depend only on the number of verticals, without considering their distribution with respect to the depth and velocity cross-sectional profiles. The Q+ method is sensitive to a user-defined parameter, while it is questionable whether the IVE method is applicable to stream-gaugings performed with a limited number of verticals. To address the limitations of existing methods, this paper presents a new methodology, called FLow Analog UnceRtainty Estimation (FLAURE), to estimate the uncertainty component relating to the limited number of verticals. High-resolution reference gaugings (with 31 and more verticals) are used to assess the uncertainty component through a statistical analysis. Instead of subsampling the verticals of these reference stream-gaugings purely randomly, a subsampling method is developed in a way that mimics the behavior of a hydrometric technician. A sampling quality index (SQI) is suggested and appears to be a more explanatory variable than the number of verticals. This index takes into account the spacing between verticals and the variation of unit flow between two verticals. To compute the
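The velocity-area gauging that these uncertainty methods address computes discharge from a set of verticals; a sketch of the midsection variant, in which each vertical contributes its depth times velocity over the half-widths to its neighbours (the five verticals are hypothetical):

```python
def midsection_discharge(stations, depths, velocities):
    """Velocity-area (midsection) method: each vertical i contributes
    v_i * d_i * (half-distance to its neighbouring verticals)."""
    q = 0.0
    n = len(stations)
    for i in range(n):
        left = stations[0] if i == 0 else 0.5 * (stations[i - 1] + stations[i])
        right = stations[-1] if i == n - 1 else 0.5 * (stations[i] + stations[i + 1])
        q += velocities[i] * depths[i] * (right - left)
    return q

# Hypothetical 5-vertical gauging: positions (m), depths (m), mean velocities (m/s).
q = midsection_discharge([0, 2, 4, 6, 8],
                         [0.0, 1.0, 1.5, 1.0, 0.0],
                         [0.0, 0.5, 0.8, 0.5, 0.0])
```

Dropping or shifting verticals in such a computation is exactly the subsampling experiment the FLAURE methodology formalizes.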
Chang, Kelly C; Dutta, Sara; Mirams, Gary R; Beattie, Kylie A; Sheng, Jiansong; Tran, Phu N; Wu, Min; Wu, Wendy W; Colatsky, Thomas; Strauss, David G; Li, Zhihua
2017-01-01
The Comprehensive in vitro Proarrhythmia Assay (CiPA) is a global initiative intended to improve drug proarrhythmia risk assessment using a new paradigm of mechanistic assays. Under the CiPA paradigm, the relative risk of drug-induced Torsade de Pointes (TdP) is assessed using an in silico model of the human ventricular action potential (AP) that integrates in vitro pharmacology data from multiple ion channels. Thus, modeling predictions of cardiac risk liability will depend critically on the variability in pharmacology data, and uncertainty quantification (UQ) must comprise an essential component of the in silico assay. This study explores UQ methods that may be incorporated into the CiPA framework. Recently, we proposed a promising in silico TdP risk metric (qNet), which is derived from AP simulations and allows separation of a set of CiPA training compounds into Low, Intermediate, and High TdP risk categories. The purpose of this study was to use UQ to evaluate the robustness of TdP risk separation by qNet. Uncertainty in the model parameters used to describe drug binding and ionic current block was estimated using the non-parametric bootstrap method and a Bayesian inference approach. Uncertainty was then propagated through AP simulations to quantify uncertainty in qNet for each drug. UQ revealed lower uncertainty and more accurate TdP risk stratification by qNet when simulations were run at concentrations below 5× the maximum therapeutic exposure (Cmax). However, when drug effects were extrapolated above 10× Cmax, UQ showed that qNet could no longer clearly separate drugs by TdP risk. This was because, for most of the pharmacology data, the amount of current block measured was limited by design considerations that preclude an accurate determination of drug IC50 values in vitro. Thus, we demonstrate that UQ provides valuable information about in silico modeling predictions that can inform future proarrhythmic risk evaluation of drugs under the CiPA paradigm.
Sabbatini, Simone; Fratini, Gerardo; Fidaleo, Marcello; Papale, Dario
2017-04-01
the cumulate fluxes is correlated to its magnitude according to a power function. (v) The number of processing runs can be safely reduced to 32, and in most cases to 16. Further reductions are possible, especially to roughly quantify the uncertainty, but tend to oversaturate the design, resulting in aliases between factors and interactions and making it very difficult to understand their importance. Based on those results, we suggest that the systematic uncertainty of EC measurements from the post-field raw data processing can be estimated with one of the following methods (in order of increasing accuracy): (i) applying a power function to a single value of the yearly cumulate; (ii) combining 4 different processing options: '2D rotations' and 'planar fit' for CR and 'block average' and 'linear detrending' for TR; (iii) performing a fractional factorial analysis of 32 (16) combinations of different processing options. The increasing computational power of computers allows, and will increasingly allow in the future, running more parallel routines in acceptable time.
Statistical Information and Uncertainty: A Critique of Applications in Experimental Psychology
Directory of Open Access Journals (Sweden)
Donald Laming
2010-04-01
Full Text Available This paper presents, first, a formal exploration of the relationships between information (statistically defined), statistical hypothesis testing, the use of hypothesis testing in reverse as an investigative tool, channel capacity in a communication system, uncertainty, the concept of entropy in thermodynamics, and Bayes’ theorem. This exercise brings out the close mathematical interrelationships between different applications of these ideas in diverse areas of psychology. Subsequent illustrative examples are grouped under (a) the human operator as an ideal communications channel, (b) the human operator as a purely physical system, and (c) Bayes’ theorem as an algorithm for combining information from different sources. Some tentative conclusions are drawn about the usefulness of information theory within these different categories. (a) The idea of the human operator as an ideal communications channel has long been abandoned, though it provides some lessons that still need to be absorbed today. (b) Treating the human operator as a purely physical system provides a platform for the quantitative exploration of many aspects of human performance by analogy with the analysis of other physical systems. (c) The use of Bayes’ theorem to calculate the effects of prior probabilities and stimulus frequencies on human performance is probably misconceived, but it is difficult to obtain results precise enough to resolve this question.
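Two of the quantities the paper interrelates, Shannon entropy as a measure of uncertainty and Bayes' theorem as a rule for combining information, can be computed directly (the probabilities below are illustrative only):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum p*log2(p): the uncertainty of a source."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def bayes_posterior(prior, likelihoods):
    """Combine a prior over hypotheses with the likelihood of the observed
    datum under each hypothesis; return the normalized posterior."""
    joint = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(joint)
    return [j / z for j in joint]

h = entropy_bits([0.5, 0.25, 0.25])            # 1.5 bits of uncertainty
post = bayes_posterior([0.9, 0.1], [0.2, 0.8]) # prior updated by one observation
```

The entropy value is the channel-capacity currency the paper discusses; the posterior shows how stimulus frequencies (the prior) and evidence interact.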
Sittig, S.; Vrugt, J. A.; Kasteel, R.; Groeneweg, J.; Vereecken, H.
2011-12-01
Persistent antibiotics in the soil potentially contaminate the groundwater and affect the quality of drinking water. To improve our understanding of antibiotic transport in soils, we performed laboratory transport experiments in soil columns under constant irrigation conditions with repeated applications of chloride and radio-labeled sulfadiazine (SDZ). The tracers were incorporated in the first centimeter, either with pig manure or with solution. Breakthrough curves and concentration profiles of the parent compound and the main transformation products were measured. The goal is to describe the observed nonlinear and kinetic transport behavior of SDZ. Our analysis starts with synthetic transport data for the given laboratory flow conditions for tracers which exhibit increasingly complex interactions with the solid phase. This first step is necessary to benchmark our inverse modeling approach for ideal situations. Then we analyze the transport behavior using the column experiments in the laboratory. Our analysis uses a Markov chain Monte Carlo sampler (Differential Evolution Adaptive Metropolis algorithm, DREAM) to efficiently search the parameter space of an advective-dispersion model. Sorption of the antibiotics to the soil was described using a model accounting for both reversible and irreversible sorption. This presentation will discuss our initial findings. We will present the data of our laboratory experiments along with an analysis of parameter uncertainty.
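DREAM is an adaptive multi-chain MCMC sampler; the accept/reject core it shares with simpler samplers can be sketched with a random-walk Metropolis toy, where a standard normal target stands in for the transport-parameter posterior (all values here are illustrative, not the study's model):

```python
import math
import random

def metropolis(logpdf, x0, n, step=1.0, seed=1):
    """Random-walk Metropolis sampler: a toy stand-in for the adaptive
    multi-chain DREAM algorithm named in the abstract."""
    rng = random.Random(seed)
    x, lp = x0, logpdf(x0)
    samples = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lp_cand = logpdf(cand)
        # accept with probability min(1, pi(cand)/pi(x))
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Standard normal "posterior" for a single transport parameter.
chain = metropolis(lambda v: -0.5 * v * v, x0=0.0, n=20000)
posterior_mean = sum(chain) / len(chain)
```

The spread of the chain, not just its mean, is what carries the parameter-uncertainty statement the abstract refers to.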
Experimental study of geo-acoustic inversion uncertainty due to ocean sound-speed fluctuations.
Siderius, M.; Nielsen, P.L.; Sellschopp, J.; Snellen, M.; Simons, D.G.
2001-01-01
Acoustic data measured in the ocean fluctuate due to the complex time-varying properties of the channel. When measured data are used for model-based, geo-acoustic inversion, how do acoustic fluctuations impact estimates for the seabed properties? In May 1999 SACLANT Undersea Research Center and
DEFF Research Database (Denmark)
Troldborg, Mads; Nowak, W.; Tuxen, N.
2010-01-01
The estimation of mass discharges from contaminated sites is valuable when evaluating the potential risk to down-gradient receptors, when assessing the efficiency of a site remediation, or when determining the degree of natural attenuation. Given the many applications of mass discharge estimation ... for each of the conceptual models considered. The probability distribution of mass discharge is obtained by combining all ensembles via Bayesian model averaging (BMA). The method was applied to a trichloroethylene-contaminated site located in northern Copenhagen. Four essentially different conceptual models based on two source zone...
Energy Technology Data Exchange (ETDEWEB)
Rearden, Bradley T [ORNL; Duhamel, Isabelle [Institut de Radioprotection et de Surete Nucleaire; Letang, Eric [Institut de Radioprotection et de Surete Nucleaire
2009-01-01
New TSUNAMI tools of SCALE 6, TSURFER and TSAR, are demonstrated to examine the bias effects of small-worth test materials, relative to reference experiments. TSURFER is a data adjustment bias and bias uncertainty assessment tool, and TSAR computes the sensitivity of the change in reactivity between two systems to the cross-section data common to their calculation. With TSURFER, it is possible to examine biases and bias uncertainties in fine detail. For replacement experiments, the application of TSAR to TSUNAMI-3D sensitivity data for pairs of experiments allows the isolation of sources of bias that could otherwise be obscured by materials with more worth in an individual experiment. The application of TSUNAMI techniques in the design of nine reference experiments for the MIRTE program will allow application of these advanced techniques to data acquired in the experimental series. The validation of all materials in a complex criticality safety application likely requires consolidating information from many different critical experiments. For certain materials, such as structural materials or fission products, only a limited number of critical experiments are available, and the fuel and moderator compositions of the experiments may differ significantly from those of the application. In these cases, it is desirable to extract the computational bias of a specific material from an integral keff measurement and use that information to quantify the bias due to the use of the same material in the application system. Traditional parametric and nonparametric methods are likely to prove poorly suited for such a consolidation of specific data components from a diverse set of experiments. An alternative choice for consolidating specific data from numerous sources is a data adjustment tool, like the ORNL tool TSURFER (Tool for Sensitivity/Uncertainty analysis of Response Functionals using Experimental Results) from SCALE 6.1. However, even with TSURFER, it may be difficult to
Seo, Ye-Won; Kim, Hojin; Yun, Kyung-Sook; Lee, June-Yi; Ha, Kyung-Ja; Moon, Ja-Yeon
2014-11-01
How well the climate models simulate extreme temperature over East Asia and how the extreme indices would change under anthropogenic global warming are investigated. The indices studied include hot days (HD), tropical nights (TN), growing degree days (GDD), and cooling degree days (CDD) in summer and heating degree days (HDD) and frost days (FD) in winter. The representative concentration pathway 4.5 (RCP 4.5) experiments for the period of 2075-2099 are compared with historical simulations for the period of 1979-2005 from 15 coupled models participating in phase 5 of the Coupled Model Intercomparison Project (CMIP5). To optimally estimate future change and its uncertainty, groups of best models are selected based on previously suggested Taylor diagram, relative entropy, and probability density function (PDF) methods. Overall, the best models' multi-model ensemble based on Taylor diagrams has the lowest errors in reproducing temperature extremes in the present climate among the three methods. The best models selected by the three methods tend to project considerably different changes in the extreme indices from each other, indicating that the selection of reliable models is of critical importance for reducing uncertainties. All three groups of best models show a significant increase in the summer-based indices but a decrease in the winter-based indices. Over East Asia, the most significant increase is seen in the HD (336 ± 23.4% of current climate) and the most significant decrease appears in the HDD (82 ± 4.2%). It is suggested that the larger future change in the HD is found over the Southeastern China region, probably due to a higher local maximum temperature in the present climate. All of the indices show the largest uncertainty over Southeastern China, particularly in the TN (~3.9 times as large as the uncertainty over East Asia) and in the HD (~2.4). It is further noted that the TN reveals the largest uncertainty over three East Asian countries (~1.7 and 1.4 over Korea and
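The degree-day indices studied (GDD, CDD, HDD) are simple threshold accumulations over daily mean temperature; a sketch (the base temperatures and the week of data are illustrative, not the thresholds used in the study):

```python
def degree_days(daily_mean_temps, base, kind):
    """Accumulate degree days against a base temperature: 'cooling' and
    'growing' sum exceedances above base; 'heating' sums deficits below it."""
    if kind in ("cooling", "growing"):
        return sum(max(t - base, 0.0) for t in daily_mean_temps)
    if kind == "heating":
        return sum(max(base - t, 0.0) for t in daily_mean_temps)
    raise ValueError(kind)

# Hypothetical week of daily mean temperatures (deg C), base 18 deg C.
week = [12.0, 15.0, 19.0, 22.0, 25.0, 17.0, 10.0]
cdd = degree_days(week, base=18.0, kind="cooling")  # 1 + 4 + 7 = 12
hdd = degree_days(week, base=18.0, kind="heating")  # 6 + 3 + 1 + 8 = 18
```

Because the indices clip at the base temperature, projected shifts in the temperature distribution map nonlinearly onto them, which is part of why the model-selection step matters.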
DEFF Research Database (Denmark)
Hukkerikar, Amol; Kalakul, Sawitree; Sarup, Bent
2012-01-01
The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI)) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality ... , poly functional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox is used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22...
Khodabandeloo, Babak; Melvin, Dyan; Jo, Hongki
2017-11-17
Direct measurements of external forces acting on a structure are infeasible in many cases. The Augmented Kalman Filter (AKF) has several attractive features that can be utilized to solve the inverse problem of identifying applied forces, as it requires only the dynamic model and the measured responses of the structure at a few locations. But the AKF intrinsically suffers from numerical instabilities when accelerations, which are the most common response measurements in structural dynamics, are the only measured responses. Although displacement measurements can be used to overcome the instability issue, absolute displacement measurements are challenging and expensive for full-scale dynamic structures. In this paper, a reliable model-based data fusion approach to reconstruct dynamic forces applied to structures using heterogeneous structural measurements (i.e., strains and accelerations) in combination with the AKF is investigated. The way of incorporating multi-sensor measurements in the AKF is formulated. Then the formulation is implemented and validated through numerical examples considering possible uncertainties in numerical modeling and sensor measurement. A planar truss example was chosen to clearly explain the formulation, while the method and formulation are applicable to other structures as well.
Uncertainties of reverberation time estimation via adaptively identified room impulse responses.
Wu, Lifu; Qiu, Xiaojun; Burnett, Ian; Guo, Yecai
2016-03-01
This paper investigates reverberation time estimation methods which employ backward integration of adaptively identified room impulse responses (RIRs). Two kinds of conditions are considered: the first is the "ideal condition," where the anechoic and reverberant signals are both known a priori so that the RIRs can be identified using system identification methods. The second is that only the reverberant speech signal is available, and blind identification of the RIRs via dereverberation is employed for reverberation time estimation. Results show that under the "ideal condition," the average relative errors in 7 octave bands are less than 2% for white noise and 15% for speech, respectively, when both the anechoic and reverberant signals are available. In contrast, under the second condition, the average relative errors of the blindly identified RIR-based reverberation time estimation are around 20%-30%, except in the 63 Hz octave band. The fluctuation of reverberation times estimated under the second condition is more severe than that under the ideal condition, and the relative error for low-frequency octave bands is larger than that for high octave bands under both conditions.
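Backward integration of an identified RIR is the classical Schroeder method; a sketch on a synthetic exponential decay with a known RT60 of 1 s (the sampling rate, fit range, and synthetic RIR are illustrative choices, not the paper's setup):

```python
import math

def rt60_from_rir(rir, fs, fit_db=(-5.0, -25.0)):
    """Schroeder backward integration of an RIR, then a least-squares line
    over a decay range (here -5 to -25 dB, extrapolated to 60 dB)."""
    energy = [x * x for x in rir]
    total, sch = 0.0, []
    for e in reversed(energy):          # reverse cumulative integration
        total += e
        sch.append(total)
    sch.reverse()
    curve = [10.0 * math.log10(s / sch[0]) for s in sch]
    pts = [(i / fs, d) for i, d in enumerate(curve) if fit_db[1] <= d <= fit_db[0]]
    n = len(pts)
    mx = sum(t for t, _ in pts) / n
    my = sum(d for _, d in pts) / n
    slope = sum((t - mx) * (d - my) for t, d in pts) / \
        sum((t - mx) ** 2 for t, _ in pts)  # decay rate in dB/s
    return -60.0 / slope

fs = 1000
# Synthetic RIR whose energy decays exactly 60 dB per second.
rir = [10 ** (-3.0 * i / fs) for i in range(2 * fs)]
rt = rt60_from_rir(rir, fs)
```

For adaptively identified RIRs the tail is noisy, which is where the paper's relative-error figures come from; truncating the integration or varying the fit range changes the estimate.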
Berne, A.D.; Uijlenhoet, R.
2007-01-01
Microwave links can be used to estimate the path-averaged rain rate along the link when precipitation occurs. They take advantage of the near proportionality between the specific attenuation affecting the link signal and the rain rate. This paper deals with the influence of the spatial variability
A chance constraint estimation approach to optimizing resource management under uncertainty
Michael Bevers
2007-01-01
Chance-constrained optimization is an important method for managing risk arising from random variations in natural resource systems, but the probabilistic formulations often pose mathematical programming problems that cannot be solved with exact methods. A heuristic estimation method for these problems is presented that combines a formulation for order statistic...
How to make 137Cs erosion estimation more useful: An uncertainty perspective
The cesium-137 technique has been widely used in the past 50 years to provide quantitative soil redistribution estimates at a point scale. Recently its usefulness has been challenged by a few researchers questioning the validity of the key assumption that the spatial distribution of fallout cesium-...
Uncertainties in Instantaneous Rainfall Rate Estimates: Satellite vs. Ground-Based Observations
Amitai, E.; Huffman, G. J.; Goodrich, D. C.
2012-12-01
High-resolution precipitation intensities are significant in many fields. For example, hydrological applications such as flood forecasting, runoff accommodation, erosion prediction, and urban hydrological studies depend on an accurate representation of the rainfall that does not infiltrate the soil, which is controlled by the rain intensities. Changes in the rain rate PDF over long periods are important for climate studies. Are our estimates accurate enough to detect such changes? While most evaluation studies focus on the accuracy of rainfall accumulation estimates, evaluation of instantaneous rainfall intensity estimates is relatively rare. Can a spaceborne radar help in assessing ground-based radar estimates of precipitation intensities, or is it the other way around? In this presentation we will provide some insight on the relative accuracy of instantaneous precipitation intensity fields from satellite and ground-based observations. We will examine satellite products such as those from the TRMM Precipitation Radar and those from several passive microwave imagers and sounders by comparing them with advanced high-resolution ground-based products taken at overpass time (snapshot comparisons). The ground-based instantaneous rain rate fields are based on in situ measurements (i.e., the USDA/ARS Walnut Gulch dense rain gauge network), remote sensing observations (i.e., the NOAA/NSSL NMQ/Q2 radar-only national mosaic), and multi-sensor products (i.e., high-resolution gauge adjusted radar national mosaics, which we have developed by applying a gauge correction on the Q2 products).
Uncertainty in eddy covariance flux estimates resulting from spectral attenuation [Chapter 4
W. J. Massman; R. Clement
2004-01-01
Surface exchange fluxes measured by eddy covariance tend to be underestimated as a result of limitations in sensor design, signal processing methods, and finite flux-averaging periods. But careful system design, modern instrumentation, and appropriate data processing algorithms can minimize these losses, which, if not too large, can be estimated and corrected using...
Zastrau, David
2017-01-01
Wind drives in combination with weather routing can lower the fuel consumption of cargo ships significantly. For this reason, the author describes a mathematical method based on quantile regression for a probabilistic estimate of the wind propulsion force on a ship route.
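Quantile regression of the kind described minimizes the pinball (quantile) loss rather than squared error; a sketch of that loss (the observations are illustrative numbers, not ship or wind data):

```python
def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss: asymmetric penalty whose minimizer over a
    constant prediction is the q-th quantile of the data."""
    total = 0.0
    for yt, yp in zip(y_true, y_pred):
        diff = yt - yp
        total += q * diff if diff >= 0 else (q - 1.0) * diff
    return total / len(y_true)

obs = [1.0, 2.0, 3.0, 4.0, 100.0]
# For q = 0.5 the median (3.0) scores better than the mean (22.0) here,
# illustrating the robustness that motivates probabilistic wind estimates.
loss_median = pinball_loss(obs, [3.0] * 5, q=0.5)
loss_mean = pinball_loss(obs, [22.0] * 5, q=0.5)
```

Setting q to, say, 0.1 or 0.9 yields lower or upper bounds on the propulsion force, which is what makes the estimate probabilistic.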
Breugel, van M.; Ransijn, J.; Craven, D.; Bongers, F.; Hall, J.
2011-01-01
Secondary forests are a major terrestrial carbon sink and reliable estimates of their carbon stocks are pivotal for understanding the global carbon balance and initiatives to mitigate CO2 emissions through forest management and reforestation. A common method to quantify carbon stocks in forests is
Shindo, J.; Bregt, A.K.; Hakamata, T.
1995-01-01
A simplified steady-state mass balance model for estimating critical loads was applied to a test area in Japan to evaluate its applicability. Three criteria for acidification limits were used. Mean values and spatial distribution patterns of critical load values calculated by these criteria differed
Uncertainty in estimated values of forestry project: a case study of ...
African Journals Online (AJOL)
The information obtained was analyzed using Net Present Value, Benefit-Cost Ratio, Economic Rate of Return and Sensitivity Analysis. The results of this study indicate that the NPV and B/C ratio were sensitive to increases in the discount factor. The values of estimates for a direct and taungya plantation at Ago-Owu forest ...
Werner, Micha; Westerhoff, Rogier; Moore, Catherine
2017-04-01
Quantitative estimates of recharge due to precipitation excess are an important input to determining sustainable abstraction of groundwater resources, as well as providing one of the boundary conditions required for numerical groundwater modelling. Simple water balance models are widely applied for calculating recharge. In these models, precipitation is partitioned between different processes and stores, including surface runoff and infiltration, storage in the unsaturated zone, evaporation, capillary processes, and recharge to groundwater. Clearly the estimation of recharge amounts will depend on the estimation of precipitation volumes, which may vary depending on the source of precipitation data used. However, the partitioning between the different processes is in many cases governed by (variable) intensity thresholds. This means that the estimates of recharge will be sensitive not only to input parameters such as soil type, texture, land use, and potential evaporation, but mainly to the precipitation volume and intensity distribution. In this paper we explore the sensitivity of recharge estimates to differences in precipitation volume and intensity distribution in the rainfall forcing over the Canterbury region in New Zealand. We compare recharge rates and volumes using a simple water balance model forced with rainfall and evaporation data from the NIWA Virtual Climate Station Network (VCSN) data (considered as the reference dataset); the ERA-Interim/WATCH dataset at 0.25 degrees and 0.5 degrees resolution; the TRMM-3B42 dataset; the CHIRPS dataset; and the recently released MSWEP dataset. Recharge rates are calculated at a daily time step over the 14-year period from 2000 to 2013 for the full Canterbury region, as well as at eight selected points distributed over the region. Lysimeter data with observed estimates of recharge are available at four of these points, as well as recharge estimates from the NGRM model, an independent model
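The role of intensity thresholds described in this abstract can be illustrated with a toy daily bucket model: two forcing series with identical rainfall volume but different intensity distributions produce different recharge/runoff splits. All parameter values (storage capacity, infiltration capacity, initial storage) are arbitrary illustrative numbers, not taken from the study.

```python
import numpy as np

def bucket_recharge(rain, pet, s_max=150.0, infil_cap=40.0, s0=75.0):
    """Toy daily soil-water bucket (all quantities in mm; parameter values
    are illustrative). Rain above the infiltration capacity runs off; storage
    overflow above s_max becomes groundwater recharge."""
    s = s0
    recharge = np.zeros_like(rain)
    runoff = np.zeros_like(rain)
    for t, (p, e) in enumerate(zip(rain, pet)):
        infil = min(p, infil_cap)     # intensity threshold
        runoff[t] = p - infil
        s += infil
        s -= min(e, s)                # actual ET limited by storage
        if s > s_max:
            recharge[t] = s - s_max
            s = s_max
    return recharge, runoff

# Identical 100 mm rainfall volume, different intensity distributions
pet = np.zeros(10)
r_d, q_d = bucket_recharge(np.full(10, 10.0), pet)              # steady drizzle
r_s, q_s = bucket_recharge(np.array([100.0] + [0.0] * 9), pet)  # one storm
print(r_d.sum(), q_d.sum())   # drizzle: recharge, no runoff
print(r_s.sum(), q_s.sum())   # storm: runoff, no recharge
```

The same precipitation volume yields recharge in one case and only runoff in the other, which is why recharge estimates are sensitive to the intensity distribution of the forcing dataset and not just to its volume.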
Energy Technology Data Exchange (ETDEWEB)
Shahnam, Mehrdad [National Energy Technology Lab. (NETL), Morgantown, WV (United States). Research and Innovation Center, Energy Conversion Engineering Directorate; Gel, Aytekin [ALPEMI Consulting, LLC, Phoenix, AZ (United States); Subramaniyan, Arun K. [GE Global Research Center, Niskayuna, NY (United States); Musser, Jordan [National Energy Technology Lab. (NETL), Morgantown, WV (United States). Research and Innovation Center, Energy Conversion Engineering Directorate; Dietiker, Jean-Francois [West Virginia Univ. Research Corporation, Morgantown, WV (United States)
2017-10-02
Adequate assessment of the uncertainties in modeling and simulation is becoming an integral part of simulation-based engineering design. The goal of this study is to demonstrate the application of a non-intrusive Bayesian uncertainty quantification (UQ) methodology in multiphase (gas-solid) flows with experimental and simulation data, as part of our research efforts to determine the approach best suited for UQ of a bench-scale fluidized bed gasifier. UQ analysis was first performed on the available experimental data. Global sensitivity analysis performed as part of the UQ analysis shows that among the three operating factors, the steam-to-oxygen ratio has the most influence on syngas composition in the bench-scale gasifier experiments. An analysis of the forward propagation of uncertainties was performed, and the results show that an increase in the steam-to-oxygen ratio leads to an increase in H2 mole fraction and a decrease in CO mole fraction. These findings are in agreement with the ANOVA analysis performed in the reference experimental study. A further contribution, in addition to the UQ analysis, is an optimization-based approach to identify the next best set of experimental samples, should the possibility for additional experiments arise. The surrogate models constructed as part of the UQ analysis are employed to improve the information gain and make incremental recommendations. In the second step, a series of simulations was carried out with the open-source computational fluid dynamics software MFiX to reproduce the experimental conditions, in which three operating factors, i.e., coal flow rate, coal particle diameter, and steam-to-oxygen ratio, were systematically varied to understand their effect on syngas composition. Bayesian UQ analysis was performed on the numerical results. As part of the Bayesian UQ analysis, a global sensitivity analysis was performed based on the simulation results, which shows
Fragoulis, George; Merli, Annalisa; Reeves, Graham; Meregalli, Giovanna; Stenberg, Kristofer; Tanaka, Taku; Capri, Ettore
2011-06-01
Quinoxyfen is a fungicide of the phenoxyquinoline class used to control powdery mildew, Uncinula necator (Schw.) Burr. Owing to its high persistence and strong sorption in soil, it could represent a risk for soil organisms if they are exposed at ecologically relevant concentrations. The objective of this paper is to predict the bioconcentration factors (BCFs) of quinoxyfen in earthworms, selected as a representative soil organism, and to assess the uncertainty in the estimation of this parameter. Three fields in each of four vineyards in southern and northern Italy were sampled over two successive years. The measured BCFs varied over time, possibly owing to seasonal changes and the consequent changes in the behaviour and ecology of earthworms. Quinoxyfen did not accumulate in soil, as the mean soil concentrations at the end of the 2 year monitoring period ranged from 9.16 to 16.0 µg kg⁻¹ dw for the Verona province and from 23.9 to 37.5 µg kg⁻¹ dw for the Taranto province, with up to eight applications per season. To assess the uncertainty of the BCF in earthworms, a probabilistic approach was used, firstly by building, with weighted bootstrapping techniques, a generic probability density function (PDF) accounting for variability and incompleteness of knowledge. The generic PDF was then used to derive prior distribution functions, which, by application of Bayes' theorem, were updated with the new measurements, and a posterior distribution was finally created. The study is a good example of probabilistic risk assessment. The posterior means of the mean and SD of log BCFworm (2.06, 0.91) are the 'best estimate' values. Further risk assessment of quinoxyfen and other phenoxyquinoline fungicides, and realistic representative scenarios for modelling exercises required for future authorization and post-authorization requirements, can now use this value as input. Copyright © 2011 Society of Chemical Industry.
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration uncertainty was verified from independent measurements of the same sample by demonstrating statistical control of analytical results and the absence of bias. The proposed method takes into account uncertainties of the measurement, as well as of the amount of calibrant. It is applicable to all types...
Importance of tree basic density in biomass estimation and associated uncertainties
DEFF Research Database (Denmark)
Njana, Marco Andrew; Meilby, Henrik; Eid, Tron
2016-01-01
Key message: Aboveground and belowground tree basic densities varied between and within the three mangrove species. If appropriately determined and applied, basic density may be useful in estimation of tree biomass. Predictive accuracy of the common (i.e. multi-species) models including aboveground/belowground basic density was better than for common models developed without either basic density. However, species-specific models developed without basic density performed better than common models including basic density. Context: Reducing Emissions from Deforestation and forest degradation and the role of sustainable forest management, conservation and enhancement of carbon stocks (REDD+) initiatives offer an opportunity for sustainable management of forests including mangroves. In carbon accounting for REDD+, it is required that carbon estimates prepared for monitoring, reporting and verification schemes...
DEFF Research Database (Denmark)
Jiménez-Alfaro, Borja; Draper, David; Nogues, David Bravo
2012-01-01
and maximum entropy modeling to assess whether different sampling (expert versus systematic surveys) may affect AOO estimates based on habitat suitability maps, and the differences between such measurements and traditional coarse-grid methods. Fine-scale models performed robustly and were not influenced by survey protocols, providing similar habitat suitability outputs with high spatial agreement. Model-based estimates of potential AOO were significantly smaller than AOO measures obtained from coarse-scale grids, even if the first were obtained from conservative thresholds based on the Minimal Predicted Area (MPA). As defined here, the potential AOO provides spatially-explicit measures of species ranges which are permanent in time and scarcely affected by sampling bias. The overestimation of these measures may be reduced using higher thresholds of habitat suitability, but standard rules as the MPA...
Phase-Retrieval Uncertainty Estimation and Algorithm Comparison for the JWST-ISIM Test Campaign
Aronstein, David L.; Smith, J. Scott
2016-01-01
Phase retrieval, the process of determining the exit-pupil wavefront of an optical instrument from image-plane intensity measurements, is the baseline methodology for characterizing the wavefront for the suite of science instruments (SIs) in the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST). JWST is a large, infrared space telescope with a 6.5-meter diameter primary mirror. JWST is currently NASA's flagship mission and will be the premier space observatory of the next decade. ISIM contains four optical benches with nine unique instruments, including redundancies. ISIM was characterized at the Goddard Space Flight Center (GSFC) in Greenbelt, MD in a series of cryogenic vacuum tests using a telescope simulator. During these tests, phase-retrieval algorithms were used to characterize the instruments. The objective of this paper is to describe the Monte-Carlo simulations that were used to establish uncertainties (i.e., error bars) for the wavefronts of the various instruments in ISIM. Multiple retrieval algorithms were used in the analysis of ISIM phase-retrieval focus-sweep data, including an iterative-transform algorithm and a nonlinear optimization algorithm. These algorithms emphasize the recovery of numerous optical parameters, including low-order wavefront composition described by Zernike polynomial terms and high-order wavefront described by a point-by-point map, location of instrument best focus, focal ratio, exit-pupil amplitude, the morphology of any extended object, and optical jitter. The secondary objective of this paper is to report on the relative accuracies of these algorithms for the ISIM instrument tests, and a comparison of their computational complexity and their performance on central and graphics processing unit clusters. From a phase-retrieval perspective, the ISIM test campaign includes a variety of source illumination bandwidths, various image-plane sampling criteria above and below the Nyquist-Shannon
Rauniyar, S. P.; Protat, A.; Kanamori, H.
2017-05-01
This study investigates the regional and seasonal rainfall rate retrieval uncertainties within nine state-of-the-art satellite-based rainfall products over the Maritime Continent (MC) region. The results show consistently larger differences in mean daily rainfall among products over land, especially over mountains and along coasts, than over ocean, by about 20% for low to medium rain rates and 5% for heavy rain rates. However, rainfall differences among the products do not exhibit any seasonal dependency over either surface type (land or ocean) of the MC region. The differences between products largely depend on the rain rate itself, with a factor-of-2 difference for light rain and 30% for intermediate and high rain rates over ocean. The rain-rate products dominated by microwave measurements showed less spread among themselves over ocean than the products dominated by infrared measurements. Conversely, over land, the rain gauge-adjusted post-real-time products dominated by microwave measurements produced the largest spreads, owing to the use of different gauge analyses for the bias corrections. Intercomparisons of the rainfall characteristics of these products revealed large discrepancies in detecting the frequency and intensity of rainfall. These satellite products are finally evaluated at subdaily, daily, monthly, intraseasonal, and seasonal temporal scales against high-quality gridded rainfall observations in the Sarawak (Malaysia) region for the 4-year period 2000-2003. No single satellite-based rainfall product clearly outperforms the others at all temporal scales. General guidelines are provided for selecting the product best suited to a particular application and/or temporal resolution.
2014-01-01
Genetic sequence data provide information about the distances between species or branch lengths in a phylogeny, but not about the absolute divergence times or the evolutionary rates directly. Bayesian methods for dating species divergences estimate times and rates by assigning priors on them. In particular, the prior on times (node ages on the phylogeny) incorporates information in the fossil record to calibrate the molecular tree. Because times and rates are confounded, our posterior time es...
Almosallam, Ibrahim A.; Jarvis, Matt J.; Roberts, Stephen J.
2016-01-01
The next generation of cosmology experiments will be required to use photometric redshifts rather than spectroscopic redshifts. Obtaining accurate and well-characterized photometric redshift distributions is therefore critical for Euclid, the Large Synoptic Survey Telescope and the Square Kilometre Array. However, determining accurate variance predictions alongside single point estimates is crucial, as they can be used to optimize the sample of galaxies for the specific experiment (e.g. weak ...
Estimating uncertainty of alcohol-attributable fractions for infectious and chronic diseases
Directory of Open Access Journals (Sweden)
Frick Hannah
2011-04-01
Full Text Available Abstract Background Alcohol is a major risk factor for burden of disease and injuries globally. This paper presents a systematic method to compute the 95% confidence intervals of alcohol-attributable fractions (AAFs with exposure and risk relations stemming from different sources. Methods The computation was based on previous work done on modelling drinking prevalence using the gamma distribution and the inherent properties of this distribution. The Monte Carlo approach was applied to derive the variance for each AAF by generating random sets of all the parameters. A large number of random samples were thus created for each AAF to estimate variances. The derivation of the distributions of the different parameters is presented as well as sensitivity analyses which give an estimation of the number of samples required to determine the variance with predetermined precision, and to determine which parameter had the most impact on the variance of the AAFs. Results The analysis of the five Asian regions showed that 150 000 samples gave a sufficiently accurate estimation of the 95% confidence intervals for each disease. The relative risk functions accounted for most of the variance in the majority of cases. Conclusions Within reasonable computation time, the method yielded very accurate values for variances of AAFs.
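A minimal sketch of the Monte Carlo approach described above: draw random parameter sets (prevalences and relative risks), compute one AAF per draw, and read the 95% confidence interval off the empirical percentiles. The categorical AAF formula and all input values here are illustrative assumptions, not the paper's gamma-distribution exposure model or its Asian-region data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 150_000  # sample count the study found sufficient for stable 95% CIs

# Hypothetical inputs: prevalences of two exposure categories and their
# relative risks vs. abstainers; the standard errors are invented.
p_mean  = np.array([0.10, 0.05])          # moderate, heavy drinking
p_se    = np.array([0.01, 0.005])
lnrr    = np.log(np.array([1.3, 2.5]))    # log relative risks
lnrr_se = np.array([0.05, 0.10])

# One random parameter set per row; one AAF per draw
p  = rng.normal(p_mean, p_se, size=(n, 2)).clip(0, 1)
rr = np.exp(rng.normal(lnrr, lnrr_se, size=(n, 2)))
num = (p * (rr - 1)).sum(axis=1)
aaf = num / (num + 1)                     # categorical Levin-type AAF

lo, hi = np.percentile(aaf, [2.5, 97.5])
print(f"AAF = {aaf.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The empirical spread of the simulated AAFs directly yields the variance and confidence limits; widening the relative-risk uncertainty widens the interval far more than widening the prevalence uncertainty, consistent with the abstract's finding that the risk functions dominate the variance.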
Dettmer, J.; Hossen, M. J.; Cummins, P. R.
2014-12-01
This paper develops a Bayesian inversion to infer spatio-temporal parameters of the tsunami source (sea surface) due to megathrust earthquakes. To date, tsunami-source parameter uncertainties are poorly studied. In particular, the effects of parametrization choices (e.g., discretisation, finite rupture velocity, dispersion) on uncertainties have not been quantified. This approach is based on a trans-dimensional self-parametrization of the sea surface, avoids regularization, and provides rigorous uncertainty estimation that accounts for model-selection ambiguity associated with the source discretisation. The sea surface is parametrized using self-adapting irregular grids which match the local resolving power of the data and provide parsimonious solutions for complex source characteristics. Finite and spatially variable rupture velocity fields are addressed by obtaining causal delay times from the Eikonal equation. Data are considered from ocean-bottom pressure and coastal wave gauges. Data predictions are based on Green-function libraries computed from ocean-basin scale tsunami models for cases that include/exclude dispersion effects. Green functions are computed for elementary waves of Gaussian shape and grid spacing which is below the resolution of the data. The inversion is applied to tsunami waveforms from the great Mw=9.0 2011 Tohoku-Oki (Japan) earthquake. Posterior results show a strongly elongated tsunami source along the Japan trench, as obtained in previous studies. However, we find that the tsunami data is fit with a source that is generally simpler than obtained in other studies, with a maximum amplitude less than 5 m. In addition, the data are sensitive to the spatial variability of rupture velocity and require a kinematic source model to obtain satisfactory fits which is consistent with other work employing linear multiple time-window parametrizations.
Cook, Bruce Douglas
NASA satellites Terra and Aqua orbit the Earth every 100 minutes and collect data that are used to compute an 8-day time series of gross photosynthesis and annual plant production for each square kilometer of the Earth's surface. This is a remarkable technological and scientific achievement that permits continuous monitoring of plant production and quantification of CO2 fixed by the terrestrial biosphere. It also allows natural resource scientists and practitioners to identify global trends associated with land cover/use and climate change. Satellite-derived estimates of photosynthesis and plant production from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) generally agree with independent measurements from validation sites across the globe, but local biases and spatial uncertainties exist at the regional scale. This dissertation evaluates three sources of uncertainty associated with MODIS algorithms in the Great Lakes Region, and evaluates LiDAR (Light Detection and Ranging) remote sensing as a method for improving model inputs. Chapter 1 examines the robustness of model parameters and errors resulting from canopy disturbances, which were assessed by inversion of flux tower observations during a severe outbreak of forest tent caterpillars. Chapter 2 examines model logic errors in wetland ecosystems, focusing on surface water table fluctuations as a potential constraint on photosynthesis that is not accounted for in the MODIS algorithm. Chapter 3 examines errors associated with pixel size and poor state data, using fine spatial resolution LiDAR and multispectral satellite data to derive estimates of plant production across a heterogeneous landscape in northern Wisconsin. Together, these papers indicate that light- and carbon-use efficiency models driven by remote sensing and surface meteorology data are capable of providing accurate estimates of plant production within stands and across landscapes of the Great Lakes Region. It is demonstrated that model
Reservoir capacity estimates in shale plays based on experimental adsorption data
Ngo, Tan
from different measurement techniques using representative fluids (such as CH4 and CO2) at elevated pressures, and the adsorbed density can range anywhere between the liquid and the solid state of the adsorbate. Whether these discrepancies are associated with the inherent heterogeneity of mudrocks and/or with poor data quality requires more experiments under well-controlled conditions. Nevertheless, it has been found in this study that methane GIP estimates can vary between 10-45% and 10-30%, depending on whether the free or the total amount of gas is considered. Accordingly, CO2 storage estimates range between 30-90% and 15-50%, due to the larger adsorption capacity and gas density at similar pressure and temperature conditions. A manometric system has been designed and built that allows measuring the adsorption of supercritical fluids in microporous materials. Preliminary adsorption tests have been performed using a microporous 13X zeolite and CO2 as the adsorbing gas at temperatures of 25 °C and 35 °C and at pressures up to 500 psi. Under these conditions, adsorption is quantified with a precision of +/- 3%. However, relative differences of up to 15-20% have been observed with respect to data published in the literature on the same adsorbent at similar experimental conditions. While this discrepancy cannot be fully explained by uncertainty analysis, it can be reduced by improving experimental practice, including the application of a higher regeneration temperature for the adsorbent, longer equilibrium times, and careful flushing of the system between the various experimental steps. Based on the results for 13X zeolite, virtual tests have been conducted to predict the performance of the manometric system in measuring adsorption on less adsorbing materials, such as mudrocks. The results show that uncertainties in the estimated adsorbed amount are much more significant in shale material and increase with increasing pressure. In fact, relative
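The manometric mass balance underlying such measurements, and the reason relative uncertainties blow up for weakly adsorbing shale-like samples, can be sketched as follows. The ideal-gas compressibility factors and every numerical value below are assumptions for illustration, not the thesis's apparatus parameters.

```python
import numpy as np

R = 8.314  # J/(mol K)

def adsorbed_moles(p1, p2, v_ref, v_cell, T, z1=1.0, z2=1.0):
    """Excess adsorbed amount (mol) from one manometric dosing step.
    p1: reference-cell pressure before expansion (Pa); p2: equilibrium
    pressure after expansion into the sample cell (Pa). z1, z2 are
    compressibility factors (1.0 = ideal gas; a real equation of state
    is needed for CO2 at high pressure)."""
    n_dosed = p1 * v_ref / (z1 * R * T)            # moles released
    n_gas = p2 * (v_ref + v_cell) / (z2 * R * T)   # moles left in gas phase
    return n_dosed - n_gas

def relative_uncertainty(p1, p2, v_ref, v_cell, T, dp, dv, dT):
    """First-order propagation of pressure, volume and temperature errors
    through the mass balance, using numerical gradients."""
    n = adsorbed_moles(p1, p2, v_ref, v_cell, T)
    eps = 1e-6
    var = 0.0
    for i, (x, s) in enumerate(zip([p1, p2, v_ref, v_cell, T],
                                   [dp, dp, dv, dv, dT])):
        args = [p1, p2, v_ref, v_cell, T]
        args[i] = x * (1 + eps)
        g = (adsorbed_moles(*args) - n) / (x * eps)   # sensitivity dn/dx
        var += (g * s) ** 2
    return np.sqrt(var) / n

# Strongly adsorbing sample (large pressure drop) vs. shale-like sample
# (small drop): same instrument errors, very different relative error.
ru_strong = relative_uncertainty(5e5, 2.00e5, 1e-4, 1e-4, 298.0, 500.0, 1e-7, 0.1)
ru_weak   = relative_uncertainty(5e5, 2.45e5, 1e-4, 1e-4, 298.0, 500.0, 1e-7, 0.1)
```

With identical instrument precision, the weakly adsorbing case carries roughly an order of magnitude larger relative uncertainty, because the adsorbed amount is the small difference of two large gas-phase terms; this mirrors the abstract's observation that errors are much more significant in shale material.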
Directory of Open Access Journals (Sweden)
Alvaro José Abackerli
2007-04-01
Full Text Available Reliability studies and accelerated life testing have been increasingly used by companies due to their importance in product development. Accelerated life testing consists of operating the product under stress in order to evaluate its probability of failure over time, and from this determining its chance of surviving a given period of use, called the mission, which is often associated with the warranty period. In accelerated tests, the stress loads are treated as variables whose values are nominally defined; the test procedures therefore account neither for the uncertainties inherent in the experimental arrangement nor for their influence on the test results. In this work, Monte Carlo methods and real accelerated life testing data are used to illustrate the effects of these uncertainties on the predicted life of relays. They are also used to show the impact of experimental uncertainty on managerial decisions about product life made during its development. The results indicate that the uncertainty present in accelerated life tests can be significant, demonstrating its relevance both in product development and in the definition of warranty periods.
Directory of Open Access Journals (Sweden)
Patel S.M.
2013-01-01
Full Text Available A free-space transmission method has been used for reliable shielding effectiveness (SE) measurement of easily available textile materials. Textiles with three different yarn densities were studied for their shielding effectiveness with the help of a vector network analyzer and two laboratory-calibrated X-band horn antennas. Expressions for uncertainty estimation have been derived in accordance with the present free-space measurement setup for the calculated SE values. The measurements have shown that electromagnetic energy can be shielded by up to 16.24 dB, with measurement uncertainty less than 0.21 dB, in the 8.2 to 12.4 GHz range by a 160.85 μm textile. Thus, a thin textile with a high yarn density can provide higher shielding, and this property mainly depends on its intrinsic structure, frequency range and thickness. This study demonstrates the potential of such materials as very cost-effective shielding materials at microwave frequencies with some modifications.
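A hedged sketch of how SE and a Type A standard uncertainty could be computed from repeated free-space |S21| readings, assuming the empty-path (thru) response has already been normalised out by calibration. The readings below are invented values chosen only to land near the 16 dB scale quoted above; they are not the paper's data.

```python
import numpy as np

def shielding_effectiveness_db(s21_linear):
    """SE in dB from the calibrated linear |S21| of the loaded path:
    SE = -20 log10 |S21| once the empty-path response is normalised out."""
    return -20.0 * np.log10(np.clip(s21_linear, 1e-6, None))

# Hypothetical repeated |S21| readings for one textile at one frequency
rng = np.random.default_rng(1)
s21 = rng.normal(0.154, 0.002, size=20)     # linear magnitude, invented
se = shielding_effectiveness_db(s21)
mean_se = se.mean()
u_se = se.std(ddof=1) / np.sqrt(len(se))    # Type A standard uncertainty
print(f"SE = {mean_se:.2f} dB +/- {u_se:.2f} dB")
```

Repeating this at each frequency point across 8.2 to 12.4 GHz would yield an SE curve with per-frequency error bars; a full free-space uncertainty budget would also fold in Type B terms (antenna alignment, sample positioning, calibration residuals) that this sketch omits.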
Min, Zhe; Ren, Hongliang; Meng, Max Q-H
2017-10-01
Accurate understanding of surgical tool-tip tracking error is important for decision making in image-guided surgery. In this Letter, the authors present a novel method to estimate and model surgical tool-tip tracking error that takes pivot calibration uncertainty into consideration. First, a new type of error, referred to as total target registration error (TTRE), is formally defined for a single rigid registration. Target localisation error (TLE) in the two spaces to be registered is considered in the proposed TTRE formulation. With a first-order approximation in fiducial localisation error (FLE) or TLE magnitude, TTRE statistics (mean, covariance matrix and root-mean-square (RMS)) are then derived. Second, surgical tool-tip tracking error in the optical tracking system (OTS) frame is formulated using TTRE when pivot calibration uncertainty is considered. Finally, TTRE statistics of the tool-tip in the OTS frame are propagated relative to a coordinate reference frame (CRF) rigid-body. Monte Carlo simulations are conducted to validate the proposed error model. The percentage of cases passing statistical tests that there is no difference between the simulated and theoretical mean and covariance matrix of the tool-tip tracking error in CRF space is more than 90% in all test cases. The RMS percentage difference between the simulated and theoretical tool-tip tracking error in CRF space is within 5% in all test cases.
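The Monte Carlo validation step can be illustrated with a simplified simulation: perturb the tool's marker fiducials with isotropic noise, re-fit the rigid pose, and accumulate the induced tool-tip error. This sketch omits the pivot-calibration and CRF-propagation terms of the authors' TTRE model; the marker geometry, tip offset and 0.25 mm FLE are invented values.

```python
import numpy as np

def kabsch(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

rng = np.random.default_rng(7)
markers = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
tip = np.array([120.0, 0.0, 0.0])   # tip offset from the marker body (mm)
fle_sd = 0.25                        # per-axis fiducial noise, mm (assumed)

errs = []
for _ in range(5000):
    noisy = markers + rng.normal(0, fle_sd, markers.shape)
    R, t = kabsch(markers, noisy)    # tracked pose under fiducial noise
    errs.append(np.linalg.norm(R @ tip + t - tip))
rms = np.sqrt(np.mean(np.square(errs)))
print(f"tool-tip tracking RMS = {rms:.3f} mm")
```

Because the tip sits well outside the marker cluster, small rotational errors in the fitted pose are amplified by the lever arm, so the tip RMS exceeds the per-marker noise; a first-order TTRE-style formula would predict this same amplification analytically, which is what the Monte Carlo runs in the paper verify.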
Energy Technology Data Exchange (ETDEWEB)
Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jakeman, John Davis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stephens, John Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vigil, Dena M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wildey, Timothy Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bohnhoff, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hu, Kenneth T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dalbey, Keith R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bauman, Lara E [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hough, Patricia Diane [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been t
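A compact sketch of GLUE as named in the abstract above, applied to a synthetic first-order (log-linear) inactivation data set: Monte Carlo sample the rate constant, score each sample with an informal likelihood, keep "behavioral" samples above an acceptance threshold, and form likelihood-weighted uncertainty bounds. The data, likelihood shape, and threshold are all illustrative assumptions, not the study's MS2 measurements.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic decay data: first-order inactivation in log10 units
t_obs = np.array([0.0, 10.0, 20.0, 30.0, 45.0, 60.0])   # days
k_true = 0.05                                            # log10 day^-1
y_obs = -k_true * t_obs + rng.normal(0, 0.1, t_obs.size)

# GLUE: sample the rate, score each sample by an informal likelihood
k_samples = rng.uniform(0.0, 0.2, 20_000)
sse = ((y_obs[None, :] + np.outer(k_samples, t_obs)) ** 2).sum(axis=1)
like = np.exp(-sse / sse.min())            # informal likelihood measure
behavioral = like > 0.01 * like.max()      # acceptance threshold

# Likelihood-weighted posterior summary over behavioral samples
k_b, w = k_samples[behavioral], like[behavioral]
w = w / w.sum()
k_mean = np.sum(w * k_b)
order = np.argsort(k_b)
cdf = np.cumsum(w[order])
k_lo = k_b[order][np.searchsorted(cdf, 0.025)]
k_hi = k_b[order][np.searchsorted(cdf, 0.975)]
print(k_mean, (k_lo, k_hi))
```

With a well-constrained single-parameter model like this one, the GLUE weights concentrate near the least-squares optimum, which is why the study found GLUE and least-squares estimates of the inactivation parameters and their uncertainty to be very similar.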