Evaluating Prognostics Performance for Algorithms Incorporating Uncertainty Estimates
National Aeronautics and Space Administration — Uncertainty Representation and Management (URM) are an integral part of prognostic system development. As capabilities of prediction algorithms evolve, research...
Directory of Open Access Journals (Sweden)
K. J. Franz
2011-11-01
Full Text Available The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply the methods to evaluate and compare model ensembles produced from two different parameter uncertainty estimation methods: the Generalized Uncertainty Likelihood Estimator (GLUE) and the Shuffle Complex Evolution Metropolis (SCEM). Model ensembles are generated for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the Southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information in addition to what is already available as part of traditional model validation methodology and considers the entire ensemble or uncertainty range in the approach.
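One of the probabilistic verification metrics surveyed in work like this is the continuous ranked probability score (CRPS), which generalizes absolute error from a single forecast value to a full ensemble. The sketch below uses made-up numbers, not data from the study, and the standard ensemble identity CRPS = E|X - y| - 0.5 E|X - X'|:

```python
import numpy as np

def ensemble_crps(members, obs):
    """Continuous Ranked Probability Score of one ensemble forecast.

    Uses the identity CRPS = E|X - y| - 0.5 * E|X - X'|, with X and X'
    drawn independently from the ensemble.
    """
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))                               # accuracy
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))   # spread
    return term1 - term2

# A tight ensemble centred on the observation scores better (lower) than a
# broad one, which a deterministic score of the ensemble mean would miss.
sharp = ensemble_crps([9.8, 10.0, 10.2], obs=10.0)
broad = ensemble_crps([5.0, 10.0, 15.0], obs=10.0)
assert sharp < broad
```

Unlike error or bias applied to the ensemble mean, CRPS rewards both accuracy and sharpness of the whole distribution.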
Franz, K. J.; Hogue, T. S.
2011-11-01
Directory of Open Access Journals (Sweden)
S. P. Urbanski
2011-12-01
Full Text Available Biomass burning emission inventories serve as critical input for atmospheric chemical transport models that are used to understand the role of biomass fires in the chemical composition of the atmosphere, air quality, and the climate system. Significant progress has been achieved in the development of regional and global biomass burning emission inventories over the past decade using satellite remote sensing technology for fire detection and burned area mapping. However, agreement among biomass burning emission inventories is frequently poor. Furthermore, the uncertainties of the emission estimates are typically not well characterized, particularly at the spatio-temporal scales pertinent to regional air quality modeling. We present the Wildland Fire Emission Inventory (WFEI), a high resolution model for non-agricultural open biomass burning (hereafter referred to as wildland fires, WF) in the contiguous United States (CONUS). The model combines observations from the MODerate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra and Aqua satellites, meteorological analyses, fuel loading maps, an emission factor database, and fuel condition and fuel consumption models to estimate emissions from WF.
WFEI was used to estimate emissions of CO (E_{CO}) and PM_{2.5} (E_{PM2.5}) for the western United States from 2003–2008. The uncertainties in the inventory estimates of E_{CO} and E_{PM2.5} (u_{ECO} and u_{EPM2.5}, respectively) have been explored across spatial and temporal scales relevant to regional and global modeling applications. In order to evaluate the uncertainty in our emission estimates across multiple scales we used a figure of merit, the half mass uncertainty, ũ_{EX} (where X = CO or PM_{2.5}), defined such that, for a given aggregation level, 50% of total emissions occurred from elements with u_{EX} ≤ ũ_{EX}.
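The half mass uncertainty described above can be computed directly: sort the inventory elements by their uncertainty and find the smallest threshold below which at least half of the total emissions fall. A minimal sketch with made-up numbers (the function name and data are hypothetical, not from WFEI):

```python
import numpy as np

def half_mass_uncertainty(emissions, uncertainties):
    """Smallest threshold u~ such that elements with u <= u~ account
    for at least 50% of total emissions (hypothetical implementation)."""
    e = np.asarray(emissions, dtype=float)
    u = np.asarray(uncertainties, dtype=float)
    order = np.argsort(u)                      # least-uncertain elements first
    cum = np.cumsum(e[order])                  # emissions accumulated in that order
    idx = np.searchsorted(cum, 0.5 * e.sum())  # first crossing of the 50% mark
    return u[order][idx]

# Toy inventory: the two best-constrained cells already carry 60% of the mass.
print(half_mass_uncertainty([40.0, 10.0, 30.0, 20.0], [0.1, 0.9, 0.5, 0.3]))
```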
Hartmann, A. J.
2016-12-01
Heterogeneity is an intrinsic property of karst systems. It results in complex hydrological behavior that is characterized by an interplay of diffuse and concentrated flow and transport. In large-scale hydrological models, these processes are usually not considered. Instead, average or representative values are chosen for each of the simulated grid cells, omitting many aspects of their sub-grid variability. In karst regions, this may lead to unreliable predictions when those models are used for assessing future water resources availability, floods or droughts, or when they are used for recommendations for more sustainable water management. In this contribution I present a large-scale groundwater recharge model (0.25° x 0.25° resolution) that takes into account karst hydrological processes by using statistical distribution functions to express subsurface heterogeneity. The model is applied over Europe's and the Mediterranean's carbonate rock regions (~25% of the total area). As no measurements of the variability of subsurface properties are available at this scale, a parameter estimation procedure, which uses latent heat flux and soil moisture observations and quantifies the remaining uncertainty, was applied. The model is evaluated by sensitivity analysis, comparison to other large-scale models without karst processes included, and independent recharge observations. Using historic data (2002-2012) I show that recharge rates vary strongly over Europe and the Mediterranean. In regions with little information for parameter estimation there is larger prediction uncertainty (for instance in desert regions). Evaluation with independent recharge estimates shows that, on average, the model provides acceptable estimates, while the other large-scale models under-estimate karstic recharge. The results of the sensitivity analysis corroborate the importance of including karst heterogeneity in the model, as the distribution shape factor is the most sensitive parameter for
Hyslop, Nicole P.; White, Warren H.
The Interagency Monitoring of Protected Visual Environments (IMPROVE) program is a cooperative measurement effort in the United States designed to characterize current visibility and aerosol conditions in scenic areas (primarily National Parks and Forests) and to identify chemical species and emission sources responsible for existing man-made visibility impairment. In 2003 and 2004, the IMPROVE network began operating collocated samplers at several sites to evaluate the precision of its aerosol measurements. This paper presents the precisions calculated from the collocated data according to the United States Environmental Protection Agency's guidelines [CFR, 1997. Revised requirements for designation of reference and equivalent methods for PM2.5 and ambient air quality surveillance for particulate matter: final rule. Code of Federal Regulations, Part IV: Environmental Protection Agency, 40 CFR Parts 53 and 58, pp. 71-72]. These values range from 4% for sulfate to 115% for the third elemental carbon fraction. Collocated precision tends to improve with increasing detection rates, is typically better when the analysis is performed on the whole filter instead of just a fraction of the filter, and is better for species that are predominantly in the smaller size fractions. The collocated precisions are also used to evaluate the accuracy of the uncertainty estimates that are routinely reported with the concentrations. For most species, the collocated precisions are worse than the precisions predicted by the reported uncertainties. These discrepancies suggest that some sources of uncertainty are not accounted for or have been underestimated.
Faris, A M; Wang, H-H; Tarone, A M; Grant, W E
2016-05-31
Estimates of insect age can be informative in death investigations and, when certain assumptions are met, can be useful for estimating the postmortem interval (PMI). Currently, the accuracy and precision of PMI estimates are unknown, as error can arise from sources of variation such as measurement error, environmental variation, or genetic variation. Ecological models are an abstract, mathematical representation of an ecological system that can make predictions about the dynamics of the real system. To quantify the variation associated with the pre-appearance interval (PAI), we developed an ecological model that simulates the colonization of vertebrate remains by Cochliomyia macellaria (Fabricius) (Diptera: Calliphoridae), a primary colonizer in the southern United States. The model is based on a development data set derived from a local population and represents the uncertainty in local temperature variability to address PMI estimates at local sites. After a PMI estimate is calculated for each individual, the model calculates the maximum, minimum, and mean PMI, as well as the range and standard deviation for stadia collected. The model framework presented here is one manner by which errors in PMI estimates can be addressed in court when no empirical data are available for the parameter of interest. We show that PAI is a potentially important source of error and that an ecological model is one way to evaluate its impact. Such models can be re-parameterized with any development data set, PAI function, temperature regime, assumption of interest, etc., to estimate PMI and quantify uncertainty that arises from specific prediction systems. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Evaluation of uncertainty in field soil moisture estimations by cosmic-ray neutron sensing
Scheiffele, Lena Maria; Baroni, Gabriele; Schrön, Martin; Ingwersen, Joachim; Oswald, Sascha E.
2017-04-01
wheat (Pforzheim, 2013) and maize (Braunschweig, 2014) and differ in soil type and management. The results confirm a generally good agreement between soil moisture estimated by CRNS and the soil moisture network. However, several sources of uncertainty were identified, i.e., overestimation of dry conditions, strong effects of the additional hydrogen pools, and an influence of the vertical soil moisture profile. Based on that, a global sensitivity analysis based on Monte Carlo sampling can be performed and evaluated in terms of soil moisture and footprint characteristics. The results allow quantifying the role of the different factors and identifying further improvements to the method.
Allan, Richard; Liu, Chunlei
2017-04-01
The net surface energy flux is central to the climate system, yet observational limitations lead to substantial uncertainty (Trenberth and Fasullo, 2013; Roberts et al., 2016). A combination of satellite-derived radiative fluxes at the top of atmosphere (TOA), adjusted using the latest estimate of the net heat uptake of the Earth system, and the atmospheric energy tendencies and transports from the ERA-Interim reanalysis is used to estimate surface energy flux globally (Liu et al., 2015). Land surface fluxes are adjusted through a simple energy balance approach using relations at each grid point, with consideration of snowmelt to improve regional realism. The energy adjustment is redistributed over the oceans using a weighting function to avoid meridional discontinuities. Uncertainties in surface fluxes are investigated using a variety of approaches, including comparison with a range of atmospheric reanalysis input data and products. Zonal multiannual mean surface flux uncertainty is estimated to be less than 5 W m-2, but much larger uncertainty is likely for regional monthly values. The meridional energy transport is calculated using the net surface heat fluxes estimated in this study, and the result shows better agreement with observations in the Atlantic than before. The derived turbulent fluxes (the difference between the net heat flux and the CERES EBAF radiative flux at the surface) also agree well with those from the OAFLUX dataset and buoy observations. Decadal changes in the global energy budget and the hemispheric energy imbalances are quantified, and the present-day cross-equatorial heat transport is re-evaluated as 0.22±0.15 PW southward by the atmosphere and 0.32±0.16 PW northward by the ocean, considering the observed ocean heat sinks (Roemmich et al., 2006). Liu et al. (2015) Combining satellite observations and reanalysis energy transports to estimate global net surface energy fluxes 1985-2012. J. Geophys. Res., Atmospheres. ISSN 2169-8996 doi: 10.1002/2015JD
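The meridional transport calculation mentioned above amounts to integrating the zonal-mean net flux over the area south of each latitude: energy gained south of a latitude circle must be exported northward across it. A simplified sketch with a toy flux pattern, not the study's adjusted fluxes (`meridional_transport` is a hypothetical name):

```python
import numpy as np

def meridional_transport(lat_deg, net_flux, radius=6.371e6):
    """Northward energy transport (W) implied by a zonal-mean net downward
    flux F (W m^-2), via
        T(phi) = 2*pi*R^2 * integral_{-pi/2}^{phi} F(phi') cos(phi') dphi'.
    Illustrative sketch only, not the adjustment scheme of the study above.
    """
    phi = np.deg2rad(np.asarray(lat_deg, dtype=float))
    f = np.asarray(net_flux, dtype=float) * np.cos(phi)
    steps = 0.5 * (f[1:] + f[:-1]) * np.diff(phi)      # trapezoidal slices
    cum = np.concatenate([[0.0], np.cumsum(steps)])    # running integral from the south pole
    return 2.0 * np.pi * radius**2 * cum

# Toy pattern: net loss in the south, net gain in the north (globally
# balanced), which forces a southward transport at every latitude.
lat = np.linspace(-90.0, 90.0, 181)
T = meridional_transport(lat, np.sin(np.deg2rad(lat)))
print(T[90] / 1e15, "PW at the equator")
```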
Directory of Open Access Journals (Sweden)
Jae Phil Park
2016-06-01
Full Text Available The typical experimental procedure for testing stress corrosion cracking initiation involves an interval-censored reliability test. Based on these test results, the parameters of a Weibull distribution, which is a widely accepted crack initiation model, can be estimated using maximum likelihood estimation or median rank regression. However, it is difficult to determine the appropriate number of test specimens and censoring intervals required to obtain sufficiently accurate Weibull estimators. In this study, we compare maximum likelihood estimation and median rank regression using a Monte Carlo simulation to examine the effects of the total number of specimens, test duration, censoring interval, and shape parameters of the true Weibull distribution on the estimator uncertainty. Finally, we provide the quantitative uncertainties of both Weibull estimators, compare them with the true Weibull parameters, and suggest proper experimental conditions for developing a probabilistic crack initiation model through crack initiation tests.
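Median rank regression, one of the two estimators compared above, fits a straight line to the linearized Weibull CDF using median-rank plotting positions. A minimal sketch of MRR plus a small Monte Carlo repetition count (synthetic complete samples, not the authors' interval-censored setup):

```python
import numpy as np

def weibull_mrr(samples):
    """Median rank regression estimate of Weibull shape (beta) and scale (eta).

    Linearizes F(t) = 1 - exp(-(t/eta)^beta) as
        ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta),
    using Bernard's approximation F_i = (i - 0.3) / (n + 0.4) for the
    median rank of the i-th ordered failure time.
    """
    t = np.sort(np.asarray(samples, dtype=float))
    n = len(t)
    ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    beta, intercept = np.polyfit(np.log(t), np.log(-np.log(1.0 - ranks)), 1)
    eta = np.exp(-intercept / beta)
    return beta, eta

# Monte Carlo spread of the shape estimator for small samples
# (true Weibull: beta = 2, eta = 100, 25 specimens per simulated test).
rng = np.random.default_rng(0)
betas = [weibull_mrr(100.0 * rng.weibull(2.0, size=25))[0] for _ in range(200)]
print(np.mean(betas), np.std(betas))
```

The spread of `betas` across repetitions is exactly the kind of estimator uncertainty the study quantifies as a function of specimen count.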
Park, Jae Phil; Bahn, Chi Bum
2016-06-27
Kim, Hojin; Chen, Josephine; Phillips, Justin; Pukala, Jason; Yom, Sue S; Kirby, Neil
2017-01-01
Deformable image registration is a powerful tool for mapping information, such as radiation therapy dose calculations, from one computed tomography image to another. However, deformable image registration is susceptible to mapping errors. Recently, an automated deformable image registration evaluation of confidence tool was proposed to predict voxel-specific deformable image registration dose mapping errors on a patient-by-patient basis. The purpose of this work is to conduct an extensive analysis of the automated deformable image registration evaluation of confidence tool to show its effectiveness in estimating dose mapping errors. The proposed format of the automated deformable image registration evaluation of confidence tool utilizes 4 simulated patient deformations (3 B-spline-based deformations and 1 rigid transformation) to predict the uncertainty in a deformable image registration algorithm's performance. This workflow is validated for 2 DIR algorithms (B-spline multipass from Velocity and Plastimatch) with 1 physical and 11 virtual phantoms, which have known ground-truth deformations, and with 3 pairs of real patient lung images, which have several hundred identified landmarks. The true dose mapping error distributions closely followed the Student t distributions predicted by the automated deformable image registration evaluation of confidence tool for the validation tests: on average, the tool-produced confidence levels of 50%, 68%, and 95% contained 48.8%, 66.3%, and 93.8% and 50.1%, 67.6%, and 93.8% of the actual errors from Velocity and Plastimatch, respectively. Despite the sparsity of landmark points, the observed error distribution from the 3 lung patient data sets also followed the expected error distribution. The dose error distributions from the automated deformable image registration evaluation of confidence tool also demonstrate good resemblance to the true dose error distributions. Automated
Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates
Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry
2018-01-01
Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ_∑NEE) across the ensemble members, from ~2-3 g C m-2 yr-1 (with uncertain parameters) to ~45 g C m-2 yr-1 (C3 grass) and ~75 g C m-2 yr-1 (C3 crops) with perturbed forcings. This increase in uncertainty is related to the impact of the meteorological forcings on leaf onset and senescence, and enhanced/reduced drought stress related to perturbation of precipitation. The NEE uncertainty for the forest plant functional type (PFT) was considerably lower (σ_∑NEE ~ 4.0-13.5 g C
A Study on Crack Initiation Test Condition by Uncertainty Evaluation of Weibull Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Park, Jae Phil; Bahn, Chi Bum [Pusan National University, Busan (Korea, Republic of)
2016-05-15
Stress corrosion cracking (SCC) is one of the main materials-related issues in operating nuclear reactors. The goal of this work is to suggest proper experimental conditions for experimenters who want to develop a probabilistic SCC initiation model through cracking tests. From the test results, experimenters can estimate the parameters of a Weibull distribution by Maximum Likelihood Estimation (MLE) or Median Rank Regression (MRR), the two widely used estimation methods considered here. However, in order to obtain sufficient accuracy in the estimated Weibull model, it is hard for experimenters to determine the proper number of test specimens and censoring intervals. In this work, MLE and MRR are compared by Monte Carlo simulation to quantify the effects of the total number of specimens, test duration, censoring interval, and shape parameter of the assumed true Weibull distribution. Uncertainties of the MRR and MLE estimators were quantified for a variety of experimental conditions. The following conclusions could be informative for experimenters: 1) Over the whole range of the simulation study, estimated scale parameters were more reliable than estimated shape parameters, especially at high β_true. 2) The shape parameter is likely to be overestimated when the number of specimens is less than 25. For scale parameter estimation, the MLE estimators have smaller bias than the MRR estimators.
Estimating Uncertainty in Annual Forest Inventory Estimates
Ronald E. McRoberts; Veronica C. Lessard
1999-01-01
The precision of annual forest inventory estimates may be negatively affected by uncertainty from a variety of sources, including: (1) sampling error; (2) procedures for updating plots not measured in the current year; and (3) measurement errors. The impact of these sources of uncertainty on final inventory estimates is investigated using Monte Carlo simulation...
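The Monte Carlo approach mentioned above can be sketched in a few lines: perturb each plot estimate with draws from the assumed error sources and look at the spread of the resulting totals. All numbers below are made up for illustration; they are not the inventory's actual error model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-plot volume estimates and error magnitudes.
plot_means = np.array([120.0, 95.0, 150.0, 80.0])
sampling_sd = 10.0   # plot-level sampling error
measure_sd = 5.0     # plot-level measurement error

def simulate_totals(n_draws=10000):
    """Propagate independent error sources to the inventory total by Monte Carlo."""
    noise = (rng.normal(0.0, sampling_sd, (n_draws, plot_means.size))
             + rng.normal(0.0, measure_sd, (n_draws, plot_means.size)))
    return (plot_means + noise).sum(axis=1)

totals = simulate_totals()
# The spread of the simulated totals is the propagated uncertainty.
print(totals.mean(), totals.std())
```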
Yen, H.; Arabi, M.; Records, R.
2012-12-01
The structural complexity of comprehensive watershed models continues to increase in order to incorporate inputs at finer spatial and temporal resolutions and to simulate a larger number of hydrologic and water quality responses. Hence, computational methods for parameter estimation and uncertainty analysis of complex models have gained increasing popularity. This study aims to evaluate the performance and applicability of a range of algorithms, from computationally frugal approaches to formal implementations of Bayesian statistics using Markov Chain Monte Carlo (MCMC) techniques. The evaluation procedure hinges on the appraisal of (i) the quality of the final parameter solution in terms of the minimum value of the objective function corresponding to weighted errors; (ii) the algorithmic efficiency in reaching the final solution; (iii) the marginal posterior distributions of model parameters; (iv) the overall identifiability of the model structure; and (v) the effectiveness in drawing samples that can be classified as behavior-giving solutions. The proposed procedure recognizes an important and often neglected issue in watershed modeling: solutions with minimum objective function values may not necessarily reflect the behavior of the system. The general behavior of a system is often characterized by the analysts according to the goals of the study, using various error statistics such as percent bias or the Nash-Sutcliffe efficiency coefficient. Two case studies are carried out to examine the efficiency and effectiveness of four Bayesian approaches including Metropolis-Hastings sampling (MHA), Gibbs sampling (GSA), uniform covering by probabilistic rejection (UCPR), and differential evolution adaptive Metropolis (DREAM); a greedy optimization algorithm dubbed dynamically dimensioned search (DDS); and shuffled complex evolution (SCE-UA), a widely implemented evolutionary heuristic optimization algorithm. The Soil and Water Assessment Tool (SWAT) is used to simulate hydrologic and
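Of the samplers compared above, Metropolis-Hastings is the simplest to sketch: propose a random-walk step and accept it with probability min(1, posterior ratio). A toy single-parameter version with a standard normal target, nothing like the study's actual SWAT setup:

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-posterior."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    out = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()    # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # accept w.p. min(1, ratio)
            x, lp = prop, lp_prop
        out[i] = x                                 # rejected steps repeat x
    return out

# Toy target: standard normal "posterior" for a single parameter.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000, step=1.0)
print(samples.mean(), samples.std())
```

After discarding a burn-in, the sample mean and standard deviation should approach the target's 0 and 1; the marginal histograms of such chains are what item (iii) above inspects.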
Directory of Open Access Journals (Sweden)
Fezzani Amor
2017-01-01
Full Text Available The performance of a photovoltaic (PV) module is affected by outdoor conditions. Outdoor testing consists of installing a module and collecting electrical performance data and climatic data over a certain period of time; it can also include the study of long-term performance under real working conditions. Tests were operated at URAER, located in the desert region of Ghardaïa (Algeria), characterized by high irradiation and temperature levels. The degradation of a PV module with temperature and time of exposure to sunlight contributes significantly to the final output of the module, as the output reduces each year. This paper presents a comparative study of different methods to evaluate the degradation of PV modules after long-term exposure of more than 12 years in a desert region, and calculates the measurement uncertainties. First, the evaluation uses three methods: visual inspection, data given by the Solmetric PVA-600 Analyzer translated to Standard Test Conditions (STC), and translation equations following IEC 60891. Second, the degradation rates are calculated for all methods. Finally, the degradation rates given by the Solmetric PVA-600 Analyzer, calculated by the simulation model, and calculated by the two IEC 60891 procedures (1 and 2) are compared. A detailed uncertainty study was performed in order to improve the procedure and the measurement instrument.
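A simple linear degradation rate of the kind compared above can be computed from maximum power measured at two points in time (both translated to STC first, e.g. per IEC 60891). A sketch with hypothetical numbers, not the paper's measurements:

```python
def degradation_rate(p_initial, p_final, years):
    """Mean annual degradation rate in %/year from module maximum power
    at the start and end of the exposure period (both at STC)."""
    return 100.0 * (1.0 - p_final / p_initial) / years

# Hypothetical module: 85 W at installation, 72 W after 12 years outdoors.
print(degradation_rate(85.0, 72.0, 12.0))
```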
Shindo, J.; Bregt, A.K.; Hakamata, T.
1995-01-01
A simplified steady-state mass balance model for estimating critical loads was applied to a test area in Japan to evaluate its applicability. Three criteria for acidification limits were used. Mean values and spatial distribution patterns of critical load values calculated by these criteria differed
Evaluating uncertainty in 7Be-based soil erosion estimates: an experimental plot approach
Blake, Will; Taylor, Alex; Abdelli, Wahid; Gaspar, Leticia; Barri, Bashar Al; Ryken, Nick; Mabit, Lionel
2014-05-01
Soil erosion remains a major concern for the international community and there is a growing need to improve the sustainability of agriculture to support future food security. High resolution soil erosion data are a fundamental requirement for underpinning soil conservation and management strategies but representative data on soil erosion rates are difficult to achieve by conventional means without interfering with farming practice and hence compromising the representativeness of results. Fallout radionuclide (FRN) tracer technology offers a solution since FRN tracers are delivered to the soil surface by natural processes and, where irreversible binding can be demonstrated, redistributed in association with soil particles. While much work has demonstrated the potential of short-lived 7Be (half-life 53 days), particularly in quantification of short-term inter-rill erosion, less attention has focussed on sources of uncertainty in derived erosion measurements and sampling strategies to minimise these. This poster outlines and discusses potential sources of uncertainty in 7Be-based soil erosion estimates and the experimental design considerations taken to quantify these in the context of a plot-scale validation experiment. Traditionally, gamma counting statistics have been the main element of uncertainty propagated and reported but recent work has shown that other factors may be more important such as: (i) spatial variability in the relaxation mass depth that describes the shape of the 7Be depth distribution for an uneroded point; (ii) spatial variability in fallout (linked to rainfall patterns and shadowing) over both reference site and plot; (iii) particle size sorting effects; (iv) preferential mobility of fallout over active runoff contributing areas. To explore these aspects in more detail, a plot of 4 x 35 m was ploughed and tilled to create a bare, sloped soil surface at the beginning of winter 2013/2014 in southwest UK. The lower edge of the plot was bounded by
DEFF Research Database (Denmark)
Troldborg, Mads; Nowak, W.; Tuxen, N.
2010-01-01
The estimation of mass discharges from contaminated sites is valuable when evaluating the potential risk to down-gradient receptors, when assessing the efficiency of a site remediation, or when determining the degree of natural attenuation. Given the many applications of mass discharge estimation... ...for each of the conceptual models considered. The probability distribution of mass discharge is obtained by combining all ensembles via BMA. The method was applied to a trichloroethylene-contaminated site located in northern Copenhagen. Four essentially different conceptual models based on two source zone...
Estimating uncertainty of data limited stock assessments
DEFF Research Database (Denmark)
Kokkalis, Alexandros; Eikeset, Anne Maria; Thygesen, Uffe Høgsbro
2017-01-01
Many methods exist to assess the fishing status of data-limited stocks; however, little is known about the accuracy or the uncertainty of such assessments. Here we evaluate a new size-based data-limited stock assessment method by applying it to well-assessed, data-rich fish stocks treated as data-limited. Particular emphasis is put on providing uncertainty estimates of the data-limited assessment. We assess four cod stocks in the North-East Atlantic and compare our estimates of stock status (F/Fmsy) with the official assessments. The estimated stock status of all four cod stocks followed the established stock assessments remarkably well and the official assessments fell well within the uncertainty bounds. The estimation of spawning stock biomass followed the same trends as the official assessment, but not the same levels. We conclude that the data-limited assessment method can be used for stock assessment...
Haas, Evan; DeLuccia, Frank
2016-01-01
In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to filter out the higher quality measurements of local navigation error for inclusion in statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.
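The registration step described above (translating one image against another to find the offset of maximum correlation) can be sketched with an FFT-based circular cross-correlation. This toy version finds integer shifts only, whereas the actual ABI assessment works on upsampled sub-images against downsampled Landsat scenes for sub-pixel precision:

```python
import numpy as np

def best_shift(reference, image):
    """Integer (row, col) shift s such that image ~= np.roll(reference, s).

    Maximizes circular cross-correlation via the FFT; a sketch of the
    registration idea only, not the operational ABI/Landsat method.
    """
    corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(reference))).real
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    # Correlation lags wrap around; map the upper half to negative shifts.
    if r > reference.shape[0] // 2:
        r -= reference.shape[0]
    if c > reference.shape[1] // 2:
        c -= reference.shape[1]
    return int(r), int(c)

rng = np.random.default_rng(1)
ref = rng.standard_normal((64, 64))
print(best_shift(ref, np.roll(ref, shift=(3, -5), axis=(0, 1))))
```

Repeating this over many localized sub-images and averaging the recovered shifts mirrors the paper's strategy of separating mean navigation error from the dispersion introduced by registration measurement uncertainty.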
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
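As a minimal illustration of the idea (a sketch only, not the report's algorithms), the sample mean of interval data is itself an interval, obtained from the endpoint means; other statistics, such as the variance, are much harder to bound tightly, which is part of the computability question the report addresses.

```python
# Bounds on the sample mean when each measurement is an interval [lo, hi].
# Illustrative values; real intervals would come from instrument specifications.
def interval_mean(data):
    """data: list of (lo, hi) pairs; returns (mean_lo, mean_hi)."""
    n = len(data)
    return (sum(lo for lo, _ in data) / n,
            sum(hi for _, hi in data) / n)

measurements = [(9.8, 10.2), (9.9, 10.4), (10.0, 10.1)]
mean_lo, mean_hi = interval_mean(measurements)   # every point-data mean lies inside
```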
Directory of Open Access Journals (Sweden)
J. Timmermans
2013-04-01
Accurate estimation of global evapotranspiration is considered to be of great importance due to its key role in the terrestrial and atmospheric water budget. Global estimation of evapotranspiration on the basis of observational data can only be achieved by using remote sensing. Several algorithms have been developed that are capable of estimating the daily evapotranspiration from remote sensing data. Evaluation of remote sensing algorithms in general is problematic because of differences in spatial and temporal resolutions between remote sensing observations and field measurements. This problem can be solved in part by using soil-vegetation-atmosphere transfer (SVAT) models, because on the one hand these models provide evapotranspiration estimates even under cloudy conditions, and on the other hand they can scale between different temporal resolutions. In this paper, the Soil Canopy Observation, Photochemistry and Energy fluxes (SCOPE) model is used for the evaluation of the Surface Energy Balance System (SEBS) model. The calibrated SCOPE model was employed to simulate remote sensing observations and to act as a validation tool. The advantages of the SCOPE model in this validation are (a) the temporal continuity of the data, and (b) the possibility of comparing different components of the energy balance. The SCOPE model was run using data from a whole growth season of a maize crop. It is shown that the original SEBS algorithm produces large uncertainties in the turbulent flux estimations, caused by the parameterizations of the ground heat flux and sensible heat flux. In the original SEBS formulation the fractional vegetation cover is used to calculate the ground heat flux. As this variable saturates very fast for increasing leaf area index (LAI), the ground heat flux is underestimated. It is shown that a parameterization based on LAI reduces the estimation error over the season from RMSE = 25 W m−2 to RMSE = 18 W m−2. In the original SEBS formulation the…
Friberg, Mariel D.; Kahn, Ralph A.; Holmes, Heather A.; Chang, Howard H.; Sarnat, Stefanie Ebelt; Tolbert, Paige E.; Russell, Armistead G.; Mulholland, James A.
2017-06-01
Spatiotemporal characterization of ambient air pollutant concentrations is increasingly relying on the combination of observations and air quality models to provide well-constrained, spatially and temporally complete pollutant concentration fields. Air quality models, in particular, are attractive, as they characterize the emissions, meteorological, and physicochemical process linkages explicitly while providing continuous spatial structure. However, such modeling is computationally intensive and has biases. The limitations of spatially sparse and temporally incomplete observations can be overcome by blending the data with estimates from a physically and chemically coherent model, driven by emissions and meteorological inputs. We recently developed a data fusion method that blends ambient ground observations and chemical-transport-modeled (CTM) data to estimate daily, spatially resolved pollutant concentrations and associated correlations. In this study, we assess the ability of the data fusion method to produce daily metrics (i.e., 1-hr max, 8-hr max, and 24-hr average) of ambient air pollution that capture spatiotemporal air pollution trends for 12 pollutants (CO, NO2, NOx, O3, SO2, PM10, PM2.5, and five PM2.5 components) across five metropolitan areas (Atlanta, Birmingham, Dallas, Pittsburgh, and St. Louis), from 2002 to 2008. Three sets of comparisons are performed: (1) the CTM concentrations are evaluated for each pollutant and metropolitan domain, (2) the data fusion concentrations are compared with the monitor data, and (3) a comprehensive cross-validation analysis against observed data evaluates the quality of the data fusion model simulations across multiple metropolitan domains. The resulting daily spatial field estimates of air pollutant concentrations and uncertainties are not only consistent with observations, emissions, and meteorology, but substantially improve CTM-derived results for nearly all pollutants and all cities, with the exception of NO2 for…
Directory of Open Access Journals (Sweden)
Marta Castagna
2015-06-01
Evaluating the sustainability of water uses in shallow aquifers is fundamental for both environmental and socio-economic reasons. Groundwater models are the main tools to support informed management plans, yet simulation results are affected by both epistemic and parametric uncertainties. In this study, we investigate the effect of model uncertainties on three assessment criteria: depth to water (DTW), recharge/discharge analysis, and a newly defined sustainability index S. We consider, as a case study, the shallow aquifer of the Adige Valley, which is highly influenced by surface water dynamics, water withdrawals from pumping wells and a dense network of ditches. Both direct measurements and soft data are used to reduce the uncertainty associated with the limited knowledge of the spatial distribution of the hydraulic parameters. Simulation results showed that the aquifer is chiefly influenced by the interaction with the Adige River and that the influence of anthropogenic activities on the vulnerability of groundwater resources varies within the study area. This calls for differentiated approaches to water resources management. Uncertainty in the three assessment criteria is chiefly controlled by uncertainty in the hydrogeological model, although it also depends on the strategy adopted for the management of water resources.
Estimating uncertainties in complex joint inverse problems
Afonso, Juan Carlos
2016-04-01
Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should always be conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular attention must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related…
Transferring model uncertainty estimates from gauged to ungauged catchments
Bourgin, F.; Andréassian, V.; Perrin, C.; Oudin, L.
2014-07-01
Predicting streamflow hydrographs in ungauged catchments is a challenging issue, and accompanying the estimates with realistic uncertainty bounds is an even more complex task. In this paper, we present a method to transfer model uncertainty estimates from gauged to ungauged catchments and we test it over a set of 907 catchments located in France. We evaluate the quality of the uncertainty estimates based on three expected qualities: reliability, sharpness, and overall skill. Our results show that the method holds interesting perspectives, providing in most cases reliable and sharp uncertainty bounds at ungauged locations.
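Two of the qualities mentioned, reliability and sharpness, reduce to simple computations on the bounds. The definitions below are common simplified forms with invented data, not necessarily the exact scores used in the paper:

```python
# Reliability here = empirical coverage of the nominal bounds;
# sharpness = mean width of the bounds. Values are made up for illustration.
def coverage(obs, lower, upper):
    return sum(1 for o, l, u in zip(obs, lower, upper) if l <= o <= u) / len(obs)

def mean_width(lower, upper):
    return sum(u - l for l, u in zip(lower, upper)) / len(lower)

obs   = [1.0, 2.5, 3.0, 4.2]     # observed streamflows
lower = [0.5, 2.0, 3.2, 4.0]     # lower uncertainty bounds
upper = [1.5, 3.0, 3.8, 5.0]     # upper uncertainty bounds
rel = coverage(obs, lower, upper)   # 3 of 4 observations fall inside
shp = mean_width(lower, upper)
```

A reliable 90% interval should cover about 90% of observations; among equally reliable methods, the sharper (narrower) one is preferred, which is why both qualities are evaluated jointly.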
Estimation of Modal Parameters and their Uncertainties
DEFF Research Database (Denmark)
Andersen, P.; Brincker, Rune
1999-01-01
In this paper it is shown how to estimate the modal parameters, as well as their uncertainties, of a dynamic system using the prediction error method on the basis of output measurements only. The estimation scheme is assessed by means of a simulation study. As part of the introduction, an example is given showing how the uncertainty estimates can be used in applications such as damage detection.
Estimating the uncertainty in underresolved nonlinear dynamics
Energy Technology Data Exchange (ETDEWEB)
Chorin, Alelxandre; Hald, Ole
2013-06-12
The Mori-Zwanzig formalism of statistical mechanics is used to estimate the uncertainty caused by underresolution in the solution of a nonlinear dynamical system. A general approach is outlined and applied to a simple example. The noise term that describes the uncertainty turns out to be neither Markovian nor Gaussian. It is argued that this is the general situation.
Estimating uncertainty in resolution tests
CSIR Research Space (South Africa)
Goncalves, DP
2006-05-01
…frequencies yields a biased estimate, and we provide an improved estimator. An application illustrates how the results derived can be incorporated into a larger uncertainty analysis. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2202914] Subject terms: resolution testing; USAF 1951 test target; resolution uncertainty. Paper 050404R received May 20, 2005; revised manuscript received Sep. 2, 2005; accepted for publication Sep. 9, 2005; published online May 10, 2006.
Estimating uncertainty in map intersections
Ronald E. McRoberts; Mark A. Hatfield; Susan J. Crocker
2009-01-01
Traditionally, natural resource managers have asked the question "How much?" and have received sample-based estimates of resource totals or means. Increasingly, however, the same managers are now asking the additional question "Where?" and are expecting spatially explicit answers in the form of maps. Recent development of natural resource databases...
Parameter Uncertainty in Exponential Family Tail Estimation
Landsman, Z.; Tsanakas, A.
2012-01-01
Actuaries are often faced with the task of estimating tails of loss distributions from just a few observations. Thus estimates of tail probabilities (reinsurance prices) and percentiles (solvency capital requirements) are typically subject to substantial parameter uncertainty. We study the bias and MSE of estimators of tail probabilities and percentiles, with focus on 1-parameter exponential families. Using asymptotic arguments it is shown that tail estimates are subject to significant positi...
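The parameter-uncertainty effect is easy to reproduce numerically. The sketch below is a generic Monte Carlo experiment, not the authors' analysis: it draws small exponential samples and forms plug-in estimates of a tail probability, whose spread around the true value illustrates the estimation uncertainty studied in the paper.

```python
import math
import random

random.seed(1)
t, rate, n = 3.0, 1.0, 10                      # threshold, true rate, sample size
true_tail = math.exp(-rate * t)                # P(X > t) for an exponential law

estimates = []
for _ in range(2000):
    sample = [random.expovariate(rate) for _ in range(n)]
    rate_hat = n / sum(sample)                 # MLE of the exponential rate
    estimates.append(math.exp(-rate_hat * t))  # plug-in tail-probability estimate

mean_estimate = sum(estimates) / len(estimates)
bias = mean_estimate - true_tail               # nonzero in small samples
```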
Uncertainty Analysis in the Noise Parameters Estimation
Directory of Open Access Journals (Sweden)
Pawlik P.
2012-07-01
The paper presents a new approach to uncertainty estimation in the modelling of acoustic hazards by means of interval arithmetic. In noise parameter estimation, the selection of parameters specifying acoustic wave propagation in open space, as well as of parameters that are required in the form of average values, often constitutes a difficult problem. In such cases it is necessary to determine the variance and, strictly related to it, the uncertainty of the model parameters. The interval arithmetic formalism allows the input data uncertainties to be estimated without determining their probability distributions, which other methods of uncertainty assessment require. A further problem in estimating acoustic hazards is the lack of exact knowledge of the input parameters. Accordingly, the modelling uncertainty was analysed as a function of the inaccuracy of the model parameters. To this end, the interval arithmetic formalism, which represents a value and its uncertainty as an interval, was applied. The proposed approach is illustrated with an application of the Dutch RMR SRM method, recommended by European Union Directive 2002/49/EC, to railway noise modelling.
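The interval formalism itself is compact. A minimal sketch follows, with illustrative operations and numbers unrelated to the RMR SRM implementation:

```python
# Minimal interval arithmetic: a value and its uncertainty held as [lo, hi].
class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Monotonicity is not guaranteed, so take extremes over all endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# Hypothetical example: source level plus an uncertain propagation correction (dB).
source = Interval(95.0, 97.0)
correction = Interval(-12.0, -10.0)
received = source + correction   # [83.0, 87.0], with no distributional assumption
```

The output interval bounds every outcome consistent with the inputs, which is exactly what the method delivers when probability distributions are unavailable.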
Uncertainty Measures of Regional Flood Frequency Estimators
DEFF Research Database (Denmark)
Rosbjerg, Dan; Madsen, Henrik
1995-01-01
Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...
Energy Technology Data Exchange (ETDEWEB)
Kim, H; Chen, J; Pouliot, J [University of California San Francisco, San Francisco, CA (United States); Pukala, J [UF Health Cancer Center at Orlando Health, Orlando, FL (United States); Kirby, N [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States)
2015-06-15
Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires 4 inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). Then, AUTODIRECT uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images, with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently four). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student’s t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT are well matched to the actual error distributions, within 1–6% for 10 virtual phantoms and 9% for the physical phantom. For one of the cases, though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ-, and voxel-specific DIR uncertainty estimates. This ability would be useful for patient-specific DIR quality assurance.
Orlecka-Sikora, Beata
2008-08-01
The cumulative distribution function (CDF) of magnitude of seismic events is one of the most important probabilistic characteristics in Probabilistic Seismic Hazard Analysis (PSHA). The magnitude distribution of mining induced seismicity is complex. Therefore, it is estimated using kernel nonparametric estimators. Because of its model-free character, the nonparametric approach cannot, however, provide confidence interval estimates for the CDF using the classical methods of mathematical statistics. To assess errors in the estimation of the magnitude of seismic events, and thereby in the evaluation of seismic hazard parameters in the nonparametric approach, we propose the use of resampling methods. Resampling techniques applied to a single dataset provide many replicas of this sample, which preserve its probabilistic properties. In order to estimate the confidence intervals for the CDF of magnitude, we have developed an algorithm based on the bias corrected and accelerated method (BCa method). This procedure uses the smoothed bootstrap and second-order bootstrap samples. We refer to this algorithm as the iterated BCa method. The algorithm performance is illustrated through the analysis of Monte Carlo simulated seismic event catalogues and actual data from an underground copper mine in the Legnica-Głogów Copper District in Poland. The studies show that the iterated BCa technique provides satisfactory results regardless of the sample size and the actual shape of the magnitude distribution.
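For orientation, a plain percentile bootstrap for the CDF value at a given magnitude looks as follows; the paper's iterated BCa procedure adds bias correction, acceleration, and smoothing on top of this basic resampling. The data below are invented:

```python
import random

random.seed(0)
magnitudes = [1.2, 1.5, 1.1, 2.0, 1.7, 1.3, 1.9, 1.4, 1.6, 1.8]
m = 1.5                          # magnitude at which to estimate F(m)

def ecdf_at(sample, x):
    """Empirical CDF of the sample evaluated at x."""
    return sum(1 for v in sample if v <= x) / len(sample)

# Resample with replacement and collect the bootstrap distribution of F(m).
boot = sorted(
    ecdf_at([random.choice(magnitudes) for _ in magnitudes], m)
    for _ in range(1000)
)
ci_95 = (boot[24], boot[974])    # basic ~95% percentile interval
```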
Transferring global uncertainty estimates from gauged to ungauged catchments
Bourgin, F.; Andréassian, V.; Perrin, C.; Oudin, L.
2015-05-01
Predicting streamflow hydrographs in ungauged catchments is challenging, and accompanying the estimates with realistic uncertainty bounds is an even more complex task. In this paper, we present a method to transfer global uncertainty estimates from gauged to ungauged catchments and we test it over a set of 907 catchments located in France, using two rainfall-runoff models. We evaluate the quality of the uncertainty estimates based on three expected qualities: reliability, sharpness, and overall skill. The robustness of the method to the availability of information on gauged catchments was also evaluated using a hydrometrical desert approach. Our results show that the method presents advantageous perspectives, providing reliable and sharp uncertainty bounds at ungauged locations in a majority of cases.
Estimating uncertainty of inference for validation
Energy Technology Data Exchange (ETDEWEB)
Booker, Jane M [Los Alamos National Laboratory]; Langenbrunner, James R [Los Alamos National Laboratory]; Hemez, Francois M [Los Alamos National Laboratory]; Ross, Timothy J [UNM]
2010-09-30
We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and its code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the…
Uncertainties in the estimation of Mmax
Indian Academy of Sciences (India)
…40% using the three catalogues compiled based on different magnitude conversion relationships. The effect of the uncertainties has then been shown on the estimation of Mmax and the probabilities of occurrence of different magnitudes. It has been emphasized to consider the uncertainties and their quantification to carry…
Parameter estimation uncertainty: Comparing apples and apples?
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests
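The core idea of the null-space decomposition can be shown on a toy linear problem (an illustrative Jacobian, nothing to do with the Culebra model): parameter directions that the observations cannot see may be perturbed freely without degrading the calibration, which is what NSMC exploits to build a calibration-constrained ensemble from a single calibrated model.

```python
import random

# Toy linear model: 2 observations, 3 parameters. The third parameter never
# influences the observations, so the direction (0, 0, 1) spans the null space.
J = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 0.0]]
calibrated = [0.5, 1.0, 2.0]
null_dir = (0.0, 0.0, 1.0)

def predict(jac, p):
    """Model outputs for parameter vector p under Jacobian jac."""
    return [sum(a * b for a, b in zip(row, p)) for row in jac]

base = predict(J, calibrated)

# NSMC-style sampling: perturb only along the null-space direction.
random.seed(3)
ensemble = [[p + random.gauss(0.0, 1.0) * d for p, d in zip(calibrated, null_dir)]
            for _ in range(100)]
# Every member fits the data exactly as well as the calibrated model does.
```

In real applications the null space is extracted from an SVD of the Jacobian and is only approximately invisible to the data, which is one reason the resulting ensembles can be biased toward the single calibrated field, as the study reports.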
Uncertainty relations for approximation and estimation
Energy Technology Data Exchange (ETDEWEB)
Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)
2016-05-27
We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework, albeit handled differently.
Highlights:
• Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’.
• The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable.
• The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality.
• Both the position–momentum and the time–energy relation are treated in one framework.
• In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.
Plurality of Type A evaluations of uncertainty
Possolo, Antonio; Pintar, Adam L.
2017-10-01
The evaluations of measurement uncertainty involving the application of statistical methods to measurement data (Type A evaluations as specified in the Guide to the Expression of Uncertainty in Measurement, GUM) comprise the following three main steps: (i) developing a statistical model that captures the pattern of dispersion or variability in the experimental data, and that relates the data either to the measurand directly or to some intermediate quantity (input quantity) that the measurand depends on; (ii) selecting a procedure for data reduction that is consistent with this model and that is fit for the purpose that the results are intended to serve; (iii) producing estimates of the model parameters, or predictions based on the fitted model, and evaluations of uncertainty that qualify either those estimates or these predictions, and that are suitable for use in subsequent uncertainty propagation exercises. We illustrate these steps in uncertainty evaluations related to the measurement of the mass fraction of vanadium in a bituminous coal reference material, including the assessment of the homogeneity of the material, and to the calibration and measurement of the amount-of-substance fraction of a hydrochlorofluorocarbon in air, and of the age of a meteorite. Our goal is to expose the plurality of choices that can reasonably be made when taking each of the three steps outlined above, and to show that different choices typically lead to different estimates of the quantities of interest, and to different evaluations of the associated uncertainty. In all the examples, the several alternatives considered represent choices that comparably competent statisticians might make, but who differ in the assumptions that they are prepared to rely on, and in their selection of approach to statistical inference. They represent also alternative treatments that the same statistician might give to the same data when the results are intended for different purposes.
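The most basic of those modeling choices, worth keeping in mind as the baseline, is the textbook Type A evaluation of a series of independent repeated readings (hypothetical numbers below):

```python
import math

readings = [10.03, 10.01, 10.04, 9.99, 10.02, 10.03]   # hypothetical repeated readings
n = len(readings)
mean = sum(readings) / n
# Sample standard deviation (n - 1 in the denominator).
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
u = s / math.sqrt(n)   # Type A standard uncertainty of the mean
```

Different, equally defensible statistical models (for example, allowing for autocorrelation or between-day effects) would give different values of u from the same readings, which is precisely the plurality the paper documents.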
Uncertainty estimation by convolution using spatial statistics.
Sanchez-Brea, Luis Miguel; Bernabeu, Eusebio
2006-10-01
Kriging has proven to be a useful tool in image processing since it behaves, under regular sampling, as a convolution. Convolution kernels obtained with kriging allow noise filtering and include the effects of the random fluctuations of the experimental data and the resolution of the measuring devices. The uncertainty at each location of the image can also be determined using kriging. However, this procedure is slow since, currently, only matrix methods are available. In this work, we compare the way kriging performs the uncertainty estimation with the standard statistical technique for magnitudes without spatial dependence. As a result, we propose a much faster technique, based on the variogram, to determine the uncertainty using a convolutional procedure. We check the validity of this approach by applying it to one-dimensional images obtained in diffractometry and two-dimensional images obtained by shadow moiré.
Parameter and Uncertainty Estimation in Groundwater Modelling
DEFF Research Database (Denmark)
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly, decisions, and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration… was applied. Capture zone modelling was conducted on a synthetic stationary 3-dimensional flow problem involving river, surface and groundwater flow. Simulated capture zones were illustrated as likelihood maps and compared with deterministic capture zones derived from a reference model. The results showed…
Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint
Energy Technology Data Exchange (ETDEWEB)
Habte, A.; Sengupta, M.; Reda, I.; Andreas, A.; Konings, J.
2014-11-01
Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty in Measurement (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.
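The central GUM computation such procedures standardize is short. Below is a generic sketch with made-up uncertainty components, not the budget from this paper:

```python
import math

# Hypothetical standard uncertainties for a radiometric measurement (W/m^2).
components = {
    "calibration": 2.0,
    "temperature_drift": 0.8,
    "data_logger": 0.5,
}
# Uncorrelated components combine in quadrature (root-sum-of-squares).
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
U_expanded = 2.0 * u_combined   # coverage factor k = 2, ~95% coverage
```

A full GUM budget would also carry sensitivity coefficients and effective degrees of freedom for each component; the quadrature sum above is the step all such budgets share.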
Uncertainty Analysis of the Estimated Risk in Formal Safety Assessment
Directory of Open Access Journals (Sweden)
Molin Sun
2018-01-01
An uncertainty analysis is required to be carried out in formal safety assessment (FSA) by the International Maritime Organization. The purpose of this article is to introduce the uncertainty analysis technique into the FSA process. Based on the uncertainty identification of input parameters, probability and possibility distributions are used to model the aleatory and epistemic uncertainties, respectively. An approach which combines the Monte Carlo random sampling of probability distribution functions with α-cuts for fuzzy calculus is proposed to propagate the uncertainties. One output of the FSA process is societal risk (SR), which can be evaluated in the two-dimensional frequency–fatality (FN) diagram. Thus, the confidence-level-based SR is presented to represent the uncertainty of SR in two dimensions. In addition, a method for time window selection is proposed to estimate the magnitude of uncertainties, which is an important aspect of modeling uncertainties. Finally, a case study is carried out on an FSA study on cruise ships. The results show that the uncertainty analysis of SR generates a two-dimensional area for a certain degree of confidence in the FN diagram rather than a single FN curve, which provides more information to authorities to produce effective risk control measures.
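In skeleton form, the hybrid propagation combines a Monte Carlo loop over the aleatory inputs with interval (α-cut) arithmetic for the epistemic ones. The sketch below is heavily simplified and uses invented quantities, not the cruise-ship FSA model:

```python
import random

def alpha_cut(a, b, c, alpha):
    """Alpha-cut (an interval) of a triangular fuzzy number (a, b, c)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

# Epistemic input as a fuzzy number, cut at membership level alpha = 0.5.
freq_lo, freq_hi = alpha_cut(0.8, 1.0, 1.3, 0.5)   # accident-frequency factor

# Aleatory input sampled by Monte Carlo; each draw yields an output interval.
random.seed(7)
risk_intervals = []
for _ in range(500):
    consequence = random.lognormvariate(0.0, 0.5)   # e.g. fatalities per event
    risk_intervals.append((freq_lo * consequence, freq_hi * consequence))
```

Repeating this over a grid of α levels yields, for each confidence level, a band of FN curves rather than a single curve, matching the article's two-dimensional representation of SR.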
DEFF Research Database (Denmark)
Müller, Pavel; Hiller, Jochen; Cantatore, Angela
2012-01-01
... measurement results using different measuring strategies applied in different inspection software packages for volume and surface data analysis. The strategy influence is determined by calculating the measurement uncertainty. This investigation includes measurements of two industrial items, an aluminium pipe connector and a plastic toggle, a hearing aid component. These are measured using a commercial CT scanner. Traceability is transferred using tactile and optical coordinate measuring machines, which are used to produce reference measurements. Results show that measurements of diameter for both parts resulted ...
Uncertainty and validation. Effect of user interpretation on uncertainty estimates
Energy Technology Data Exchange (ETDEWEB)
Kirchner, G. (Univ. of Bremen, Germany); Peterson, R. (AECL, Chalk River, ON, Canada); and others
1996-11-01
Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test users' influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the
Uncertainty estimations for quantitative in vivo MRI T1 mapping
Polders, Daniel L.; Leemans, Alexander; Luijten, Peter R.; Hoogduin, Hans
2012-11-01
Mapping the longitudinal relaxation time (T1) of brain tissue is of great interest for both clinical research and MRI sequence development. For an unambiguous interpretation of in vivo variations in T1 images, it is important to understand the degree of variability that is associated with the quantitative T1 parameter. This paper presents a general framework for estimating the uncertainty in quantitative T1 mapping by combining a slice-shifted multi-slice inversion recovery EPI technique with the statistical wild-bootstrap approach. Both simulations and experimental analyses were performed to validate this novel approach and to evaluate the estimated T1 uncertainty in several brain regions across four healthy volunteers. By estimating the T1 uncertainty, it is shown that the variation in T1 within anatomic regions for similar tissue types is larger than the uncertainty in the measurement. This indicates that heterogeneity of the inspected tissue and/or partial volume effects can be the main determinants for the observed variability in the estimated T1 values. The proposed approach to estimate T1 and its uncertainty without the need for repeated measurements may also prove to be useful for calculating effect sizes that are deemed significant when comparing group differences.
Uncertainty estimation in finite fault inversion
Dettmer, Jan; Cummins, Phil R.; Benavente, Roberto
2016-04-01
This work considers uncertainty estimation for kinematic rupture models in finite fault inversion by Bayesian sampling. Since the general problem of slip estimation on an unknown fault from incomplete and noisy data is highly non-linear and currently intractable, assumptions are typically made to simplify the problem. These almost always include linearization of the time dependence of rupture by considering multiple discrete time windows, and a tessellation of the fault surface into a set of 'subfaults' whose dimensions are fixed below what is subjectively thought to be resolvable by the data. Even non-linear parameterizations are based on a fixed discretization. This results in over-parametrized models which include more parameters than resolvable by the data and require regularization criteria that stabilize the inversion. While it is increasingly common to consider slip uncertainties arising from observational error, the effects of the assumptions implicit in parameterization choices are rarely if ever considered. Here, we show that linearization and discretization assumptions can strongly affect both slip and uncertainty estimates and that therefore the selection of parametrizations should be included in the inference process. We apply Bayesian model selection to study the effect of parametrization choice on inversion results. The Bayesian sampling method which produces inversion results is based on a trans-dimensional rupture discretization which adapts the spatial and temporal parametrization complexity based on data information and does not require regularization. Slip magnitude, direction and rupture velocity are unknowns across the fault and causal first rupture times are obtained by solving the Eikonal equation for a spatially variable rupture-velocity field. The method provides automated local adaptation of rupture complexity based on data information and does not assume globally constant resolution. This is an important quality since seismic data do not
Uncertainty of Areal Rainfall Estimation Using Point Measurements
McCarthy, D.; Dotto, C. B. S.; Sun, S.; Bertrand-Krajewski, J. L.; Deletic, A.
2014-12-01
The spatial variability of precipitation has a great influence on the quantity and quality of runoff water generated by hydrological processes. In practice, point rainfall measurements (e.g., rain gauges) are often used to represent areal rainfall in catchments. The spatial rainfall variability is difficult to capture precisely even with many rain gauges. Thus the rainfall uncertainty due to spatial variability should be taken into account in order to provide reliable rainfall-driven process modelling results. This study investigates the uncertainty of areal rainfall estimation due to rainfall spatial variability when point measurements are applied. The areal rainfall is usually estimated as a weighted sum of data from available point measurements. The expected error of areal rainfall estimates is 0 if the estimation is unbiased. The variance of the error between the real and estimated areal rainfall is evaluated to indicate the uncertainty of areal rainfall estimates. This error variance can be expressed as a function of variograms, which were originally applied in geostatistics to characterize spatial variables. The variogram can be evaluated using measurements from a dense rain gauge network. The areal rainfall errors are evaluated in two areas with distinct climate regimes and rainfall patterns: the Greater Lyon area in France and the Melbourne area in Australia. The variograms of the two areas are derived from 6-minute rainfall time series data from 2010 to 2013 and are then used to estimate uncertainties of areal rainfall represented by different numbers of point measurements in synthetic catchments of various sizes. The error variance of areal rainfall using one point measurement in the centre of a 1-km² catchment is 0.22 (mm/h)² in Lyon. When the point measurement is placed at one corner of the same-size catchment, the error variance increases to 0.82 (mm/h)² in Lyon. Results for Melbourne were similar but presented larger uncertainty. Results
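The error variance of an areal estimate built from a single gauge can be written as the extension variance 2·γ̄(x0, A) − γ̄(A, A), where γ̄ denotes the variogram averaged between the gauge location and the catchment, and over the catchment with itself. A discretised sketch, using an illustrative exponential variogram rather than the variograms fitted for Lyon or Melbourne:

```python
import math

def gamma(h, sill=0.9, corr_range=8.0):
    """Illustrative exponential variogram; h and range in km, gamma in (mm/h)^2."""
    return sill * (1.0 - math.exp(-h / corr_range))

def extension_variance(gauge, n=10, size=1.0):
    """Extension variance of estimating the areal mean rainfall of a
    size x size km catchment from one gauge at `gauge`:
        var = 2 * mean_gamma(gauge, A) - mean_gamma(A, A)
    The catchment A is discretised into an n x n grid of points."""
    step = size / n
    pts = [((i + 0.5) * step, (j + 0.5) * step)
           for i in range(n) for j in range(n)]
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    g_point_area = sum(gamma(dist(gauge, p)) for p in pts) / len(pts)
    g_area_area = sum(gamma(dist(p, q)) for p in pts for q in pts) / len(pts) ** 2
    return 2.0 * g_point_area - g_area_area

var_centre = extension_variance((0.5, 0.5))  # gauge in the catchment centre
var_corner = extension_variance((0.0, 0.0))  # gauge at a corner
```

As in the abstract, moving the gauge from the centre to a corner of the same catchment increases the error variance, since the gauge is on average farther from the points it represents.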
Evaluation of uncertainty of measurement for cellulosic fiber and ...
African Journals Online (AJOL)
DR OKE
To estimate the uncertainty of measurement, this study employs sisal fiber and an isotactic polypropylene matrix to prepare a composite for evaluation. Uncertainty of measurement was evaluated based on tensile test results for a composite material prepared from sisal fiber having undergone chemical modification ...
Sources of uncertainty in annual forest inventory estimates
Ronald E. McRoberts
2000-01-01
Although design and estimation aspects of annual forest inventories have begun to receive considerable attention within the forestry and natural resources communities, little attention has been devoted to identifying the sources of uncertainty inherent in these systems or to assessing the impact of those uncertainties on the total uncertainties of inventory estimates....
Gaussian process interpolation for uncertainty estimation in image registration.
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods.
Motion estimation under location uncertainty for turbulent fluid flows
Cai, Shengze; Mémin, Etienne; Dérian, Pierre; Xu, Chao
2018-01-01
In this paper, we propose a novel optical flow formulation for estimating two-dimensional velocity fields from an image sequence depicting the evolution of a passive scalar transported by a fluid flow. This motion estimator relies on a stochastic representation of the flow, allowing a notion of uncertainty in the flow measurement to be incorporated naturally. In this context, the Eulerian fluid flow velocity field is decomposed into two components: a large-scale motion field and a small-scale uncertainty component. We define the small-scale component as a random field. Subsequently, the data term of the optical flow formulation is based on a stochastic transport equation, derived from the formalism under location uncertainty proposed in Mémin (Geophys Astrophys Fluid Dyn 108(2):119-146, 2014) and Resseguier et al. (Geophys Astrophys Fluid Dyn 111(3):149-176, 2017a). In addition, a specific regularization term built from the assumption of constant kinetic energy involves the very same diffusion tensor as the one appearing in the data transport term. In contrast to classical motion estimators, this enables us to devise an optical flow method dedicated to fluid flows in which the regularization parameter now has a clear physical interpretation and can be easily estimated. Experimental evaluations are presented on both synthetic and real-world image sequences. Results and comparisons indicate very good performance of the proposed formulation for turbulent flow motion estimation.
Desportes, Charles; Drévillon, Marie; Drillet, Yann; Garric, Gilles; Parent, Laurent; Régnier, Charly; Masina, Simona; Storto, Andrea; Petterson, Drew; Wood, Richard; Balmaseda, Magdalena; Zuo, Hao
2017-04-01
Global ocean reanalyses are homogeneous 3D gridded descriptions of the physical state of the ocean spanning several decades, produced with a numerical ocean model constrained with data assimilation of satellite and in situ observations. The evaluation of global ocean reanalyses, and of how well they capture ocean variability, has progressed in recent years thanks to the CLIVAR/GSOP/GODAE Ocean Reanalyses Intercomparison Project ORA-IP (Balmaseda et al 2015). During the MyOcean project, several high resolution (1/4° horizontal grid) reanalyses based on NEMO but produced with different tunings and by different institutes were evaluated jointly using common validation guidelines (Masina et al, 2015). The Copernicus Marine Environment Monitoring Service CMEMS (marine.copernicus.eu) Global Monitoring and Forecasting Center now takes advantage of the diversity of ocean reanalyses currently developed with that same NEMO model grid (ORCA025 at 1/4°) to propose a multi-model ensemble product, whose spread allows uncertainties (error bars) to be estimated. In a number of regions, the ensemble mean may even provide a more reliable estimate than any individual reanalysis product. Four reanalyses have been selected to contribute to the project: GLORYS2V4 from Mercator Ocean (Fr), ORAS5 from ECMWF, FOAM/GloSea from Met Office (UK), and C-GLORS from CMCC (It). The four different time series of global ocean 3D monthly estimates have been post-processed to create the new product called GREP (Global Reanalysis Ensemble Product), covering the recent period during which altimetry observations are available: 1993-2015. Starting from April 20th 2017, the ensemble mean and standard deviation of the ensemble, as well as the four individual members for the period 1993-2015, are thus made available on a 1°×1° grid and monthly frequency. The time series will be extended by one year each year. In the presentation, we will describe the results of the scientific qualification of the
A novel workflow for seismic net pay estimation with uncertainty
Glinsky, Michael E.; Baptiste, Dale; Unaldi, Muhlis; Nagassar, Vishal
2016-01-01
This paper presents a novel workflow for seismic net pay estimation with uncertainty. It is demonstrated on the Cassra/Iris Field. The theory for the stochastic wavelet derivation (which estimates the seismic noise level along with the wavelet, time-to-depth mapping, and their uncertainties), the stochastic sparse spike inversion, and the net pay estimation (using secant areas) along with its uncertainty will be outlined. This includes benchmarking of this methodology on a synthetic model. A...
REDD+ emissions estimation and reporting: dealing with uncertainty
Pelletier, Johanne; Martin, Davy; Potvin, Catherine
2013-09-01
used to evaluate reference level and emission reductions would strengthen the credibility of the system by promoting accountability and transparency. To secure conservativeness and deal with uncertainty, we consider the need for further research using real data available to developing countries to test the applicability of conservative discounts including the trend uncertainty and other possible options that would allow real incentives and stimulate improvements over time. Finally, we argue that REDD+ result-based actions assessed on the basis of a dashboard of performance indicators, not only in ‘tonnes CO2 equ. per year’ might provide a more holistic approach, at least until better accuracy and certainty of forest carbon stocks emission and removal estimates to support a REDD+ policy can be reached.
Estimating real-time predictive hydrological uncertainty
Verkade, J.S.
2015-01-01
Flood early warning systems provide a potentially highly effective flood risk reduction measure. The effectiveness of early warning, however, is affected by forecasting uncertainty: the impossibility of knowing, in advance, the exact future state of hydrological systems. Early warning systems
Risk, Unexpected Uncertainty, and Estimation Uncertainty: Bayesian Learning in Unstable Settings
Payzan-LeNestour, Elise; Bossaerts, Peter
2011-01-01
Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how it affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating. PMID:21283774
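The parameter estimation uncertainty described above can be made concrete for a single Bernoulli bandit arm: a Bayesian learner maintains a Beta posterior over the payoff probability, and the posterior variance is the estimation uncertainty, which shrinks as outcomes accumulate. The prior and the reward sequence below are illustrative, not the task of the study:

```python
class BetaArm:
    """Bayesian estimate of one Bernoulli arm's payoff probability.
    Posterior is Beta(a, b); its variance is the 'estimation uncertainty'."""
    def __init__(self):
        self.a, self.b = 1.0, 1.0  # uniform Beta(1, 1) prior

    def update(self, reward):
        """Conjugate update for a 0/1 reward."""
        self.a += reward
        self.b += 1 - reward

    def mean(self):
        return self.a / (self.a + self.b)

    def var(self):
        n = self.a + self.b
        return self.a * self.b / (n * n * (n + 1))

arm = BetaArm()
v0 = arm.var()                     # uncertainty before any observation
for r in [1, 0, 1, 1, 0, 1]:       # illustrative outcomes
    arm.update(r)
```

An "unexpected uncertainty" signal (a sudden jump in the arm's outcome probabilities) would, in this picture, call for resetting or inflating the posterior rather than continuing to shrink it.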
Measurement Uncertainty Estimation of a Robust Photometer Circuit
Directory of Open Access Journals (Sweden)
Jesús de Vicente
2009-04-01
Full Text Available In this paper the uncertainty of a robust photometer circuit (RPC was estimated. Here, the RPC was considered as a measurement system, having input quantities that were inexactly known, and output quantities that consequently were also inexactly known. Input quantities represent information obtained from calibration certificates, specifications of manufacturers, and tabulated data. Output quantities describe the transfer function of the electrical part of the photodiode. Input quantities were the electronic components of the RPC, the parameters of the model of the photodiode and its sensitivity at 670 nm. The output quantities were the coefficients of both numerator and denominator of the closed-loop transfer function of the RPC. As an example, the gain and phase shift of the RPC versus frequency were evaluated from the transfer function, with their uncertainties and correlation coefficient. Results confirm the robustness of the photodiode design.
Traceability and uncertainty estimation in coordinate metrology
DEFF Research Database (Denmark)
Hansen, Hans Nørgaard; Savio, Enrico; De Chiffre, Leonardo
2001-01-01
are required. Depending on the requirements for uncertainty level, different approaches may be adopted to achieve traceability. Especially in the case of complex measurement situations and workpieces the procedures are not trivial. This paper discusses the establishment of traceability in coordinate metrology...
Uncertainty in Forest Net Present Value Estimations
Directory of Open Access Journals (Sweden)
Ilona Pietilä
2010-09-01
Full Text Available Uncertainty related to inventory data, growth models and timber price fluctuation was investigated in the assessment of forest property net present value (NPV). The degree of uncertainty associated with inventory data was obtained from previous area-based airborne laser scanning (ALS) inventory studies. The study was performed, applying the Monte Carlo simulation, using stand-level growth and yield projection models and three alternative rates of interest (3, 4 and 5%). Timber price fluctuation was portrayed with geometric mean-reverting (GMR) price models. The analysis was conducted for four alternative forest properties having varying compartment structures: (A) a property having an even development class distribution, (B) sapling stands, (C) young thinning stands, and (D) mature stands. Simulations resulted in predicted yield value (predicted NPV) distributions at both stand and property levels. Our results showed that ALS inventory errors were the most prominent source of uncertainty, leading to a 5.1–7.5% relative deviation of property-level NPV when an interest rate of 3% was applied. Interestingly, ALS inventory led to significant biases at the property level, ranging from 8.9% to 14.1% (3% interest rate). ALS inventory-based bias was the most significant in mature stand properties. Errors related to the growth predictions led to a relative standard deviation in NPV varying from 1.5% to 4.1%. Growth model-related uncertainty was most significant in sapling stand properties. Timber price fluctuation caused relative standard deviations ranging from 3.4% to 6.4% (3% interest rate). The combined relative variation caused by inventory errors, growth model errors and timber price fluctuation varied, depending on the property type and applied rates of interest, from 6.4% to 12.6%. By applying the methodology described here, one may take into account the effects of various uncertainty factors in the prediction of forest yield value and to supply the
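A geometric mean-reverting (GMR) price process of the kind used in such Monte Carlo NPV simulations can be sketched on the log price. All parameter values below (price level, reversion speed, volatility) are illustrative, not those estimated in the study:

```python
import math
import random

def simulate_gmr(p0=55.0, p_bar=55.0, eta=0.3, sigma=0.2,
                 years=30, steps_per_year=12, seed=7):
    """One path of a geometric mean-reverting (GMR) timber price model,
    simulated on the log price x = ln P:
        x[t+1] = x[t] + eta * (ln p_bar - x[t]) * dt + sigma * sqrt(dt) * eps
    eta is the mean-reversion speed, sigma the volatility (per sqrt(year))."""
    rng = random.Random(seed)
    dt = 1.0 / steps_per_year
    x = math.log(p0)
    path = [p0]
    for _ in range(years * steps_per_year):
        x += eta * (math.log(p_bar) - x) * dt \
             + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(math.exp(x))
    return path

# Terminal log prices of many paths fluctuate around ln(p_bar): prices wander
# but are pulled back toward the long-run level instead of drifting away.
finals = [math.log(simulate_gmr(seed=s)[-1]) for s in range(500)]
mean_final = sum(finals) / len(finals)
```

Feeding such paths into stand-level yield projections, one NPV per path, is what produces the NPV distributions whose relative deviations the abstract reports.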
Parsekian, A D; Dlubac, K; Grunewald, E; Butler, J J; Knight, R; Walsh, D O
2015-01-01
Characterization of hydraulic conductivity (K) in aquifers is critical for evaluation, management, and remediation of groundwater resources. While estimates of K have traditionally been obtained using hydraulic tests over discrete intervals in wells, geophysical measurements are emerging as an alternative way to estimate this parameter. Nuclear magnetic resonance (NMR) logging, a technology once largely applied to characterization of deep consolidated rock petroleum reservoirs, is beginning to see use in near-surface unconsolidated aquifers. Using a well-known rock physics relationship, the Schlumberger-Doll Research (SDR) equation, K and porosity can be estimated from NMR water content and relaxation time. Calibration of SDR parameters is necessary for this transformation because NMR relaxation properties are, in part, a function of magnetic mineralization and pore space geometry, which are locally variable quantities. Here, we present a statistically based method for calibrating SDR parameters that establishes a range for the estimated parameters and simultaneously estimates the uncertainty of the resulting K values. We used co-located logging NMR and direct K measurements in an unconsolidated fluvial aquifer in Lawrence, Kansas, USA to demonstrate that K can be estimated using logging NMR to a similar level of uncertainty as with traditional direct hydraulic measurements in unconsolidated sediments under field conditions. Results of this study provide a benchmark for future calibrations of NMR to obtain K in unconsolidated sediments and suggest a method for evaluating uncertainty in both K and SDR parameter values. © 2014, National Ground Water Association.
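The SDR transform itself is a one-line power law. A minimal sketch with commonly used exponents (m = 4, n = 2) and an arbitrary prefactor standing in for the site-calibrated parameters the paper estimates:

```python
def sdr_hydraulic_conductivity(phi, t2ml, b=4.6e-3, m=4, n=2):
    """Schlumberger-Doll Research (SDR) style estimate of hydraulic
    conductivity:
        K = b * phi**m * T2ML**n
    phi: NMR water content (fractional porosity); t2ml: log-mean relaxation
    time (s).  b, m, n must be calibrated locally against direct K
    measurements; the defaults here are purely illustrative."""
    return b * phi ** m * t2ml ** n

# Longer relaxation times (larger pores) imply higher K at equal porosity.
k_coarse = sdr_hydraulic_conductivity(0.30, 0.25)
k_fine = sdr_hydraulic_conductivity(0.30, 0.05)
```

The calibration problem the abstract addresses is precisely choosing b (and possibly m, n) so that such K estimates, with quantified uncertainty, match co-located hydraulic tests.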
Uncertainty in ERP Effort Estimation: A Challenge or an Asset?
Daneva, Maia; Wettflower, Seanna; de Boer, Sonia; Dumke, R.; Braungarten, B.; Bueren, G.; Abran, A.; Cuadrado-Gallego, J.
2008-01-01
Traditionally, software measurement literature considers the uncertainty of cost drivers in project estimation as a challenge and treats it as such. This paper develops the position that uncertainty can be seen as an asset. It draws on results of a case study in which we replicated an approach to
Evaluation of uncertainty in the Norwegian emission inventory
Energy Technology Data Exchange (ETDEWEB)
Rypdal, Kristin
1999-10-01
The uncertainty in estimating emissions is systematically examined for all source categories in the IPCC standard report. The uncertainty in the values is estimated quantitatively. This indicates an uncertainty in the yearly emissions of greenhouse gases in Norway of ±10-20%. Methane emissions from waste deposits, nitrous oxide from agriculture and perfluorocarbons from aluminium production contribute the major uncertainties in the greenhouse gas account. The uncertainty in the trend (percentage change from a base year to a final year) is estimated by means of sensitivity analysis. The analysis indicates that a percentage reduction or increase in greenhouse gas emissions (expressed in CO2 equivalents) compared to a base year is relatively unaffected by errors in level and trend for the individual greenhouse gases. Exceptions exist for cases where the emissions of a greenhouse gas, or the emissions from a single source, show a trend substantially different from that of the total emissions. A complete evaluation indicates that the uncertainty in the trend is more than ±1 percentage point for the period 1990 to 2010. The main routines used for avoiding errors in the account are assumed to be comparison with earlier estimates, comparison with corresponding estimates from other countries, and comparison of different calculation methods. 5 figs., 52 tabs., 12 refs.
Some methods of estimating uncertainty in accident reconstruction
Batista, Milan
2011-01-01
In the paper four methods for estimating uncertainty in accident reconstruction are discussed: total differential method, extreme values method, Gauss statistical method, and Monte Carlo simulation method. The methods are described and the program solutions are given.
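The four approaches can be contrasted on a standard reconstruction formula, the pre-braking speed from a skid mark, v = sqrt(2·mu·g·s). The sketch below implements all four; the input values and their half-widths are assumed for illustration only:

```python
import math
import random

G = 9.81  # gravitational acceleration, m/s^2

def braking_speed(mu, s):
    """Pre-braking speed from skid-mark length s (m) and friction
    coefficient mu: v = sqrt(2 * mu * g * s)."""
    return math.sqrt(2.0 * mu * G * s)

mu0, d_mu = 0.7, 0.07  # friction coefficient and half-width (assumed)
s0, d_s = 35.0, 3.5    # skid length (m) and half-width (assumed)
v0 = braking_speed(mu0, s0)

# 1. Total differential: sum of absolute first-order contributions.
dv_total = v0 / 2.0 * (d_mu / mu0 + d_s / s0)

# 2. Gauss statistical method: first-order contributions in quadrature.
dv_gauss = v0 / 2.0 * math.hypot(d_mu / mu0, d_s / s0)

# 3. Extreme values: half the spread between worst-case input combinations.
dv_extreme = (braking_speed(mu0 + d_mu, s0 + d_s)
              - braking_speed(mu0 - d_mu, s0 - d_s)) / 2.0

# 4. Monte Carlo: sample the inputs, report the standard deviation of v.
rng = random.Random(0)
vs = [braking_speed(rng.uniform(mu0 - d_mu, mu0 + d_mu),
                    rng.uniform(s0 - d_s, s0 + d_s)) for _ in range(20000)]
m = sum(vs) / len(vs)
dv_mc = math.sqrt(sum((v - m) ** 2 for v in vs) / (len(vs) - 1))
```

As expected, the quadrature (Gauss) and Monte Carlo figures are smaller than the worst-case total-differential and extreme-value bounds, since the latter assume all input errors act in the same direction.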
Costs of sea dikes - regressions and uncertainty estimates
National Research Council Canada - National Science Library
Lenk, Stephan; Rybski, Diego; Heidrich, Oliver; Dawson, Richard J; Kropp, Jürgen P
2017-01-01
... – probabilistic functions of dike costs. Data from Canada and the Netherlands are analysed and related to published studies from the US, UK, and Vietnam in order to provide a reproducible estimate of typical sea dike costs and their uncertainty...
On the Estimation of Systematic Uncertainties of Star Formation Histories
Energy Technology Data Exchange (ETDEWEB)
Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ 85734 (United States)
2012-05-20
In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects of this are most notable in the case of shallow photometry, for which SFH measurements rely on evolved stars.
Triangular and Trapezoidal Fuzzy State Estimation with Uncertainty on Measurements
Directory of Open Access Journals (Sweden)
Mohammad Sadeghi Sarcheshmah
2012-01-01
Full Text Available In this paper, a new method for uncertainty analysis in fuzzy state estimation is proposed. The uncertainty is expressed in measurements. Uncertainties in measurements are modelled with different fuzzy membership functions (triangular and trapezoidal). To find the fuzzy distribution of any state variable, the problem is formulated as a constrained linear programming (LP) optimization. The viability of the proposed method is verified against results obtained from the weighted least squares (WLS) and the fuzzy state estimation (FSE) methods on a 6-bus system and on the IEEE 14-bus and 30-bus systems.
Estimation of measurement uncertainty arising from manual sampling of fuels.
Theodorou, Dimitrios; Liapis, Nikolaos; Zannikos, Fanourios
2013-02-15
Sampling is an important part of any measurement process and is therefore recognized as an important contributor to the measurement uncertainty. A reliable estimation of the uncertainty arising from sampling of fuels leads to a better control of risks associated with decisions concerning whether product specifications are met or not. The present work describes and compares the results of three empirical statistical methodologies (classical ANOVA, robust ANOVA and range statistics) using data from a balanced experimental design, which includes duplicate samples analyzed in duplicate from 104 sampling targets (petroleum retail stations). These methodologies are used for the estimation of the uncertainty arising from the manual sampling of fuel (automotive diesel) and the subsequent sulfur mass content determination. The results of the three methodologies differ statistically, with the expanded uncertainty of sampling being in the range of 0.34-0.40 mg kg⁻¹, and the relative expanded uncertainty lying in the range of 4.8-5.1%, depending on the methodology used. The estimate of robust ANOVA (sampling expanded uncertainty of 0.34 mg kg⁻¹, or 4.8% in relative terms) is considered more reliable, because of the presence of outliers within the 104 datasets used for the calculations. Robust ANOVA, in contrast to classical ANOVA and range statistics, accommodates outlying values, lessening their effects on the produced estimates. The results of this work also show that, in the case of manual sampling of fuels, the main contributor to the whole measurement uncertainty is the analytical measurement uncertainty, with the sampling uncertainty accounting for only 29% of the total measurement uncertainty. Copyright © 2012 Elsevier B.V. All rights reserved.
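The classical-ANOVA variant of the balanced duplicate design (two samples per target, each analysed twice) can be sketched as follows. The synthetic data, generated with known sampling and analytical standard deviations, only checks that the estimator recovers them; it is not the fuel dataset of the study:

```python
import random

def duplicate_anova(data):
    """Classical ANOVA for the balanced duplicate design.
    data[i][j][k] = result for target i, sample j (0/1), analysis k (0/1).
    Returns (s_sampling, s_analytical)."""
    n = len(data)
    # Analytical variance from within-sample analysis duplicates:
    # E[(x1 - x2)^2 / 2] = var_anal for repeat analyses of one sample.
    va = sum((x[j][0] - x[j][1]) ** 2 / 2.0
             for x in data for j in (0, 1)) / (2 * n)
    # Between-sample variance of the sample means contains va / 2,
    # which must be subtracted to isolate the sampling variance.
    means = [[(x[j][0] + x[j][1]) / 2.0 for j in (0, 1)] for x in data]
    vs = sum((m[0] - m[1]) ** 2 / 2.0 for m in means) / n - va / 2.0
    return max(vs, 0.0) ** 0.5, va ** 0.5

# Synthetic check with known s_samp = 0.30 and s_anal = 0.20 (illustrative).
rng = random.Random(3)
data = []
for _ in range(500):
    target = rng.gauss(10.0, 1.0)
    samples = [target + rng.gauss(0.0, 0.30) for _ in (0, 1)]
    data.append([[s + rng.gauss(0.0, 0.20) for _ in (0, 1)] for s in samples])
s_samp, s_anal = duplicate_anova(data)
```

Robust ANOVA, preferred in the abstract, replaces these plain means of squared differences with iteratively down-weighted versions so that a few outlying duplicates do not inflate the estimates.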
The duplicate method of uncertainty estimation: are eight targets enough?
Lyn, Jennifer A; Ramsey, Michael H; Coad, D Stephen; Damant, Andrew P; Wood, Roger; Boon, Katy A
2007-11-01
This paper presents methods for calculating confidence intervals for estimates of sampling uncertainty (s(samp)) and analytical uncertainty (s(anal)) using the chi-squared distribution. These uncertainty estimates are derived from application of the duplicate method, which recommends a minimum of eight duplicate samples. The methods are applied to two case studies: moisture in butter and nitrate in lettuce. Use of the recommended minimum of eight duplicate samples is justified for both case studies, as the confidence intervals calculated using more than eight duplicates did not show any appreciable reduction in width. It is considered that eight duplicates provide estimates of uncertainty that are both acceptably accurate and cost effective.
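A chi-squared confidence interval for a standard-deviation estimate can be written down directly. The sketch below uses the Wilson-Hilferty approximation to chi-squared quantiles so that only the standard library is needed; `sd_confidence_interval` is an illustrative name, not from the paper:

```python
from statistics import NormalDist

def sd_confidence_interval(s, dof, level=0.95):
    """Approximate confidence interval for a standard-deviation estimate s
    with `dof` degrees of freedom, based on the chi-squared distribution:
    [s*sqrt(dof/chi2_{1-a/2}), s*sqrt(dof/chi2_{a/2})]."""
    def chi2_ppf(p, nu):
        # Wilson-Hilferty approximation to the chi-squared quantile
        z = NormalDist().inv_cdf(p)
        return nu * (1.0 - 2.0 / (9.0 * nu) + z * (2.0 / (9.0 * nu)) ** 0.5) ** 3
    alpha = 1.0 - level
    lo = s * (dof / chi2_ppf(1.0 - alpha / 2.0, dof)) ** 0.5
    hi = s * (dof / chi2_ppf(alpha / 2.0, dof)) ** 0.5
    return lo, hi
```

With eight duplicates (roughly eight degrees of freedom) the interval for s is markedly asymmetric, which is exactly why the adequacy of n = 8 is worth checking.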
DEFF Research Database (Denmark)
Odgaard, Peter Fogh; Stoustrup, Jakob; Mataji, B.
2007-01-01
Predicting the performance of large scale plants can be difficult due to model uncertainties etc., meaning that one can be almost certain that the prediction will diverge from the plant performance with time. In this paper output multiplicative uncertainty models are used as dynamical models of the prediction error. These proposed dynamical uncertainty models result in an upper and lower bound on the predicted performance of the plant. The dynamical uncertainty models are used to estimate the uncertainty of the predicted performance of a coal-fired power plant. The proposed scheme, which uses dynamical uncertainty models, is applied to two different sets of measured plant data. The computed uncertainty bounds cover the measured plant output, while the nominal prediction is outside these uncertainty bounds for some samples in these examples.
Uncertainty estimation for map-based analyses
Ronald E. McRoberts; Mark A. Hatfield; Susan J. Crocker
2010-01-01
Traditionally, natural resource managers have asked the question, "How much?" and have received sample-based estimates of resource totals or means. Increasingly, however, the same managers are now asking the additional question, "Where?" and are expecting spatially explicit answers in the form of maps. Recent development of natural resource databases, access to...
The effects of communicating uncertainty in quantitative health risk estimates.
Longman, Thea; Turner, Robin M; King, Madeleine; McCaffery, Kirsten J
2012-11-01
To examine the effects of communicating uncertainty in quantitative health risk estimates on participants' understanding, risk perception and perceived credibility of the risk information source, 120 first-year psychology students were given a hypothetical health-care scenario, with source of risk information (clinician, pharmaceutical company) varied between subjects and uncertainty (point, small-range and large-range risk estimate format) varied within subjects. Communicating uncertainty as a range, whether small or large, reduced accurate understanding, and a large range increased risk perceptions compared with a point estimate. It also reduced the perceived credibility of the information source, though for the clinician this was only the case when a large range was presented. The findings suggest that even for highly educated adults, communicating uncertainty as a range risk estimate has the potential to negatively affect understanding, increase risk perceptions and decrease perceived credibility. Communicating uncertainty in risk using a numeric range should therefore be carefully considered by health-care providers. More research is needed to develop alternative strategies to effectively communicate the uncertainty in health risks to consumers. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty
DEFF Research Database (Denmark)
Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens
and property prediction models. While use of experimentally measured values for the needed properties is desirable in process design, the experimental data for the compounds of interest may not be available in many cases. Therefore, development of efficient and reliable property prediction methods and tools...... the results of uncertainty analysis to predict the uncertainties in process design. For parameter estimation, large data-sets of experimentally measured property values for a wide range of pure compounds are taken from the CAPEC database. A classical frequentist approach, i.e. the least-squares method, is adopted......, critical temperature, acentric factor etc. In such cases, accurate property values along with uncertainty estimates are needed to perform sensitivity analysis and quantify the effects of these uncertainties on the process design. The objective of this work is to develop a systematic methodology to provide......
Uncertainty analysis for estimates of the first indirect aerosol effect
Directory of Open Access Journals (Sweden)
Y. Chen
2005-01-01
Full Text Available The IPCC has stressed the importance of producing unbiased estimates of the uncertainty in indirect aerosol forcing, in order to give policy makers as well as research managers an understanding of the most important aspects of climate change that require refinement. In this study, we use 3-D meteorological fields together with a radiative transfer model to examine the spatially-resolved uncertainty in estimates of the first indirect aerosol forcing. The global mean forcing calculated in the reference case is -1.30 W m-2. Uncertainties in the indirect forcing associated with aerosol and aerosol precursor emissions, aerosol mass concentrations from different chemical transport models, aerosol size distributions, the cloud droplet parameterization, the representation of the in-cloud updraft velocity, the relationship between effective radius and volume mean radius, cloud liquid water content, cloud fraction, and the change in the cloud drop single scattering albedo due to the presence of black carbon are calculated. The aerosol burden calculated by chemical transport models and the cloud fraction are found to be the most important sources of uncertainty. Variations in these parameters cause an underestimation or overestimation of the indirect forcing compared to the base case by more than 0.6 W m-2. Uncertainties associated with aerosol and aerosol precursor emissions, with the representation of the aerosol size distribution (including the representation of the pre-industrial size distribution), and with the representation of the cloud droplet spectral dispersion effect cause uncertainties in the global mean forcing of 0.2-0.6 W m-2. There are significant regional differences in the uncertainty associated with the first indirect forcing, with the largest uncertainties in industrial regions (North America, Europe, East Asia) followed by those in the major biomass burning regions.
Ariza, Adriana Alexandra Aparicio; Ayala Blanco, Elizabeth; García Sánchez, Luis Eduardo; García Sánchez, Carlos Eduardo
2015-06-01
Natural gas is a mixture that contains hydrocarbons and other compounds, such as CO2 and N2. Natural gas composition is commonly measured by gas chromatography, and this measurement is important for the calculation of some thermodynamic properties that determine its commercial value. The estimation of uncertainty in chromatographic measurement is essential for an adequate presentation of the results and a necessary tool for supporting decision making. Various approaches have been proposed for the uncertainty estimation in chromatographic measurement. The present work is an evaluation of three approaches to uncertainty estimation, two of which (the Guide to the Expression of Uncertainty in Measurement (GUM) method and the prediction method) were compared with the Monte Carlo method, which has a wider scope of application. The aforementioned methods for uncertainty estimation were applied to gas chromatography assays of three different samples of natural gas. The results indicated that the prediction method and the GUM method (in the simple version used) are not adequate to calculate the uncertainty in chromatography measurement, because the uncertainty estimations obtained by those approaches are in general lower than those given by the Monte Carlo method. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
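The comparison between a first-order (GUM-style) propagation and the Monte Carlo method can be illustrated generically. The sketch below, with the hypothetical helper `gum_vs_monte_carlo`, assumes independent normal inputs and is not the authors' implementation:

```python
import numpy as np

def gum_vs_monte_carlo(f, x_mean, x_std, n=50_000, seed=1):
    """Compare a first-order (GUM-style) uncertainty with a Monte Carlo
    estimate for y = f(x), assuming independent normal inputs."""
    rng = np.random.default_rng(seed)
    x_mean = np.asarray(x_mean, float)
    x_std = np.asarray(x_std, float)
    # GUM: u_y^2 = sum_i (df/dx_i)^2 * u_i^2; derivatives by central difference
    grad = np.empty_like(x_mean)
    for i in range(x_mean.size):
        h = 1e-6 * max(abs(x_mean[i]), 1.0)
        xp, xm = x_mean.copy(), x_mean.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2.0 * h)
    u_gum = float(np.sqrt(np.sum((grad * x_std) ** 2)))
    # Monte Carlo: propagate random draws through the model
    draws = rng.normal(x_mean, x_std, size=(n, x_mean.size))
    u_mc = float(np.std([f(d) for d in draws], ddof=1))
    return u_gum, u_mc
```

For nearly linear models the two agree closely; discrepancies such as those reported above arise when the measurement model or the input distributions violate the first-order assumptions.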
Ćelap, Ivana; Vukasović, Ines; Juričić, Gordana; Šimundić, Ana-Maria
2017-10-15
The International vocabulary of metrology - Basic and general concepts and associated terms (VIM3, 2.26 measurement uncertainty, JCGM 200:2012) defines uncertainty of measurement as a non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information obtained from performing the measurement. The Clinical and Laboratory Standards Institute (CLSI) has published a very detailed guideline with a description of sources contributing to measurement uncertainty as well as different approaches for the calculation (Expression of measurement uncertainty in laboratory medicine; Approved Guideline, CLSI C51-A 2012). Many other national and international recommendations and original scientific papers about measurement uncertainty estimation have been published. In Croatia, the estimation of measurement uncertainty is obligatory for accredited medical laboratories. However, since national recommendations are currently not available, each of these laboratories uses a different approach in measurement uncertainty estimation. The main purpose of this document is to describe the minimal requirements for measurement uncertainty estimation. In this way, it will contribute to the harmonization of measurement uncertainty estimation, evaluation and reporting across laboratories in Croatia. This recommendation is issued by the joint Working group for uncertainty of measurement of the Croatian Society for Medical Biochemistry and Laboratory Medicine and the Croatian Chamber of Medical Biochemists. The document is based mainly on the recommendations of the Australasian Association of Clinical Biochemists (AACB) Uncertainty of Measurement Working Group and is intended for all medical biochemistry laboratories in Croatia.
Uncertainties in Transport Project Evaluation: Editorial
DEFF Research Database (Denmark)
Salling, Kim Bang; Nielsen, Otto Anker
2015-01-01
The following special issue of the European Journal of Transport Infrastructure Research (EJTIR), containing five scientific papers, is the result of an open call for papers at the 1st International Conference on Uncertainties in Transport Project Evaluation that took place at the Technical University of Denmark, September 2013. The conference was held under the auspices of the project ‘Uncertainties in transport project evaluation’ (UNITE), a research project (2009-2014) financed by the Danish Strategic Research Agency. UNITE was coordinated by the Department of Transport......
Uncertainty Model For Quantitative Precipitation Estimation Using Weather Radars
Directory of Open Access Journals (Sweden)
Ernesto Gómez Vargas
2016-06-01
Full Text Available This paper introduces an uncertainty model for the quantitative estimation of precipitation using weather radars. The model considers various key aspects associated with radar calibration, attenuation, and the trade-off between accuracy and radar coverage. An S-band radar case study is presented to illustrate particular fractional-uncertainty calculations obtained to adjust various typical radar-calibration elements such as the antenna, transmitter, receiver, and some other general elements included in the radar equation. This paper is based on the "Guide to the Expression of Uncertainty in Measurement", and the results show that the fractional uncertainty calculated by the model was 40% for the reflectivity and 30% for the precipitation using the Marshall-Palmer Z-R relationship.
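The last step above rests on the Marshall-Palmer relationship Z = a R^b (a = 200, b = 1.6), which ties the fractional uncertainties of reflectivity and rain rate together to first order. The sketch below is a simplified illustration with hypothetical function names, not the paper's full uncertainty budget:

```python
import math

def rain_rate_mm_per_h(dbz, a=200.0, b=1.6):
    """Invert the Marshall-Palmer relationship Z = a * R**b,
    with Z (mm^6/m^3) obtained from reflectivity in dBZ."""
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)

def rain_fractional_uncertainty(frac_u_z, b=1.6):
    """First-order propagation: since R = (Z/a)**(1/b),
    u(R)/R = (1/b) * u(Z)/Z."""
    return frac_u_z / b
```

Note that first-order propagation of the 40% reflectivity uncertainty alone gives 40%/1.6 = 25%; the paper's full budget, which includes further contributions, arrives at 30% for precipitation.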
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig
2015-04-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum-mean-squared-error (LMMSE) estimator, when the elements of x are statistically white.
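The regularization idea can be sketched with a plain ridge-regularized LS solver. The BDU-ILS iteration for choosing the regularization parameter is more involved, so the fixed-parameter version below is only a schematic (`regularized_ls` is a hypothetical name, not the authors' code):

```python
import numpy as np

def regularized_ls(A, y, lam):
    """Ridge-regularized least squares: x_hat = (A^T A + lam*I)^{-1} A^T y.
    Setting lam = 0 recovers the ordinary LS estimator."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

At low SNR with an ill-conditioned measurement matrix, even a crude fixed regularization parameter can reduce the MSE well below that of plain LS, which is the effect the BDU iteration exploits systematically.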
A bootstrap method for estimating uncertainty of water quality trends
Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura
2015-01-01
Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R package.
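The core bootstrap idea behind quantifying trend uncertainty can be sketched as a percentile bootstrap of an ordinary linear slope. This is not WBT itself, which resamples within the WRTDS framework; `bootstrap_trend_ci` is an illustrative stand-in:

```python
import numpy as np

def bootstrap_trend_ci(t, y, n_boot=2000, level=0.90, seed=7):
    """Percentile-bootstrap confidence interval for a linear trend slope:
    resample (t, y) pairs with replacement and refit the slope."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    n = t.size
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample with replacement
        slopes[b] = np.polyfit(t[idx], y[idx], 1)[0]
    lo, hi = np.quantile(slopes, [(1.0 - level) / 2.0, (1.0 + level) / 2.0])
    return float(lo), float(hi)
```

A trend is then reported as "likely upward" when the whole interval lies above zero, which is the style of statement WBT is designed to support.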
Estimation of a multivariate mean under model selection uncertainty
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2014-05-01
Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
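The James-Stein estimator referenced above has a compact closed form. The sketch below implements the positive-part variant for a single observation of a p-dimensional normal mean with known variance:

```python
import numpy as np

def james_stein(y, sigma2=1.0):
    """Positive-part James-Stein estimate of a p-dimensional normal mean
    (p >= 3) from a single observation y ~ N(theta, sigma2 * I).
    Shrinks y toward the origin by a data-dependent factor."""
    y = np.asarray(y, float)
    p = y.size
    assert p >= 3, "James-Stein shrinkage requires at least 3 dimensions"
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(y @ y))
    return shrink * y
```

Observations with a large norm are barely shrunk, while small-norm observations are pulled strongly toward zero; this shrinkage is the mechanism the model-averaging proposal builds on.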
Directory of Open Access Journals (Sweden)
Yixin Wen
2016-11-01
Full Text Available Snow contributes to regional and global water budgets, and is of critical importance to water resources management and our society. Along with advancement in remote sensing tools and techniques to retrieve snowfall, verification and refinement of these estimates need to be performed using ground-validation datasets. A comprehensive evaluation of the Multi-Radar/Multi-Sensor (MRMS) snowfall products and the Integrated Multi-satellitE Retrievals for GPM (IMERG; GPM: Global Precipitation Measurement) precipitation products is conducted using the Snow Telemetry (SNOTEL) daily precipitation and Snow Water Equivalent (SWE) datasets. Severe underestimations are found in both the radar and satellite products. Comparisons are conducted as functions of air temperature, snowfall intensity, and radar beam height, in hopes of resolving the discrepancies between measurements by remote sensing and gauge, and ultimately developing better snowfall retrieval algorithms in the future.
Considerations for interpreting probabilistic estimates of uncertainty of forest carbon
James E. Smith; Linda S. Heath
2000-01-01
Quantitative estimates of carbon inventories are needed as part of nationwide attempts to reduce net release of greenhouse gases and the associated climate forcing. Naturally, an appreciable amount of uncertainty is inherent in such large-scale assessments, especially since both science and policy issues are still evolving. Decision makers need an idea of the...
Directory of Open Access Journals (Sweden)
Patel Kamlesh
2015-01-01
Full Text Available In this paper, the effects of the input quantity representations in linear and complex forms are analyzed to estimate mismatch uncertainty separately for one-port and two-port components. The mismatch uncertainties in power and attenuation measurements are evaluated for the direct, ratio and substitution techniques with the use of a vector network analyzer system in the range of 1 to 18 GHz. The estimated mismatch uncertainties were compared for the same device under test, and these values verified that the evaluation depends on the representations of the input quantities. In power measurements, the mismatch uncertainty is reduced when evaluated from the voltage standing wave ratio or reflection-coefficient magnitudes rather than from the complex reflection coefficients. The mismatch uncertainty in attenuation measurements is found to be higher, and to increase linearly, when estimated from linear magnitude values rather than from the S-parameters of the attenuator. Thus, in practice, the mismatch uncertainty is estimated more accurately using quantities measured in the same representation as the measured quantity.
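The standard mismatch-factor bounds make the difference between magnitude-only and complex-valued inputs concrete. A minimal sketch with hypothetical function names, using the usual power-transfer mismatch factor |1 - Γ_G Γ_L|²:

```python
def gamma_from_vswr(s):
    """Reflection-coefficient magnitude from a VSWR value."""
    return (s - 1.0) / (s + 1.0)

def mismatch_bounds_mag(gg, gl):
    """Power-transfer mismatch-factor bounds when only the magnitudes
    |Gamma_G| = gg and |Gamma_L| = gl are known: the phase is free, so
    the factor lies between (1 - gg*gl)**2 and (1 + gg*gl)**2."""
    return (1.0 - gg * gl) ** 2, (1.0 + gg * gl) ** 2

def mismatch_exact(gamma_g, gamma_l):
    """Exact mismatch factor |1 - Gamma_G * Gamma_L|**2 computed from
    the complex reflection coefficients."""
    return abs(1.0 - gamma_g * gamma_l) ** 2
```

Magnitude-only data force a worst-case interval over the unknown phases, whereas complex S-parameter data pin the factor down exactly; this is the representation effect the paper quantifies.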
Effect of Uncertainties in Physical Property Estimates on Process Design - Sensitivity Analysis
DEFF Research Database (Denmark)
Hukkerikar, Amol; Jones, Mark Nicholas; Sin, Gürkan
can arise from the experiments themselves or from the property models employed. It is important to consider the effect of these uncertainties on the process design in order to assess the quality and reliability of the final design. The main objective of this work is to develop a systematic methodology...... analysis was performed to evaluate the effect of these uncertainties on the process design. The developed methodology was applied to evaluate the effect of uncertainties in the property estimates on the design of different unit operations such as extractive distillation, short path evaporator, equilibrium......, the operating conditions, and the choice of the property prediction models, the input uncertainties resulted in significant uncertainties in the final design. The developed methodology was able to: (i) assess the quality of the final design; (ii) identify pure component and mixture properties of critical importance......
Variations of China's emission estimates: response to uncertainties in energy statistics
Hong, Chaopeng; Zhang, Qiang; He, Kebin; Guan, Dabo; Li, Meng; Liu, Fei; Zheng, Bo
2017-01-01
The accuracy of China's energy statistics is of great concern because it contributes greatly to the uncertainties in estimates of global emissions. This study attempts to improve the understanding of uncertainties in China's energy statistics and evaluate their impacts on China's emissions during the period of 1990-2013. We employed the Multi-resolution Emission Inventory for China (MEIC) model to calculate China's emissions based on different official data sets of energy statistics using the same emission factors. We found that the apparent uncertainties (maximum discrepancy) in China's energy consumption increased from 2004 to 2012, reaching a maximum of 646 Mtce (million tons of coal equivalent) in 2011, and that coal dominated these uncertainties. The discrepancies between the national and provincial energy statistics were reduced after the three economic censuses conducted during this period, and converging uncertainties were found in 2013. The emissions calculated from the provincial energy statistics are generally higher than those calculated from the national energy statistics, and the apparent uncertainty ratio (the ratio of the maximum discrepancy to the mean value) owing to energy uncertainties in 2012 took values of 30.0, 16.4, 7.7, 9.2 and 15.6% for SO2, NOx, VOC, PM2.5 and CO2 emissions, respectively. SO2 emissions are most sensitive to energy uncertainties because of the high contributions from industrial coal combustion. The calculated emission trends are also greatly affected by energy uncertainties - from 1996 to 2012, CO2 and NOx emissions, respectively, increased by 191 and 197 % according to the provincial energy statistics but by only 145 and 139 % as determined from the original national energy statistics. The energy-induced emission uncertainties for some species such as SO2 and NOx are comparable to total uncertainties of emissions as estimated by previous studies, indicating that variations in energy consumption could be an important source of uncertainty.
Directory of Open Access Journals (Sweden)
Vicari Kristin J
2012-04-01
Full Text Available Abstract Background Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE) model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP) estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation) as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS) in the biomass slurries. Conclusions We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements. This result sets a lower bound on the error bars of the TE model predictions.
A novel workflow for seismic net pay estimation with uncertainty
Glinsky, Michael E; Unaldi, Muhlis; Nagassar, Vishal
2016-01-01
This paper presents a novel workflow for seismic net pay estimation with uncertainty, demonstrated on the Cassra/Iris Field. The theory for the stochastic wavelet derivation (which estimates the seismic noise level along with the wavelet, time-to-depth mapping, and their uncertainties), the stochastic sparse spike inversion, and the net pay estimation (using secant areas) along with its uncertainty will be outlined. This includes benchmarking of this methodology on a synthetic model. A critical part of this process is the calibration of the secant areas. This is done in a two-step process. First, a preliminary calibration is done with the stochastic reflection response modeling using rock physics relationships derived from the well logs. Second, a refinement is made to the calibration to account for the encountered net pay at the wells. Finally, a variogram structure is estimated from the extracted secant area map, then used to build in the lateral correlation to the ensemble of net pay maps while matc...
Estimation uncertainty of direct monetary flood damage to buildings
Directory of Open Access Journals (Sweden)
B. Merz
2004-01-01
Full Text Available Traditional flood design methods are increasingly supplemented or replaced by risk-oriented methods which are based on comprehensive risk analyses. Besides meteorological, hydrological and hydraulic investigations such analyses require the estimation of flood impacts. Flood impact assessments mainly focus on direct economic losses using damage functions which relate property damage to damage-causing factors. Although the flood damage of a building is influenced by many factors, usually only inundation depth and building use are considered as damage-causing factors. In this paper a data set of approximately 4000 damage records is analysed. Each record represents the direct monetary damage to an inundated building. The data set covers nine flood events in Germany from 1978 to 1994. It is shown that the damage data follow a Lognormal distribution with a large variability, even when stratified according to the building use and to water depth categories. Absolute depth-damage functions which relate the total damage to the water depth are not very helpful in explaining the variability of the damage data, because damage is determined by various parameters besides the water depth. Because of this limitation it has to be expected that flood damage assessments are associated with large uncertainties. It is shown that the uncertainty of damage estimates depends on the number of flooded buildings and on the distribution of building use within the flooded area. The results are exemplified by a damage assessment for a rural area in southwest Germany, for which damage estimates and uncertainty bounds are quantified for a 100-year flood event. The estimates are compared to reported flood damages of a severe flood in 1993. Given the enormous uncertainty of flood damage estimates the refinement of flood damage data collection and modelling are major issues for further empirical and methodological improvements.
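The lognormal model for damage records described above can be fitted by the moments of the log-damages. A minimal sketch with illustrative names, not the authors' code:

```python
import numpy as np

def lognormal_fit(damages):
    """Fit a lognormal by the moments of log-damage; returns (mu, sigma)
    of the underlying normal distribution."""
    logs = np.log(np.asarray(damages, float))
    return float(np.mean(logs)), float(np.std(logs, ddof=1))

def lognormal_mean(mu, sigma):
    """Mean of a lognormal distribution: exp(mu + sigma**2 / 2)."""
    return float(np.exp(mu + 0.5 * sigma ** 2))
```

The large fitted sigma reported for flood damage data is what makes aggregate damage estimates so uncertain: the mean depends exponentially on sigma squared, so small estimation errors in sigma translate into large errors in expected damage.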
Handling uncertainty in quantitative estimates in integrated resource planning
Energy Technology Data Exchange (ETDEWEB)
Tonn, B.E. [Oak Ridge National Lab., TN (United States); Wagner, C.G. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Mathematics
1995-01-01
This report addresses uncertainty in Integrated Resource Planning (IRP). IRP is a planning and decision-making process employed by utilities, usually at the behest of Public Utility Commissions (PUCs), to develop plans to ensure that utilities have the resources necessary to meet consumer demand at reasonable cost. IRP has been used to assist utilities in developing plans that include not only traditional electricity supply options but also demand-side management (DSM) options. Uncertainty is a major issue for IRP. Future values for numerous important variables (e.g., future fuel prices, future electricity demand, stringency of future environmental regulations) cannot ever be known with certainty. Many economically significant decisions are so unique that statistically-based probabilities cannot even be calculated. The entire utility strategic planning process, including IRP, encompasses different types of decisions that are made with different time horizons and at different points in time. Because of fundamental pressures for change in the industry, including competition in generation, gone is the time when utilities could easily predict increases in demand, enjoy long lead times to bring on new capacity, and bank on steady profits. The purpose of this report is to address in detail one aspect of uncertainty in IRP: dealing with uncertainty in quantitative estimates, such as the future demand for electricity or the cost to produce a megawatt (MW) of power. A theme which runs throughout the report is that every effort must be made to honestly represent what is known about a variable that can be used to estimate its value, what cannot be known, and what is not known due to operational constraints. Applying this philosophy to the representation of uncertainty in quantitative estimates, it is argued that imprecise probabilities are superior to classical probabilities for IRP.
Estimation of flow accumulation uncertainty by Monte Carlo stochastic simulations
Directory of Open Access Journals (Sweden)
Višnjevac Nenad
2013-01-01
Full Text Available Very often, outputs provided by GIS functions and analyses are assumed to be exact results. However, they are influenced by a certain uncertainty which may affect the decisions based on those results. It is very complex, and almost impossible, to calculate that uncertainty using classical mathematical models, because of the very complex algorithms used in GIS analyses. In this paper we discuss an alternative method, i.e. the use of stochastic Monte Carlo simulations to estimate the uncertainty of flow accumulation. The case study area included the broader area of the Municipality of Čačak, where Monte Carlo stochastic simulations were applied in order to create one hundred possible outputs of flow accumulation. A statistical analysis was performed on the basis of these versions, and the "most likely" version of flow accumulation, in association with its confidence bounds (standard deviation), was created. Further, this paper describes the most important phases in the process of estimating uncertainty, such as variogram modelling and choosing the right number of simulations. Finally, it makes suggestions on how to effectively use and discuss the results and their practical significance.
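The Monte Carlo scheme can be sketched generically: perturb the input raster, rerun the GIS derivative, and summarise the ensemble. The sketch below uses spatially uncorrelated noise for brevity, whereas the paper uses variogram-based (spatially correlated) simulation; names are hypothetical:

```python
import numpy as np

def monte_carlo_uncertainty(model, dem, noise_sd, n_runs=100, seed=3):
    """Monte Carlo uncertainty estimation for a GIS derivative: perturb
    the input raster `dem`, rerun `model`, and summarise the ensemble.
    Returns the cell-wise 'most likely' output (ensemble mean) and its
    standard deviation."""
    rng = np.random.default_rng(seed)
    runs = np.stack([model(dem + rng.normal(0.0, noise_sd, dem.shape))
                     for _ in range(n_runs)])
    return runs.mean(axis=0), runs.std(axis=0, ddof=1)
```

Any raster-to-raster function can stand in for `model` (in the paper, the flow-accumulation operator); the ensemble mean and standard deviation correspond to the "most likely" version and its confidence bounds.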
Eigenspace perturbations for structural uncertainty estimation of turbulence closure models
Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca
2017-11-01
With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable resolution approaches would be RANS models in two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Thence, using benchmark flows, along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).
Uncertainty in geocenter estimates in the context of ITRF2014
Riddell, Anna R.; King, Matt A.; Watson, Christopher S.; Sun, Yu; Riva, Riccardo E. M.; Rietbroek, Roelof
2017-05-01
Uncertainty in the geocenter position and its subsequent motion affects positioning estimates on the surface of the Earth and downstream products such as site velocities, particularly the vertical component. The current version of the International Terrestrial Reference Frame, ITRF2014, derives its origin as the long-term averaged center of mass as sensed by satellite laser ranging (SLR), and by definition, it adopts only linear motion of the origin with uncertainty determined using a white noise process. We compare weekly SLR translations relative to the ITRF2014 origin with network translations estimated from station displacements from surface mass transport models. We find that the proportion of variance explained in SLR translations by the model-derived translations is on average less than 10%. Time-correlated noise and nonlinear rates, particularly evident in the Y and Z components of the SLR translations with respect to the ITRF2014 origin, are not fully replicated by the model-derived translations. This suggests that translation-related uncertainties are underestimated when a white noise model is adopted and that substantial systematic errors remain in the data defining the ITRF origin. When using a white noise model, we find uncertainties in the rate of SLR X, Y, and Z translations of ±0.03, ±0.03, and ±0.06 mm/yr, respectively, increasing to ±0.13, ±0.17, and ±0.33 mm/yr (1 sigma) when a power law and white noise model is adopted.
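The effect of ignoring time-correlated noise on rate uncertainties can be illustrated with a toy example: fit a linear rate to a synthetic weekly translation series with AR(1) noise (a simple stand-in for power-law behaviour), and compare the white-noise least-squares uncertainty with a block-bootstrap estimate that preserves the correlation. All values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic weekly translation series: linear rate plus AR(1) noise.
n = 520                      # ten years of weekly solutions
t = np.arange(n) / 52.0      # time in years
true_rate = 0.1              # mm/yr
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.9 * noise[i - 1] + rng.normal(0, 1.0)
y = true_rate * t + noise

# White-noise rate uncertainty from ordinary least squares.
A = np.vstack([t, np.ones(n)]).T
coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
sigma2 = res[0] / (n - 2)
cov = sigma2 * np.linalg.inv(A.T @ A)
sig_white = np.sqrt(cov[0, 0])

# Block bootstrap (one-year blocks) keeps the time correlation.
block = 52
rates = []
for _ in range(500):
    starts = rng.integers(0, n - block, n // block)
    idx = np.concatenate([np.arange(s, s + block) for s in starts])
    c, *_ = np.linalg.lstsq(A[idx], y[idx], rcond=None)
    rates.append(c[0])
sig_boot = np.std(rates)
print(f"white-noise sigma: {sig_white:.3f}, block-bootstrap sigma: {sig_boot:.3f}")
```

With strongly correlated noise the bootstrap spread of the rate is typically several times the white-noise formal error, mirroring the inflation reported in the abstract.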
Estimating abundance in the presence of species uncertainty
Chambert, Thierry A.; Hossack, Blake R.; Fishback, LeeAnn; Davenport, Jon M.
2016-01-01
1. N-mixture models have become a popular method for estimating the abundance of free-ranging animals that are not marked or identified individually. These models have been used on count data for single species that can be identified with certainty. However, co-occurring species often look similar during one or more life stages, making it difficult to assign species for all recorded captures. This uncertainty creates problems for estimating species-specific abundance and can limit the life stages about which we can make inference. 2. We present a new extension of N-mixture models that accounts for species uncertainty. In addition to estimating site-specific abundances and detection probabilities, this model allows estimating the probability of correct assignment of species identity. We implement this hierarchical model in a Bayesian framework and provide all code for running the model in BUGS-language programs. 3. We present an application of the model on count data from two sympatric freshwater fishes, the brook stickleback (Culaea inconstans) and the ninespine stickleback (Pungitius pungitius), and illustrate the implementation of covariate effects (habitat characteristics). In addition, we used a simulation study to validate the model and illustrate potential sample size issues. We also compared, for both real and simulated data, estimates provided by our model to those obtained by a simple N-mixture model when captures of unknown species identification were discarded. In the latter case, abundance estimates appeared highly biased and very imprecise, while our new model provided unbiased estimates with higher precision. 4. This extension of the N-mixture model should be useful for a wide variety of studies and taxa, as species uncertainty is a common issue. It should notably help improve investigation of the abundance and vital rate characteristics of organisms' early life stages, which are sometimes more difficult to identify than adults.
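The basic N-mixture model that the paper extends can be sketched by marginalising the latent abundances over a truncated support. The example below is the simple single-species version (it does not include the species-misclassification extension), fitted by maximum likelihood to synthetic counts; the parameter values and sample sizes are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def nmix_nll(params, counts, n_max=100):
    """Negative log-likelihood of a basic N-mixture model:
    N_i ~ Poisson(lam), y_it | N_i ~ Binomial(N_i, p)."""
    lam = np.exp(params[0])                 # log link for abundance
    p = 1 / (1 + np.exp(-params[1]))        # logit link for detection
    n_grid = np.arange(n_max + 1)
    prior = poisson.pmf(n_grid, lam)        # P(N = n)
    ll = 0.0
    for y_site in counts:                   # marginalise N at each site
        lik = np.prod(binom.pmf(y_site[:, None], n_grid, p), axis=0)
        ll += np.log(np.sum(lik * prior) + 1e-300)
    return -ll

rng = np.random.default_rng(7)
true_lam, true_p = 20.0, 0.4
N = rng.poisson(true_lam, 50)                        # abundance at 50 sites
counts = rng.binomial(N[:, None], true_p, (50, 4))   # 4 repeat visits

fit = minimize(nmix_nll, x0=[np.log(10), 0.0], args=(counts,),
               method="Nelder-Mead")
lam_hat = np.exp(fit.x[0])
p_hat = 1 / (1 + np.exp(-fit.x[1]))
print(lam_hat, p_hat)
```

The paper's extension adds a classification layer on top of this likelihood so that counts of uncertain species identity contribute to both species' abundance estimates.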
Uncertainty related to Environmental Data and Estimated Extreme Events
DEFF Research Database (Denmark)
Burcharth, H. F.
The design loads on rubble mound breakwaters are almost entirely determined by the environmental conditions, i.e. sea state, water levels, sea bed characteristics, etc. It is the objective of sub-group B to identify the most important environmental parameters and evaluate the related uncertainties...
Kadis, Rouvim
2017-05-26
The rational strategy in the evaluation of analytical measurement uncertainty is to combine "whole method" performance data, such as precision and recovery, with the uncertainty contributions from sources not adequately covered by those data. This paper highlights some common mistakes in evaluating the uncertainty when pursuing that strategy, as revealed in the current chromatographic literature. The list of uncertainty components usually taken into account is discussed first, and fallacies involving the LOD and recovery uncertainties are noted. Close attention is paid to the uncertainty arising from the linear calibration normally used. It is demonstrated that following the well-known formula for the standard deviation of an analytical result obtained from a straight-line calibration leads to double counting the precision contribution in the uncertainty budget. Furthermore, the precision component itself is often estimated improperly, based on the number of replicates taken from the precision assessment experiment. As a result, the relative uncertainty from linear calibration is overestimated in the budget and may become the largest contribution to the combined uncertainty, which is clearly shown with an example calculation based on literature data.
Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods
Directory of Open Access Journals (Sweden)
aboalhasan fathabadi
2017-02-01
Full Text Available Introduction: In order to implement watershed practices to decrease the effects of soil erosion, the sediment output of the watershed must be estimated. The sediment rating curve is the most conventional tool for estimating sediment. Owing to sampling errors and short records, there are uncertainties in estimating sediment using sediment rating curves. In this research, bootstrap and Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads by using sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km2. In this study, uncertainty in suspended sediment rating curves was estimated at four stations, Motorkhane, Miyane Tonel Shomare 7, Stor and Glinak, constructed on the Ayghdamosh, Ghrangho, GhezelOzan and Shahrod rivers, respectively. Data were randomly divided into a training data set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of determination (R2). In the GLUE methodology, different parameter sets were sampled randomly from the prior probability distribution. For each station, using the sampled parameter sets and the selected suspended sediment rating curve equation, suspended sediment concentration values were estimated several times (100000 to 400000 times). With respect to a likelihood function and a subjective threshold, parameter sets were divided into behavioural and non-behavioural sets. Finally, using the behavioural parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. In the bootstrap methodology, observed suspended sediment and discharge vectors were resampled with replacement B (set to
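The GLUE steps described above (sample parameter sets from a prior, score them with a likelihood, keep the behavioural sets, and form weighted confidence bounds) can be sketched as follows. The data, priors, likelihood and threshold below are all illustrative assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic discharge (m3/s) and "observed" suspended sediment (mg/l)
# following a rating curve Cs = a * Q**b with multiplicative noise.
Q = rng.lognormal(2.0, 0.6, 120)
C_obs = 5.0 * Q**1.3 * rng.lognormal(0, 0.3, Q.size)

# GLUE: sample (a, b) from uniform priors, score with an informal
# likelihood based on the sum of squared log-residuals.
n_samp = 20000
a = rng.uniform(1.0, 10.0, n_samp)
b = rng.uniform(0.5, 2.5, n_samp)
sse = np.array([np.sum((np.log(C_obs) - np.log(ai * Q**bi))**2)
                for ai, bi in zip(a, b)])
lik = np.exp(-sse / sse.min())             # informal GLUE likelihood
behavioural = lik > 0.01 * lik.max()       # subjective threshold
w = lik[behavioural] / lik[behavioural].sum()

# Weighted 95% bounds of predicted concentration at a given discharge.
q0 = 20.0
pred = a[behavioural] * q0**b[behavioural]
order = np.argsort(pred)
cdf = np.cumsum(w[order])
lo = pred[order][np.searchsorted(cdf, 0.025)]
hi = pred[order][np.searchsorted(cdf, 0.975)]
print(f"95% GLUE interval at Q={q0}: [{lo:.1f}, {hi:.1f}] mg/l")
```

The behavioural threshold is deliberately subjective in GLUE, which is why the resulting bounds are conditional on that choice.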
Estimation of Uncertainty in Risk Assessment of Hydrogen Applications
DEFF Research Database (Denmark)
Markert, Frank; Krymsky, V.; Kozine, Igor
2011-01-01
Hydrogen technologies such as hydrogen fuelled vehicles and refuelling stations are being tested in practice in a number of projects (e.g. HyFleet-Cute and Whistler project) giving valuable information on the reliability and maintenance requirements. In order to establish refuelling stations...... and extrapolations to be made. Therefore, the QRA results will contain varying degrees of uncertainty as some components are well established while others are not. The paper describes a methodology to evaluate the degree of uncertainty in data for hydrogen applications based on the bias concept of the total...... probability and the NUSAP concept to quantify uncertainties of new not fully qualified hydrogen technologies and implications to risk management....
Evaluating the uncertainty of input quantities in measurement models
Possolo, Antonio; Elster, Clemens
2014-06-01
The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.
Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller
2015-01-01
An approach is proposed for estimating the uncertainty in measurement that considers the individual sources related to the different steps of the method under evaluation, as well as the uncertainties estimated from validation data, for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS). The considered method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance. The referred method has been accredited under the ISO/IEC 17025 norm since 2010. The estimate of the uncertainty in measurement was based on six sources of uncertainty for mercury determination in seafood by TDA AAS, following the validation process: linear least-squares regression, repeatability, intermediate precision, the correction factor of the analytical curve, sample mass, and the standard reference solution. Those that most influenced the uncertainty in measurement were sample mass, repeatability, intermediate precision and the calibration curve. The obtained estimate of the uncertainty in measurement reached a value of 13.39%, which complies with European Regulation EC 836/2011. This figure represents a very realistic estimate of routine conditions, since it fairly encompasses the dispersion between the value attributed to the sample and the value measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on calibration curve, recovery and precision), together with the variation in sample mass, can offer a proper estimate of uncertainty in measurement.
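Combining independent uncertainty sources into a combined and expanded uncertainty follows the root-sum-of-squares rule of the GUM. The sketch below uses hypothetical relative standard uncertainties for the six sources named in the abstract; the numbers are illustrative and are not the paper's values.

```python
import math

# Hypothetical relative standard uncertainties (%) for the six sources
# named in the abstract; the values here are illustrative only.
components = {
    "linear least-squares regression": 3.0,
    "repeatability": 3.5,
    "intermediate precision": 3.2,
    "correction factor of the curve": 1.5,
    "sample mass": 3.8,
    "standard reference solution": 1.0,
}

# Combined relative standard uncertainty: root sum of squares,
# assuming independent sources (GUM law of propagation).
u_c = math.sqrt(sum(u**2 for u in components.values()))
U = 2 * u_c   # expanded uncertainty, coverage factor k = 2 (~95%)
print(f"u_c = {u_c:.2f}%, U = {U:.2f}%")
```

The expanded uncertainty U is the figure compared against regulatory limits such as the 13.39% reported in the abstract.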
Impact of Uncertainty on Non-Medical Professionals' Estimates of Sexual Abuse Probability.
Fargason, Crayton A., Jr.; Peralta-Carcelen, Myriam C.; Fountain, Kathleen E.; Amaya, Michelle I.; Centor, Robert
1997-01-01
Assesses how an educational intervention describing uncertainty in child sexual-abuse assessments affects estimates of sexual abuse probability by non-physician child-abuse professionals (CAPs). Results, based on evaluations of 89 CAPs after the intervention, indicate they undervalued medical-exam findings and had difficulty adjusting for medical…
Estimation of uncertainties in the performance indices of an oxidation ditch benchmark
Abusam, A.; Keesman, K.J.; Spanjers, H.; Straten, van G.; Meinema, K.
2002-01-01
Estimation of the influence of different sources of uncertainty is very important in obtaining a thorough evaluation or a fair comparison of the various control strategies proposed for wastewater treatment plants. This paper illustrates, using real data obtained from a full-scale oxidation ditch
Uncertainty in peat volume and soil carbon estimated using ground-penetrating radar and probing
Andrew D. Parsekian; Lee Slater; Dimitrios Ntarlagiannis; James Nolan; Stephen D. Sebestyen; Randall K. Kolka; Paul J. Hanson
2012-01-01
Estimating soil C stock in a peatland is highly dependent on accurate measurement of the peat volume. In this study, we evaluated the uncertainty in calculations of peat volume using high-resolution data to resolve the three-dimensional structure of a peat basin based on both direct (push probes) and indirect geophysical (ground-penetrating radar) measurements. We...
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation by estimating the numerical approximation error, computational-model-induced errors and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
Communicating the uncertainty in estimated greenhouse gas emissions from agriculture.
Milne, Alice E; Glendining, Margaret J; Lark, R Murray; Perryman, Sarah A M; Gordon, Taylor; Whitmore, Andrew P
2015-09-01
In an effort to mitigate anthropogenic effects on the global climate system, industrialised countries are required to quantify and report, for various economic sectors, the annual emissions of greenhouse gases from their several sources and the absorption of the same in different sinks. These estimates are uncertain, and this uncertainty must be communicated effectively, if government bodies, research scientists or members of the public are to draw sound conclusions. Our interest is in communicating the uncertainty in estimates of greenhouse gas emissions from agriculture to those who might directly use the results from the inventory. We tested six methods of communication. These were: a verbal scale using the IPCC calibrated phrases such as 'likely' and 'very unlikely'; probabilities that emissions are within a defined range of values; confidence intervals for the expected value; histograms; box plots; and shaded arrays that depict the probability density of the uncertain quantity. In a formal trial we used these methods to communicate uncertainty about four specific inferences about greenhouse gas emissions in the UK. Sixty four individuals who use results from the greenhouse gas inventory professionally participated in the trial, and we tested how effectively the uncertainty about these inferences was communicated by means of a questionnaire. Our results showed differences in the efficacy of the methods of communication, and interactions with the nature of the target audience. We found that, although the verbal scale was thought to be a good method of communication it did not convey enough information and was open to misinterpretation. Shaded arrays were similarly criticised for being open to misinterpretation, but proved to give the best impression of uncertainty when participants were asked to interpret results from the greenhouse gas inventory. Box plots were most favoured by our participants largely because they were particularly favoured by those who worked
Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.
Proppe, Jonny; Reiher, Markus
2017-07-11
One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are in general unfeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the 57Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s-1 and 0.04-0.05 mm s-1, respectively, the latter being close to the average experimental uncertainty of 0.02 mm s-1. Furthermore, we show that both the model parameters and the prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r2, or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical Mössbauer spectroscopy
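The bootstrap calibration procedure can be sketched as follows: resample the calibration pairs with replacement, refit the linear property model on each resample, and read the prediction uncertainty off the spread of the refitted predictions. The data below are synthetic stand-ins for the isomer-shift calibration set; slope, intercept and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set: contact electron densities (x, shifted
# and scaled) vs observed isomer shifts (y, mm/s), linear model
# y = w0 + w1 * x with small scatter.
n = 44
x = rng.uniform(-2.0, 2.0, n)
y = 0.45 - 0.30 * x + rng.normal(0, 0.05, n)

# Nonparametric bootstrap: refit the model on resampled calibration
# pairs; the spread of predictions gives the prediction uncertainty.
x_new = 0.7
preds = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    w1, w0 = np.polyfit(x[idx], y[idx], 1)
    preds.append(w0 + w1 * x_new)
preds = np.array(preds)
print(f"prediction: {preds.mean():.3f} +/- {preds.std():.3f} mm/s")
```

Repeating this at each prediction point yields the locally resolved uncertainties described in the abstract, and inspecting the bootstrap distribution of (w0, w1) flags systematic errors such as influential reference points.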
Linear minimax estimation for random vectors with parametric uncertainty
Bitar, E
2010-06-01
In this paper, we take a minimax approach to the problem of computing a worst-case linear mean squared error (MSE) estimate of X given Y, where X and Y are jointly distributed random vectors with parametric uncertainty in their distribution. We consider two uncertainty models, PA and PB. Model PA represents X and Y as jointly Gaussian with a covariance matrix Λ belonging to the convex hull of a set of m known covariance matrices. Model PB characterizes X and Y as jointly distributed according to a Gaussian mixture model with m known zero-mean components but unknown component weights. We show: (a) the linear minimax estimator computed under model PA is identical to that computed under model PB when the vertices of the uncertain covariance set in PA are the same as the component covariances in model PB, and (b) the problem of computing the linear minimax estimator under either model reduces to a semidefinite program (SDP). We also consider the dynamic situation where x(t) and y(t) evolve according to a discrete-time LTI state space model driven by white noise, whose statistics are modeled by PA and PB as before. We derive a recursive linear minimax filter for x(t) given y(t).
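For scalar X and Y, the minimax problem under model PA can be solved directly (without the SDP machinery of the paper) because the worst-case MSE over a convex hull of covariances is attained at a vertex. The sketch below minimises the maximum MSE over two assumed vertex covariances; the matrices are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def mse(K, S):
    """MSE of the linear estimate X_hat = K * Y under joint covariance S,
    partitioned as [[Sxx, Sxy], [Syx, Syy]] with scalar X and Y."""
    Sxx, Sxy, Syy = S[0, 0], S[0, 1], S[1, 1]
    return Sxx - 2 * K * Sxy + K**2 * Syy

# Two assumed vertices of the uncertain covariance set (model PA).
vertices = [np.array([[1.0, 0.8], [0.8, 1.0]]),
            np.array([[0.5, 0.2], [0.2, 1.0]])]

worst = lambda k: max(mse(k[0], S) for S in vertices)
res = minimize(worst, x0=[0.0], method="Nelder-Mead")
K_minimax = res.x[0]

# Compare with the best linear estimators tuned to each vertex alone.
for S in vertices:
    K_i = S[0, 1] / S[1, 1]
    print("per-vertex optimum:", K_i, "worst-case MSE:", worst([K_i]))
print("minimax gain:", K_minimax, "worst-case MSE:", worst([K_minimax]))
```

Here the minimax gain lands strictly between the two per-vertex optima, trading some nominal performance for robustness; the SDP formulation in the paper generalises this to vector-valued X and Y.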
Sensitivity of Process Design due to Uncertainties in Property Estimates
DEFF Research Database (Denmark)
Hukkerikar, Amol; Jones, Mark Nicholas; Sarup, Bent
2012-01-01
The objective of this paper is to present a systematic methodology for performing analysis of the sensitivity of process design due to uncertainties in property estimates. The methodology provides the following results: a) a list of properties with critical importance on design; b) acceptable levels of accuracy for different thermo-physical property prediction models; and c) design variables versus properties relationships. The application of the methodology is illustrated through a case study of an extractive distillation process and sensitivity analysis of designs of various unit operations found in chemical processes. Among others, the vapour pressure accuracy for azeotropic mixtures is critical and needs to be measured or estimated with a ±0.25% accuracy to satisfy acceptable safety levels in design.
Estimation of Model Uncertainties in Closed-loop Systems
DEFF Research Database (Denmark)
Niemann, Hans Henrik; Poulsen, Niels Kjølstad
2008-01-01
This paper describes a method for estimation of parameters or uncertainties in closed-loop systems. The method is based on an application of the dual YJBK (after Youla, Jabr, Bongiorno and Kucera) parameterization of all systems stabilized by a given controller. The dual YJBK transfer function...... is a measure of the variation in the system seen through the feedback controller. It is shown that it is possible to isolate a certain number of parameters or uncertain blocks in the system exactly. This is obtained by modifying the feedback controller through the YJBK transfer function together with pre...
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
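The model-averaging step can be sketched as follows: combine per-model predictions using posterior model probabilities, with the averaged variance containing both a within-model and a between-model term. All numbers below are illustrative, not values from the study.

```python
import numpy as np

# Hypothetical predictions of log permeability at one location from
# three alternative variogram models, with their posterior model
# probabilities (e.g., from updated likelihoods or information criteria).
means = np.array([-12.1, -11.6, -12.4])   # per-model kriging means
variances = np.array([0.30, 0.25, 0.40])  # per-model kriging variances
p_model = np.array([0.5, 0.3, 0.2])       # posterior model probabilities

# Model-averaged prediction: within-model plus between-model variance.
mu = np.sum(p_model * means)
var = np.sum(p_model * (variances + (means - mu)**2))
print(mu, var)
```

The between-model term (means - mu)**2 is what a single "best" model discards, which is why model selection alone understates the combined uncertainty.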
ON THE ESTIMATION OF RANDOM UNCERTAINTIES OF STAR FORMATION HISTORIES
Energy Technology Data Exchange (ETDEWEB)
Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ, 85734 (United States)
2013-09-20
The standard technique for measurement of random uncertainties of star formation histories (SFHs) is the bootstrap Monte Carlo, in which the color-magnitude diagram (CMD) is repeatedly resampled. The variation in SFHs measured from the resampled CMDs is assumed to represent the random uncertainty in the SFH measured from the original data. However, this technique systematically and significantly underestimates the uncertainties for times in which the measured star formation rate is low or zero, leading to overly (and incorrectly) high confidence in that measurement. This study proposes an alternative technique, the Markov Chain Monte Carlo (MCMC), which samples the probability distribution of the parameters used in the original solution to directly estimate confidence intervals. While the most commonly used MCMC algorithms are incapable of adequately sampling a probability distribution that can involve thousands of highly correlated dimensions, the Hybrid Monte Carlo algorithm is shown to be extremely effective and efficient for this particular task. Several implementation details, such as the handling of implicit priors created by parameterization of the SFH, are discussed in detail.
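The failure mode of the bootstrap at low or zero rates has a simple illustration: resampling an all-zero sample always returns zero, implying zero uncertainty, whereas sampling a posterior distribution gives a sensible nonzero upper bound. The sketch below uses a Poisson-rate toy problem rather than an actual SFH measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

# A bin with zero observed events: bootstrap resampling of an all-zero
# sample always returns a rate of zero, implying zero uncertainty.
counts = np.zeros(50)
boot_rates = [rng.choice(counts, counts.size).mean() for _ in range(1000)]
print("bootstrap std:", np.std(boot_rates))   # false certainty

# Sampling the posterior instead (Poisson rate, flat prior -> Gamma)
# yields a nonzero upper bound even with zero observed events.
posterior = rng.gamma(shape=1.0, scale=1.0 / counts.size, size=10000)
print("posterior 95th percentile:", np.percentile(posterior, 95))
```

This is the essence of the abstract's argument for sampling the solution's probability distribution (via MCMC) instead of resampling the data.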
Uncertainty Model for Total Solar Irradiance Estimation on Australian Rooftops
Al-Saadi, Hassan; Zivanovic, Rastko; Al-Sarawi, Said
2017-11-01
The installation of solar panels on Australian rooftops has been on the rise in the last few years, especially in urban areas. This motivates academic researchers, distribution network operators and engineers to accurately address the level of uncertainty resulting from grid-connected solar panels. The main source of uncertainty is the intermittent nature of radiation; therefore, this paper presents a new model to estimate the total radiation incident on a tilted solar panel. The model is driven by the clearness index, which is factorized by a probability distribution, with special attention paid to Australia through the use of a best-fit correlation for the diffuse fraction. The validity of the model is assessed with four goodness-of-fit techniques. In addition, the Quasi Monte Carlo and sparse grid methods are used as sampling and uncertainty computation tools, respectively. High-resolution solar irradiance data for the city of Adelaide were used for this assessment, and the outcome indicates a satisfactory agreement between the actual data variation and the model.
Climate data induced uncertainty in model-based estimations of terrestrial primary productivity
Wu, Zhendong; Ahlström, Anders; Smith, Benjamin; Ardö, Jonas; Eklundh, Lars; Fensholt, Rasmus; Lehsten, Veiko
2017-06-01
Model-based estimations of historical fluxes and pools of the terrestrial biosphere differ substantially. These differences arise not only from differences between models but also from differences in the environmental and climatic data used as input to the models. Here we investigate the role of uncertainties in historical climate data by performing simulations of terrestrial gross primary productivity (GPP) using a process-based dynamic vegetation model (LPJ-GUESS) forced by six different climate datasets. We find that the climate-induced uncertainty, defined as the range among historical simulations in GPP when forcing the model with the different climate datasets, can be as high as 11 Pg C yr-1 globally (9% of mean GPP). We also assessed a hypothetical maximum climate data induced uncertainty by combining climate variables from different datasets, which resulted in significantly larger uncertainties of 41 Pg C yr-1 globally, or 32% of mean GPP. The uncertainty is partitioned into components associated with the three main climatic drivers: temperature, precipitation, and shortwave radiation. Additionally, we illustrate how the uncertainty due to a given climate driver depends both on the magnitude of the forcing data uncertainty (climate data range) and on the apparent sensitivity of the modeled GPP to the driver (apparent model sensitivity). We find that LPJ-GUESS overestimates GPP compared to empirically based GPP data products in all land cover classes except tropical forests. Tropical forests emerge as a disproportionate source of uncertainty in GPP estimation in both the simulations and the empirical data products. The tropical forest uncertainty is most strongly associated with shortwave radiation and precipitation forcing, of which the climate data range contributes more to the overall uncertainty than the apparent model sensitivity to forcing. Globally, precipitation dominates the climate-induced uncertainty over nearly half of the vegetated land area, which is mainly due
Hullman, Jessica; Kay, Matthew; Kim, Yea-Seul; Shrestha, Samana
2017-08-29
People often have erroneous intuitions about the results of uncertain processes, such as scientific experiments. Many uncertainty visualizations assume considerable statistical knowledge, but have been shown to prompt erroneous conclusions even when users possess this knowledge. Active learning approaches have been shown to improve statistical reasoning, but are rarely applied to visualizing uncertainty in scientific reports. We present a controlled study to evaluate the impact of an interactive, graphical uncertainty prediction technique for communicating uncertainty in experiment results. Using our technique, users sketch their prediction of the uncertainty in experimental effects prior to viewing the true sampling distribution from an experiment. We find that having a user graphically predict the possible effects from experiment replications is an effective way to improve one's ability to make predictions about replications of new experiments. Additionally, visualizing uncertainty as a set of discrete outcomes, as opposed to a continuous probability distribution, can improve recall of a sampling distribution from a single experiment. Our work has implications for various applications where it is important to elicit people's estimates of probability distributions and to communicate uncertainty effectively.
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Directory of Open Access Journals (Sweden)
Janet L. Rachlow
2013-08-01
Full Text Available United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size, and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.
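The minimum detectable difference suggested above can be computed from the standard error of the difference between two estimates and the usual normal quantiles for significance and power. The survey standard errors below are hypothetical.

```python
from scipy.stats import norm

def minimum_detectable_difference(se, alpha=0.05, power=0.8):
    """Smallest change in population size detectable between two
    surveys, given the standard error of the difference."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_beta = norm.ppf(power)            # desired statistical power
    return (z_alpha + z_beta) * se

# Hypothetical: two population estimates, each with SE = 120 animals.
se_diff = (120**2 + 120**2) ** 0.5
mdd = minimum_detectable_difference(se_diff)
print(f"minimum detectable difference: {mdd:.0f} animals")
```

A change smaller than this could not be reliably distinguished from sampling noise, which is the argument for reporting variance alongside population estimates in recovery plans.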
Examples of measurement uncertainty evaluations in accordance with the revised GUM
Runje, B.; Horvatic, A.; Alar, V.; Medic, S.; Bosnjakovic, A.
2016-11-01
The paper presents examples of the evaluation of uncertainty components in accordance with the current and revised Guide to the expression of uncertainty in measurement (GUM). In accordance with the proposed revision of the GUM, a Bayesian approach was adopted for both type A and type B evaluations. The law of propagation of uncertainty (LPU) and the law of propagation of distributions, applied through the Monte Carlo method (MCM), were used to evaluate the associated standard uncertainties, expanded uncertainties and coverage intervals. Furthermore, the influence of a non-Gaussian dominant input quantity and of an asymmetric distribution of the output quantity y on the evaluation of measurement uncertainty was analyzed. In the case when the coverage interval is not probabilistically symmetric, the coverage interval for the probability P is estimated from the experimental probability density function using the Monte Carlo method. Key highlights of the proposed revision of the GUM were analyzed through a set of examples.
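The contrast between the LPU and the MCM can be sketched on a toy measurement model. The model Y = X1·X2 and the input distributions below are illustrative assumptions, not taken from the paper; the uniform input plays the role of a non-Gaussian contribution:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical measurement model: Y = X1 * X2 (illustrative only).
# X1 ~ Normal(10, 0.1); X2 ~ Uniform on [1.9, 2.1] (a non-Gaussian input).
N = 200_000
x1 = rng.normal(10.0, 0.1, N)
x2 = rng.uniform(1.9, 2.1, N)
y = x1 * x2

# GUM law of propagation of uncertainty (first order):
# u(y)^2 = (dY/dX1 * u1)^2 + (dY/dX2 * u2)^2, evaluated at the means.
u1 = 0.1
u2 = 0.2 / np.sqrt(12)          # standard uncertainty of a uniform input
u_lpu = np.sqrt((2.0 * u1) ** 2 + (10.0 * u2) ** 2)

# Monte Carlo method: standard uncertainty and a 95 % coverage interval
# taken directly from the empirical distribution of Y.
u_mcm = y.std(ddof=1)
lo, hi = np.percentile(y, [2.5, 97.5])
print(u_lpu, u_mcm, (lo, hi))
```

For this nearly linear model the two standard uncertainties agree closely; the MCM additionally yields the coverage interval without assuming a Gaussian output.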
Energy Technology Data Exchange (ETDEWEB)
Monier, C
1997-09-01
The research work of this thesis aims to evaluate the methods, and the associated uncertainties, for estimating the material balances of the burnt-up UO{sub 2} and MOX fuels which intervene in fuel cycle physics. The studies carried out are used to qualify the cycle 'package' DARWIN for PWR material balance estimation. The elaboration and optimisation of the calculation routes follow a very specific methodology, aimed at estimating the bias introduced by model simplifications through comparison with nearly exact reference models. The permissible approximations are determined according to the precision goals and the available information. Two calculation routes have been developed and then qualified by applying them to the interpretation of isotopic analyses of used fuels: an 'industry-oriented' calculation route which can calculate the material balances of full UO{sub 2} assemblies with a 2% precision on the main actinides, respecting the industrial specifications (this route must run within a reasonable calculation time and stay user-friendly); and a reference calculation route for the precise interpretation of fuel samples made of pieces of burnt-up MOX rods (aiming to provide material balances with the best possible precision, this route does not have the same specifications concerning its use and its calculation time performance). (author)
Uncertainty estimates for the Bayes Inference Engine (BIE)
Energy Technology Data Exchange (ETDEWEB)
Beery, Thomas A [Los Alamos National Laboratory]
2009-01-01
In the fall 2007 meeting of the BIE users group, two approaches to making uncertainty estimates were presented. Ken Hanson asserted that if the BFGS optimizer was used, the inverse Hessian matrix was the same as the covariance matrix representing parameter uncertainties. John Pang presented preliminary results of a Monte Carlo method called Randomized Maximum Likelihood (RML). The BFGS/Hessian matrix approach may be applied in the region of the 'ideal model': approximately 250 parameters describing the object density patches are varied to match an image of 1,000,000 pixels. I cast this in terms of least squares analysis, as it is much better understood. This is not as large a conceptual jump as some suppose, because many of the functional blocks in the BIE are taken directly from existing least squares programs. If a Gaussian (normal) probability density function is assumed for both the observation and parameter errors, the Bayesian and least squares results should be identical.
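The inverse-Hessian idea can be demonstrated at small scale with scipy's BFGS optimizer, whose result object exposes an inverse-Hessian approximation. The linear model, noise level, and data below are invented for illustration; they are not the BIE's actual parameterization:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical linear model y = a*x + b with known noise sigma; this is an
# illustration of the inverse-Hessian idea, not the BIE's actual model.
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma, x.size)

def neg_log_like(p):
    # Gaussian likelihood => least-squares objective (chi-square / 2).
    a, b = p
    r = y - (a * x + b)
    return 0.5 * np.sum(r ** 2) / sigma ** 2

res = minimize(neg_log_like, x0=[1.0, 0.0], method="BFGS")

# At the minimum of a Gaussian objective, the inverse Hessian approximates
# the parameter covariance matrix; compare with the analytic result.
cov_bfgs = res.hess_inv
A = np.column_stack([x, np.ones_like(x)])
cov_exact = sigma ** 2 * np.linalg.inv(A.T @ A)
print(np.sqrt(np.diag(cov_bfgs)))   # BFGS-based standard errors
print(np.sqrt(np.diag(cov_exact)))  # exact least-squares standard errors
```

Note that BFGS builds only an approximation to the inverse Hessian; for strongly non-quadratic objectives the two covariance estimates can differ appreciably, which is part of why Monte Carlo alternatives like RML are of interest.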
Application of best estimate plus uncertainty in review of research reactor safety analysis
Directory of Open Access Journals (Sweden)
Adu Simon
2015-01-01
Full Text Available To construct and operate a nuclear research reactor, the licensee is required to obtain authorization from the regulatory body. One of the tasks of the regulatory authority is to verify that the safety analysis fulfils safety requirements. Historically, compliance with safety requirements was assessed using a deterministic approach and conservative assumptions. This provides sufficient safety margins with respect to the licensing limits on boundary and operational conditions. Conservative assumptions were introduced into safety analyses to account for the uncertainty associated with lack of knowledge. With the introduction of best estimate computational tools, safety analyses are usually carried out using the best estimate approach. Results of such analyses can be accepted by the regulatory authority only if an appropriate uncertainty evaluation is carried out. Best estimate computer codes are capable of providing more realistic information on the status of the plant, allowing the prediction of real safety margins. The best estimate plus uncertainty approach has proven reliable and capable of supplying realistic results if all conditions are carefully followed. This paper, therefore, presents this concept and its possible application to research reactor safety analysis. The aim of the paper is to investigate the unprotected loss-of-flow transient "core blockage" of a miniature neutron source research reactor by applying the best estimate plus uncertainty methodology. The results of our calculations show that the temperatures in the core are within the safety limits and do not pose any significant threat to the reactor, as far as the melting of the cladding is concerned. The work also discusses the methodology of the best estimate plus uncertainty approach when applied to the safety analysis of research reactors for licensing purposes.
DEFF Research Database (Denmark)
Thomsen, Nanna Isbak; Troldborg, Mads; McKnight, Ursula S.
2012-01-01
Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1...... We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same...... consisting of PCE (perchloroethylene) has contaminated a fractured clay till aquitard overlaying a limestone aquifer. The exact shape and nature of the source is unknown and so is the importance of transport in the fractures. The result of the multi-model approach is a visual representation......
Uncertainty Estimation in SiGe HBT Small-Signal Modeling
DEFF Research Database (Denmark)
Masood, Syed M.; Johansen, Tom Keinicke; Vidkjær, Jens
2005-01-01
An uncertainty estimation and sensitivity analysis is performed on multi-step de-embedding for SiGe HBT small-signal modeling. The uncertainty estimation in combination with uncertainty model for deviation in measured S-parameters, quantifies the possible error value in de-embedded two-port param...
Chaparro Molano, G.; Restrepo Gaitán, O. A.; Cuervo Marulanda, J. C.; Torres Arzayus, S. A.
2018-01-01
Obtaining individual estimates of uncertainties in redshift-independent galaxy distance measurements can be challenging, as for each galaxy there can be many distance estimates with non-Gaussian distributions, some of which may not even have a reported uncertainty. We seek to model uncertainties using a bootstrap sampling of measurements per galaxy per distance estimation method. We then create a predictive Bayesian model for estimating galaxy distance uncertainties that performs better than simply using a weighted standard deviation. This can be a first step toward predicting distance uncertainties for future catalog-wide analyses.
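The per-method bootstrap idea can be sketched in a few lines. The measurement values, method names, and combination rule below are invented for illustration; the paper's Bayesian model is a further step beyond this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical redshift-independent distance estimates for one galaxy (Mpc),
# grouped by estimation method. Values and method names are invented.
measurements = {
    "tully_fisher": np.array([16.1, 17.4, 15.8, 16.9]),
    "trgb":         np.array([16.5, 16.7]),
}

def bootstrap_distance(meas, n_boot=5000):
    """Bootstrap within each method, then combine the method means."""
    combined = np.empty(n_boot)
    for i in range(n_boot):
        method_means = [
            rng.choice(v, size=v.size, replace=True).mean()
            for v in meas.values()
        ]
        combined[i] = np.mean(method_means)
    return combined

dist = bootstrap_distance(measurements)
lo, hi = np.percentile(dist, [16, 84])   # ~1-sigma interval
print(dist.mean(), (lo, hi))
```

The resulting empirical distribution provides an uncertainty interval even for measurements that report none, which is the starting point the abstract describes.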
Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.
2017-11-01
In Hezaveh et al. we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational-lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single variational parameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that the application of approximate Bayesian neural networks to astrophysical modeling problems can be a fast alternative to Markov Chain Monte Carlo, allowing orders of magnitude improvement in speed.
Uncertainties associated with parameter estimation in atmospheric infrasound arrays.
Szuberla, Curt A L; Olson, John V
2004-01-01
This study describes a method for determining the statistical confidence in estimates of direction-of-arrival and trace velocity stemming from signals present in atmospheric infrasound data. It is assumed that the signal source is far enough removed from the infrasound sensor array that a plane-wave approximation holds, and that multipath and multiple-source effects are not present. Propagation path and medium inhomogeneities are assumed not to be known at the time of signal detection, but the ensemble of time delays of signal arrivals between array sensor pairs is estimable and corrupted by uncorrelated Gaussian noise. The method results in a set of practical uncertainties that lend themselves to a geometric interpretation. Although quite general, this method is intended for use by analysts interpreting data from atmospheric acoustic arrays, or those interested in designing and deploying them. The method is applied to infrasound arrays typical of those deployed as a part of the International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty Organization.
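Under the plane-wave assumption, parameter estimation reduces to a small least-squares problem: each pairwise time delay is the inner product of a sensor baseline with the horizontal slowness vector. A minimal sketch (array coordinates, true azimuth, and noise level are invented):

```python
import numpy as np

# Plane-wave model: the delay between sensors i and j is
# tau_ij = (r_j - r_i) . s, where s is the horizontal slowness vector.
# Sensor coordinates (m) for a hypothetical 4-element array.
r = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])

# True signal: azimuth 60 deg (from the x-axis), trace velocity 340 m/s.
az_true = np.deg2rad(60.0)
s_true = np.array([np.cos(az_true), np.sin(az_true)]) / 340.0

pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
D = np.array([r[j] - r[i] for i, j in pairs])        # baseline matrix
rng = np.random.default_rng(2)
tau = D @ s_true + rng.normal(0, 1e-4, len(pairs))   # noisy delays (s)

# Least-squares slowness estimate from the ensemble of pairwise delays.
s_hat, *_ = np.linalg.lstsq(D, tau, rcond=None)
v_hat = 1.0 / np.linalg.norm(s_hat)                  # trace velocity
az_hat = np.degrees(np.arctan2(s_hat[1], s_hat[0]))  # direction of arrival
print(az_hat, v_hat)
```

With Gaussian delay noise, the covariance of the least-squares slowness estimate propagates into confidence regions for azimuth and trace velocity, which is the geometric interpretation the abstract refers to.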
Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests
Rathsam, Jonathan; Christian, Andrew
2016-01-01
Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine four techniques: the Delta Method and the Generalized Linear Model (GLM) from frequentist statistics, the Nonparametric Bootstrap (also a frequentist method), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
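The nonparametric bootstrap variant can be sketched for a pooled group threshold. The signal levels and detection counts below are invented; resampling individual trials with replacement at each fixed level is equivalent to redrawing binomial counts at the observed proportions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical group data: detections at four predetermined signal levels
# (dB), pooled over subjects. Counts are invented for illustration.
levels = np.array([30.0, 35.0, 40.0, 45.0])
n_trials = np.array([60, 60, 60, 60])
n_detect = np.array([9, 21, 45, 57])

def threshold(p):
    # Level at 50 % detection, by linear interpolation between test levels.
    return np.interp(0.5, p, levels)

# Nonparametric bootstrap: resampling trials with replacement at each level
# is equivalent to redrawing binomial counts at the observed proportions.
boot = np.empty(4000)
for b in range(boot.size):
    p = rng.binomial(n_trials, n_detect / n_trials) / n_trials
    p = np.maximum.accumulate(p)     # enforce a monotone psychometric curve
    boot[b] = threshold(p)

est = threshold(n_detect / n_trials)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(est, (lo, hi))
```

A production analysis would fit a psychometric function (e.g. via a GLM) rather than interpolate linearly, but the resampling logic is the same.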
Magnusson, Bertil; Ossowicki, Haakan; Rienitz, Olaf; Theodorsson, Elvar
2012-05-01
Healthcare laboratories are increasingly joining into larger laboratory organizations encompassing several physical laboratories. This creates important new opportunities for re-defining the concept of a 'laboratory' to encompass all laboratories and measurement methods measuring the same measurand for a population of patients. In order to make measurement results comparable, bias should be minimized or eliminated and measurement uncertainty properly evaluated for all methods used for a particular patient population. The measurement as well as the diagnostic uncertainty can be evaluated from internal and external quality control results using GUM principles. In this paper the uncertainty evaluations are described in detail using only two main components, within-laboratory reproducibility and the uncertainty of the bias component, according to a Nordtest guideline. The evaluation is exemplified for the determination of creatinine in serum for a conglomerate of laboratories, expressed both in absolute units (μmol/L) and relative terms (%). An expanded measurement uncertainty of 12 μmol/L was estimated for concentrations of creatinine below 120 μmol/L, and of 10% for concentrations above 120 μmol/L. The diagnostic uncertainty encompasses both measurement uncertainty and biological variation, and can be estimated for a single value and for a difference. This diagnostic uncertainty for the difference between two samples from the same patient was determined to be 14 μmol/L for concentrations of creatinine below 100 μmol/L and 14% for concentrations above 100 μmol/L.
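The two-component evaluation combines within-laboratory reproducibility with a bias component. A sketch with invented quality-control numbers (the bias term shown is one common form; the Nordtest guideline has variants depending on the available reference data):

```python
import math

# Nordtest-style combined uncertainty from quality-control data. Values are
# invented, and the bias term below is one common form; the guideline has
# variants depending on the available reference data.
u_Rw = 4.0     # umol/L, within-laboratory reproducibility from internal QC
bias = 3.0     # umol/L, mean deviation from assigned EQA values
s_bias = 2.5   # umol/L, spread of the bias over EQA rounds
u_ref = 2.0    # umol/L, uncertainty of the assigned EQA values

u_bias = math.sqrt(bias ** 2 + s_bias ** 2 + u_ref ** 2)
u_c = math.sqrt(u_Rw ** 2 + u_bias ** 2)   # combined standard uncertainty
U = 2.0 * u_c                              # expanded uncertainty, k = 2 (~95 %)
print(round(u_c, 2), round(U, 2))
```

With these invented inputs the expanded uncertainty comes out near 12 μmol/L, the same order as the creatinine example in the abstract.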
Uncertainty of mass discharge estimates from contaminated sites using a fully Bayesian framework
DEFF Research Database (Denmark)
Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John
2011-01-01
Mass discharge estimates are increasingly being used in the management of contaminated sites and uncertainties related to such estimates are therefore of great practical importance. We present a rigorous approach for quantifying the uncertainty in the mass discharge across a multilevel control...... plane. The method accounts for: (1) conceptual model uncertainty through Bayesian model averaging, (2) heterogeneity through Bayesian geostatistics with an uncertain geostatistical model, and (3) measurement uncertainty. An ensemble of unconditional steady-state plume realizations is generated through...
Evaluation of cutting force uncertainty components in turning
DEFF Research Database (Denmark)
Axinte, Dragos Aurelian; Belluco, Walter; De Chiffre, Leonardo
2000-01-01
A procedure is proposed for the evaluation of those uncertainty components of a single cutting force measurement in turning that are related to the contributions of the dynamometer calibration and the cutting process itself. Based on an empirical model including errors from both sources......, the uncertainty for a single measurement of cutting force is presented, and expressions for the expected uncertainty vs. cutting parameters are proposed. This approach gives the possibility of evaluating cutting force uncertainty components in turning, for a defined range of cutting parameters, based on few......
Botto, A.; Ganora, D.; Laio, F.; Claps, P.
2012-04-01
Traditionally, flood frequency analysis has been used to assess the design discharge for hydraulic infrastructures. Unfortunately, this method involves uncertainties, be they of random or epistemic nature. Despite some success in measuring uncertainty, e.g. by means of numerical simulations, exhaustive methods for its evaluation are still an open challenge to the scientific community. The proposed method aims to improve the standard models for design flood estimation by considering the hydrological uncertainties inherent in classic flood frequency analysis, in combination with cost-benefit analysis. Within this framework, two of the main issues related to flood risk are taken into account: on the one hand, statistical flood frequency analysis is complemented with suitable uncertainty estimates; on the other hand, the economic value of the flood-prone land is considered, as well as the economic losses in case of overflow. Consider a case where discharge data are available at the design site: the proposed procedure involves the following steps: (i) for a given return period T, the design discharge is obtained using standard statistical inference (for example, using the GEV distribution and the method of L-moments to estimate the parameters); (ii) Monte Carlo simulations are performed to quantify the parametric uncertainty related to the design-flood estimator: 10000 triplets of L-moment values are randomly sampled from their relevant multivariate distribution, and 10000 values of the T-year discharge are obtained; (iii) a procedure called the least total expected cost (LTEC) design approach is applied as described hereafter: linear cost and damage functions are proposed so that the ratio between the slope of the damage function and the slope of the cost function is equal to T. The expected total cost (sum of the cost plus the expected damage) is obtained for each of the 10000 design value estimators, and the estimator corresponding to the minimum total cost is
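Steps (i)-(iii) can be sketched end-to-end. The version below is a simplified stand-in: it substitutes a Gumbel moment fit and a bootstrap of the sample for the GEV/L-moment machinery, and uses linear cost and damage functions whose slope ratio equals T, all on invented data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative sketch (Gumbel + moment fit + bootstrap instead of the
# paper's GEV + L-moments + multivariate sampling), on synthetic data.
T = 100                                    # return period (years)
q_obs = rng.gumbel(500.0, 150.0, 40)       # synthetic annual maxima (m3/s)

def gumbel_quantile(sample, T):
    # Method-of-moments Gumbel fit, then the T-year quantile.
    beta = sample.std(ddof=1) * np.sqrt(6) / np.pi
    mu = sample.mean() - 0.5772 * beta
    return mu - beta * np.log(-np.log(1 - 1 / T))

# Parametric-uncertainty cloud: bootstrap the sample, refit, re-estimate.
q_T = np.array([gumbel_quantile(rng.choice(q_obs, q_obs.size), T)
                for _ in range(5000)])

# LTEC: linear cost c(q) and damage in case of overflow, with the damage
# slope T times the cost slope; minimize the expected total cost.
cost_slope, damage_slope = 1.0, float(T)
candidates = np.linspace(q_T.min(), q_T.max(), 200)
expected_total = [cost_slope * q +
                  damage_slope * np.mean(np.maximum(q_T - q, 0.0))
                  for q in candidates]
q_design = candidates[int(np.argmin(expected_total))]
print(q_design)
```

Because the slope ratio equals T, the minimum of the expected total cost sits near the upper tail of the design-flood uncertainty cloud, i.e. the uncertainty directly shifts the economically optimal design discharge upward.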
GLUE Based Marine X-Band Weather Radar Data Calibration and Uncertainty Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Ellerbæk; Beven, Keith; Thorndahl, Søren Liedtke
2015-01-01
The Generalized Likelihood Uncertainty Estimation methodology (GLUE) is investigated for radar rainfall calibration and uncertainty assessment. The method is used to calibrate radar data collected by a Local Area Weather Radar (LAWR). In contrast to other LAWR data calibrations, the method combines...... calibration with uncertainty estimation. Instead of searching for a single set of calibration parameters, the method uses the observations to construct distributions of the calibration parameters. These parameter sets provide valuable knowledge of parameter sensitivity and the uncertainty. Two approaches...... improves the performance significantly. It is found that even if the dynamic adjustment method is used the uncertainty of rainfall estimates can still be significant....
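The core GLUE loop — sample parameter sets from priors, score each with an informal likelihood, and keep the behavioural sets as a distribution rather than a single optimum — can be sketched on a toy calibration problem. The power-law model, priors, and threshold below are illustrative, not the LAWR setup:

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal GLUE sketch on a toy calibration problem y = a * x ** b
# (hypothetical model and data, not the LAWR radar calibration).
x = np.linspace(1, 10, 25)
y_obs = 2.0 * x ** 1.3 + rng.normal(0, 1.0, x.size)

# 1. Sample parameter sets from broad priors.
a = rng.uniform(0.5, 5.0, 20_000)
b = rng.uniform(0.5, 2.0, 20_000)
sim = a[:, None] * x[None, :] ** b[:, None]

# 2. Informal likelihood (inverse error variance) and behavioural threshold.
sse = ((sim - y_obs) ** 2).sum(axis=1)
likelihood = 1.0 / sse
behavioural = likelihood > np.percentile(likelihood, 95)  # keep best 5 %

# 3. The retained sets form distributions, not a single optimum.
a_b, b_b = a[behavioural], b[behavioural]
w = likelihood[behavioural] / likelihood[behavioural].sum()
a_hat = np.average(a_b, weights=w)
b_hat = np.average(b_b, weights=w)
print(a_hat, b_hat)
```

The spread of the behavioural parameter sets is what carries the uncertainty information; propagating them through the model yields the simulation uncertainty bounds the abstract describes.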
Uncertainty estimation and reconstruction of historical streamflow records
Kuentz, A.; Mathevet, T.; Perret, C.; Andréassian, V.
2012-04-01
Long historical series of streamflow are a precious source of information for hydrological studies, such as the search for trends or breaks due to climate variability or anthropogenic influences. For this kind of study, it can be very important to go back as far as possible in the past, in order to highlight the information content of historical observations. In our research we concentrate on the Durance watershed (14000 km2) in order to understand the hydrological variability of the last century (1900-2010) due to climate changes and/or anthropogenic influences. This watershed, situated in the Alps, is characterized by variable hydrological processes (from snowy to Mediterranean regimes) and a wide range of anthropogenic influences (hydropower generation, irrigation, industries, drinking water, etc.). We are convinced that this research is necessary before any climate and hydrological projection. Documentary research conducted in collaboration with a historian allowed us to find about ten long streamflow series on the Durance watershed dating from the beginning of the 20th century. The analysis of these series is necessary to better understand the natural hydrological behavior of the watershed, before the development of most of the anthropogenic influences. While the usefulness of such long streamflow series is obvious, they have some limitations, one of them being their heterogeneity, which can have many origins: shifts of the gauging station, changes in the anthropogenic influences, or evolution of the methods used to build the series. Before their interpretation in terms of climate or land use changes, uncertainty estimation of historical streamflow records is therefore very important to assess data quality and homogeneity over time. This paper focuses on the estimation of the uncertainty of historical streamflow records due to the evolution of their construction methods. Since the beginning of the 20th century, we have listed three main methods of construction of daily
Uncertainties in peat volume and soil carbon estimated using ground penetrating radar and probing
Energy Technology Data Exchange (ETDEWEB)
Parsekian, Andrew D. [Rutgers University]; Slater, Lee [Rutgers University]; Ntarlagiannis, Dimitrios [Rutgers University]; Nolan, James [Rutgers University]; Sebestyen, Stephen D. [USDA Forest Service, Grand Rapids, MN]; Kolka, Randall K. [USDA Forest Service, Grand Rapids, MN]; Hanson, Paul J. [ORNL]
2012-01-01
We evaluate the uncertainty in calculations of peat basin volume using high-resolution data to resolve the three-dimensional structure of a peat basin from both direct (push probes) and indirect geophysical (ground penetrating radar) measurements. We compared volumetric estimates from both approaches with values from the literature. We identified subsurface features that can introduce uncertainties into direct peat thickness measurements, including the presence of woody peat and soft clay or gyttja. We demonstrate that a simple geophysical technique that is easily scalable to larger peatlands can be used to rapidly and cost-effectively obtain more accurate and less uncertain estimates of peat basin volumes, which are critical to improving understanding of the total terrestrial carbon pool in peatlands.
Uncertainty Evaluation with Multi-Dimensional Model of LBLOCA in OPR1000 Plant
Energy Technology Data Exchange (ETDEWEB)
Kim, Jieun; Oh, Deog Yeon; Seul, Kwang-Won; Lee, Jin Ho [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)]
2016-10-15
KINS has used KINS-REM (KINS Realistic Evaluation Methodology), which was developed for Best-Estimate (BE) calculation and uncertainty quantification for regulatory audit. This methodology has been improved continuously by numerous studies of, for example, uncertainty parameters and uncertainty ranges. In this study, to evaluate the applicability of the improved KINS-REM to the OPR1000 plant, an uncertainty evaluation with a multi-dimensional model, for confirming multi-dimensional phenomena, was conducted with the MARS-KS code. The reactor vessel was modeled using the MULTID component of MARS-KS, and a total of 29 uncertainty parameters were considered in 124 sampled calculations. Through the 124 calculations, run with the Mosaique program and MARS-KS, the peak cladding temperature (PCT) was calculated and the final PCT was determined by the 3rd-order Wilks' formula. The uncertainty parameters with a strong influence were identified by Pearson coefficient analysis; they were mostly related to plant operation and fuel material properties. The results of the 124 calculations and the sensitivity analysis show that the improved KINS-REM can reasonably be applied to uncertainty evaluations with multi-dimensional model calculations of OPR1000 plants.
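The figure of 124 sampled calculations follows from the 3rd-order Wilks' formula: the third-highest PCT among N runs is a 95%/95% one-sided upper tolerance bound once N is large enough. A short check:

```python
from math import comb

def wilks_confidence(N, k, p=0.95):
    """Confidence that the k-th largest of N runs bounds the p-quantile.

    The number of runs exceeding the p-quantile is Binomial(N, 1-p); the
    k-th largest is an upper bound unless fewer than k runs exceed it.
    """
    q = 1.0 - p
    return 1.0 - sum(comb(N, j) * q ** j * (1 - q) ** (N - j)
                     for j in range(k))

# Smallest N whose 3rd-highest value is a 95 %/95 % upper tolerance bound:
N = 1
while wilks_confidence(N, 3) < 0.95:
    N += 1
print(N)   # → 124, matching the 124 sampled calculations
```

The first- and second-order versions of the same formula give the familiar 59 and 93 runs; going to higher order costs more runs but makes the bound less conservative.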
Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.
2017-08-01
A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k-nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecasts and observations. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and the Brue. Retrospective forecasts were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
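The kNN resampling idea can be sketched in a few lines: search the historical record for the k most similar hydrometeorological conditions and take percentiles of the associated forecast errors. The inputs and heteroscedastic error model below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(6)

# Sketch of kNN residual-uncertainty estimation: find the k historical
# forecasts issued under the most similar conditions and use their errors
# as the predictive error distribution. Data below are synthetic.
n_hist = 2000
hist_inputs = rng.uniform(0, 1, (n_hist, 2))    # e.g. forecast value, rainfall
hist_errors = rng.normal(0, 0.1 + hist_inputs[:, 0], n_hist)  # heteroscedastic

def knn_interval(x_new, k=100, levels=(5, 95)):
    d = np.linalg.norm(hist_inputs - x_new, axis=1)
    nearest = np.argsort(d)[:k]                 # k most similar conditions
    return np.percentile(hist_errors[nearest], levels)

lo, hi = knn_interval(np.array([0.9, 0.5]))     # high-error regime
lo2, hi2 = knn_interval(np.array([0.1, 0.5]))   # low-error regime
print((lo, hi), (lo2, hi2))
```

Because the interval is read off the local error sample, it adapts its width to the conditions, which is what gives the method good coverage without any distributional assumption.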
Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.
2016-12-01
Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
Directory of Open Access Journals (Sweden)
Danuta Owczarek
2015-08-01
Full Text Available The paper presents a method for estimating the uncertainty of optical coordinate measurement based on the use of information about the geometry and size of the measured object, as well as information about the measurement system, i.e. the maximum permissible error (MPE) of the machine, the selection of a sensor, the required measurement accuracy, the number of operators, the measurement strategy and the external conditions, all contained in the developed uncertainty database. Uncertainty is estimated using the uncertainties of measurements of basic geometry elements, determined by methods available in the Laboratory of Coordinate Metrology at Cracow University of Technology (LCM CUT) (multi-position, comparative, and a method developed in the LCM CUT dedicated to non-contact measurements), which are then used to determine the uncertainty of a given measured object. The research presented in this paper is aimed at developing a complete database containing all information needed to estimate the measurement uncertainty of various objects, even those of very complex geometry, based on previously performed measurements.
Uncertainty during breast diagnostic evaluation: state of the science.
Montgomery, Mariann
2010-01-01
To present the state of the science on uncertainty in relationship to the experiences of women undergoing diagnostic evaluation for suspected breast cancer. Published articles from Medline, CINAHL, PubMed, and PsycINFO from 1983-2008 using the following key words: breast biopsy, mammography, uncertainty, reframing, inner strength, and disruption. Fifty research studies were examined, all reporting the presence of anxiety persisting throughout the diagnostic evaluation until certitude is achieved through the establishment of a definitive diagnosis. Indirect determinants of uncertainty for women undergoing breast diagnostic evaluation include measures of anxiety, depression, social support, emotional responses, defense mechanisms, and the psychological impact of events. Understanding and influencing the uncertainty experience have been suggested to be key in relieving psychosocial distress and positively influencing future screening behaviors. Several studies examine correlational relationships among anxiety, selection of coping methods, and demographic factors that influence uncertainty. A gap exists in the literature with regard to the relationship of inner strength and uncertainty. Nurses can be invaluable in assisting women in coping with the uncertainty experience by providing positive communication and support. Nursing interventions should be designed and tested for their effects on uncertainty experienced by women undergoing a breast diagnostic evaluation.
Bias and robustness of uncertainty components estimates in transient climate projections
Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections, along with the contribution of the different uncertainty sources: scenario uncertainty, the different components of model uncertainty, and internal variability. Quantifying the different uncertainty sources raises several difficulties. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of a given chain and the effect of its internal variability may actually be difficult, if not impossible, to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely to lack robustness. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based on the only data available for the considered projection lead time, and a time-series-based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two sources only: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias
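The bias in the single-time estimate can be demonstrated on synthetic ensembles: with m members per chain, the empirical variance of chain means overestimates model uncertainty by σ²_int/m, and subtracting that term gives the classical unbiased estimator. This is a toy setup, not the paper's estimators in full:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy single-time ANOVA setup: G modeling chains with m members each.
# sigma_mod is the true model uncertainty, sigma_int the internal
# variability; both values are invented for this synthetic demonstration.
G, m = 200, 3
sigma_mod, sigma_int = 1.0, 2.0
true_resp = rng.normal(0.0, sigma_mod, G)             # chain climate responses
runs = true_resp[:, None] + rng.normal(0.0, sigma_int, (G, m))

chain_means = runs.mean(axis=1)
var_int = runs.var(axis=1, ddof=1).mean()             # internal variability
var_mod_biased = chain_means.var(ddof=1)              # inflated by sigma_int^2/m
var_mod_unbiased = var_mod_biased - var_int / m       # bias-corrected estimate
print(var_mod_biased, var_mod_unbiased, var_int)
```

With few chains and few members, the corrected estimate itself becomes noisy (and can even go negative), which is exactly the robustness issue the abstract raises.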
Estimation of flow accumulation uncertainty by Monte Carlo stochastic simulations
Višnjevac Nenad; Cvijetinović Željko; Bajat Branislav; Radić Boris; Ristić Ratko; Milčanović Vukašin
2013-01-01
Very often, outputs provided by GIS functions and analyses are assumed to be exact results. However, they are influenced by a certain uncertainty which may affect the decisions based on those results. Calculating that uncertainty using classical mathematical models is very complex, and almost impossible, because of the complex algorithms used in GIS analyses. In this paper we discuss an alternative method, i.e. the use of stochastic Monte Carlo simul...
Data driven uncertainty evaluation for complex engineered system design
Liu, Boyuan; Huang, Shuangxi; Fan, Wenhui; Xiao, Tianyuan; Humann, James; Lai, Yuyang; Jin, Yan
2016-09-01
Complex engineered systems are often difficult to analyze and design due to the tangled interdependencies among their subsystems and components. Conventional design methods often need exact modeling or accurate structure decomposition, which limits their practical application. The rapid expansion of data makes utilizing data to guide and improve system design indispensable in practical engineering. In this paper, a data driven uncertainty evaluation approach is proposed to support the design of complex engineered systems. The core of the approach is a data-mining-based uncertainty evaluation method that predicts the uncertainty level of a specific system design by analyzing association relations along different system attributes and synthesizing the information entropy of the covered attribute areas; a quantitative measure of system uncertainty can be obtained accordingly. Monte Carlo simulation is introduced to obtain the uncertainty extrema, and the possible data distributions under different situations are discussed in detail. The uncertainty values can be normalized using the simulation results, and the normalized values can be used to evaluate different system designs. A prototype system is established, and two case studies have been carried out. The case of an inverted pendulum system validates the effectiveness of the proposed method, and the case of an oil sump design shows the practicability when two or more design plans need to be compared. This research can be used to evaluate the uncertainty of complex engineered systems relying completely on data, and is ideally suited for plan selection and performance analysis in system design.
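The information-entropy ingredient of such a data-driven measure can be sketched as follows (a generic Shannon entropy over an attribute's observed values; the paper's actual synthesis over covered attribute areas is more involved than this):

```python
import math
from collections import Counter

# Shannon entropy of an attribute's observed values: higher entropy means the
# historical designs covering this attribute area are more scattered, which
# is read here as a higher uncertainty level for a new design in that area
def attribute_entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

low = attribute_entropy(["stable"] * 8)             # all past designs agree
high = attribute_entropy(["a", "b", "c", "d"] * 2)  # past designs scattered
```

A design falling in the high-entropy region would be flagged as more uncertain.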
Survey and Evaluate Uncertainty Quantification Methodologies
Energy Technology Data Exchange (ETDEWEB)
Lin, Guang; Engel, David W.; Eslinger, Paul W.
2012-02-01
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon
Energy Technology Data Exchange (ETDEWEB)
Kristof, Marian [AiNS, Na hlinach 51, 917 01 Trnava (Slovakia); Department of Nuclear Physics and Technology, Slovak University of Technology, Ilkovicova 3, 812 19 Bratislava (Slovakia)], E-mail: marian.kristof@ains.sk; Kliment, Tomas [VUJE a.s., Okruzna 5, 918 64 Trnava (Slovakia); Petruzzi, Alessandro [Nuclear Research Group of San Piero a Grado, University of Pisa (Italy); Lipka, Jozef [Department of Nuclear Physics and Technology, Slovak University of Technology, Ilkovicova 3, 812 19 Bratislava (Slovakia)
2009-11-15
Licensing calculations in a majority of countries worldwide still rely on a combined approach: a best-estimate computer code, applied without evaluation of the uncertainty of the code models, together with conservative assumptions on initial and boundary conditions, on the availability of systems and components, and additional conservative assumptions. However, the best estimate plus uncertainty (BEPU) approach, representing the state of the art in the area of safety analysis, has a clear potential to replace the currently used combined approach. There are several applications of the BEPU approach in the area of licensing calculations, but some questions remain under discussion, notably from the regulatory point of view. In order to find a proper solution to these questions and to support the BEPU approach in becoming a standard approach for licensing calculations, a broad comparison of both approaches for various transients is necessary. Results of one such comparison, on the example of the VVER-440/213 NPP pressurizer surge line break event, are described in this paper. A Kv-scaled simulation based on the PH4-SLB experiment from the PMK-2 integral test facility, applying its volume and power scaling factor, is performed for qualitative assessment of the RELAP5 computer code calculation using the VVER-440/213 plant model. Existing hardware differences are identified and explained. The CIAU method is adopted for performing the uncertainty evaluation. Results using the combined and BEPU approaches are in agreement with the experimental values from the PMK-2 facility. Only a minimal difference between the combined and BEPU approaches has been observed in the evaluation of the safety margins for the peak cladding temperature. Benefits of the CIAU uncertainty method are highlighted.
Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data.
Shao, Kan; Gift, Jeffrey S
2014-01-01
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approaches do not explicitly address model uncertainty, and there is a need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method taking model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the "hybrid" method proposed by Crump, two strategies of BMA, including both "maximum likelihood estimation based" and "Markov Chain Monte Carlo based" methods, are first applied as a demonstration to calculate model averaged BMD estimates from real continuous dose-response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates have higher reliability than the estimates from the individual models with highest posterior weight, in terms of higher BMDLs and smaller 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study suggest that the BMA BMD estimates have smaller bias than the BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose-response data. © 2013 Society for Risk Analysis.
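The averaging step can be sketched with invented numbers (the per-model BMDs, posterior spreads, and log marginal likelihoods below are hypothetical, not the paper's data): weight each fitted model by its posterior probability, then pool draws from the weighted mixture to get a model-averaged BMD and BMDL.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical BMD estimates from three fitted dose-response models, with
# assumed posterior standard deviations and log marginal likelihoods
bmds = np.array([2.1, 2.6, 3.0])
sds = np.array([0.30, 0.25, 0.40])
log_ml = np.array([-10.2, -9.8, -11.5])

# posterior model weights (equal model priors assumed)
w = np.exp(log_ml - log_ml.max())
w /= w.sum()

bmd_avg = w @ bmds                       # model-averaged point estimate

# Monte Carlo flavor of the averaging: draw from the weighted mixture and
# take the 5th percentile of the pooled draws as a model-averaged BMDL
idx = rng.choice(len(bmds), size=100_000, p=w)
draws = rng.normal(bmds[idx], sds[idx])
bmdl = np.percentile(draws, 5)
```

The mixture-based BMDL reflects both within-model and between-model uncertainty, which is the point of the BMA approach.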
Influence of parameter estimation uncertainty in Kriging: Part 1 - Theoretical Development
Todini, E.
2001-01-01
This paper deals with a theoretical approach to assessing the effects of parameter estimation uncertainty both on Kriging estimates and on their estimated error variance. Although a comprehensive treatment of parameter estimation uncertainty is covered by full Bayesian Kriging at the cost of extensive numerical integration, the proposed approach has a wide field of application, given its relative simplicity. The approach is based upon a truncated Taylor expansion approximation and, within the...
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
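The equivalence described above can be checked numerically in a toy setting (the three data points and uncertainty levels below are invented): the conventional likelihood uses the full experimental covariance with matrix inversion, while the sampled version averages independent-Gaussian likelihoods over random systematic shifts and converges to the same value.

```python
import numpy as np

rng = np.random.default_rng(1)

model = np.array([1.0, 2.0, 3.0])   # model predictions for three points
exp = np.array([1.2, 2.1, 3.3])     # experimental values (invented numbers)
sig_r, sig_s = 0.2, 0.3             # random and fully correlated systematic std

n = len(exp)
d = exp - model

# conventional likelihood: multivariate Gaussian, C = sig_r^2 I + sig_s^2 11^T
C = sig_r**2 * np.eye(n) + sig_s**2 * np.ones((n, n))
L_exact = np.exp(-0.5 * d @ np.linalg.solve(C, d)) \
    / np.sqrt((2 * np.pi) ** n * np.linalg.det(C))

# sampled likelihood: average independent-Gaussian likelihoods over sampled
# systematic shifts, avoiding the matrix inversion entirely
shifts = rng.normal(0.0, sig_s, size=200_000)
r = d[None, :] - shifts[:, None]
L_sampled = np.mean(
    np.exp(-0.5 * np.sum((r / sig_r) ** 2, axis=1))
    / (np.sqrt(2 * np.pi) * sig_r) ** n
)
```

As the abstract notes, the sampled estimate converges, but slowly; here a large shift sample is needed to match a single linear solve.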
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin, in southeastern Brazil, was selected for developing the studies. In this paper, we used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic [Normal likelihood - r ~ N(0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach was adequate for the proposed objectives, and reinforced the importance of assessing the uncertainties associated with hydrological modeling.
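The sampling machinery can be sketched in miniature (a plain Metropolis random walk on a one-parameter toy model with the classic Normal likelihood; DREAM is an adaptive multi-chain refinement of this idea, and the model and numbers below are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# toy "rainfall-runoff" model: flow = k * rain, with unknown parameter k
rain = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k_true, sigma = 0.6, 0.1
obs = k_true * rain + rng.normal(0.0, sigma, rain.size)

def log_like(k):
    # classic choice (i): Normal residuals, r ~ N(0, sigma^2)
    r = obs - k * rain
    return -0.5 * np.sum((r / sigma) ** 2)

# plain Metropolis random walk over the parameter
k, ll, chain = 1.0, log_like(1.0), []
for _ in range(20_000):
    prop = k + rng.normal(0.0, 0.05)
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        k, ll = prop, ll_prop
    chain.append(k)

post = np.array(chain[5_000:])            # discard burn-in
print(f"k = {post.mean():.3f} +/- {post.std(ddof=1):.3f}")
```

The posterior spread of `post` is exactly the parameter uncertainty the abstract is after; residual checks would then validate the likelihood choice.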
Energy Technology Data Exchange (ETDEWEB)
Habte, A.; Sengupta, M.; Reda, I.
2015-03-01
Radiometric data with known and traceable uncertainty are essential for climate change studies, to better understand cloud-radiation interactions and the earth radiation budget. Further, adopting a known and traceable method of estimating uncertainty with respect to SI ensures that the uncertainty quoted for radiometric measurements can be compared on the basis of documented methods of derivation. Statements about the overall measurement uncertainty can therefore only be made on an individual basis, taking all relevant factors into account. This poster provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements from radiometers. The approach follows the Guide to the Expression of Uncertainty in Measurement (GUM).
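A GUM-style combination for a radiometer-like measurement can be sketched as follows (the measurement equation and all numbers are illustrative, not from the poster): for a quotient, relative standard uncertainties combine in quadrature, and the expanded uncertainty uses a coverage factor k = 2.

```python
import math

# assumed irradiance measurement E = V / R: voltage reading V divided by
# the calibrated responsivity R (names and values are illustrative)
V, u_V = 8.0e-3, 4.0e-6        # volts, with standard uncertainty
R, u_R = 8.0e-6, 6.0e-8        # V per (W/m^2), with standard uncertainty

E = V / R

# for a product/quotient, relative variances add (GUM law of propagation)
u_rel = math.sqrt((u_V / V) ** 2 + (u_R / R) ** 2)
u_E = E * u_rel                 # combined standard uncertainty
U = 2.0 * u_E                   # expanded uncertainty, coverage factor k = 2

print(f"E = {E:.1f} +/- {U:.1f} W/m^2 (k=2)")
```

The per-instrument budget is what makes the "individual basis" statement in the abstract concrete: each input uncertainty must be assessed for that radiometer.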
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
Czech Academy of Sciences Publication Activity Database
Yu, X.; Lamačová, Anna; Duffy, Ch.; Krám, P.; Hruška, Jakub
2016-01-01
Roč. 90, part B (2016), s. 90-101 ISSN 0098-3004 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:67179843 Keywords: Uncertainty * Evapotranspiration * Forest management * PIHM * Biome-BGC Subject RIV: EH - Ecology, Behaviour Impact factor: 2.533, year: 2016
Dalla Chiara, Maria Luisa
2010-09-01
In contemporary science uncertainty is often represented as an intrinsic feature of natural and of human phenomena. As an example we need only think of two important conceptual revolutions that occurred in physics and logic during the first half of the twentieth century: (1) the discovery of Heisenberg's uncertainty principle in quantum mechanics; (2) the emergence of many-valued logical reasoning, which gave rise to so-called 'fuzzy thinking'. I discuss the possibility of applying the notions of uncertainty, developed in the framework of quantum mechanics, quantum information and fuzzy logics, to some problems of political and social sciences.
Directory of Open Access Journals (Sweden)
Gurkan eSin
2015-02-01
Full Text Available Capital investment, next to product demand, sales, and production costs, is one of the key metrics commonly used for project evaluation and feasibility assessment. Estimating the investment costs of a new product/process alternative during early-stage design is a challenging task, especially in biorefinery research, where available information and experience with new technologies are limited. A systematic methodology for uncertainty analysis of cost data is proposed that employs (a) bootstrapping as a regression method when cost data are available, and (b) the Monte Carlo technique as an error propagation method based on expert input when cost data are not available. Four well-known models for early-stage cost estimation are reviewed and analyzed using the methodology. The significance of uncertainties in cost data for early-stage process design is highlighted using the synthesis and design of a biorefinery as a case study. The impact of uncertainties in cost estimation on the identification of optimal processing paths is found to be profound. To tackle this challenge, a comprehensive techno-economic risk analysis framework is presented to enable robust decision making under uncertainty. One of the results using an order-of-magnitude estimate shows that the production of diethyl ether and 1,3-butadiene are the most promising, with economic risks due to uncertainties in cost estimation of 0.24 MM$/a and 4.6 MM$/a, respectively.
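The bootstrapping ingredient, option (a) above, can be sketched with invented capacity-cost data (a power-law cost model fitted in log space; the data and model form are illustrative, not the paper's): resample the data points with replacement, refit, and read the spread of the fitted parameter as its uncertainty.

```python
import numpy as np

rng = np.random.default_rng(5)

# toy capacity-cost data (assumed): cost = a * capacity^b, fitted in log space
capacity = np.array([10., 20., 40., 80., 160., 320.])
cost = np.array([1.1, 1.9, 3.2, 5.8, 9.7, 17.5])

X = np.vstack([np.ones_like(capacity), np.log(capacity)]).T
y = np.log(cost)

def fit_exponent(X, y):
    # least-squares fit of log(cost) = log(a) + b*log(capacity); return b
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

b_hat = fit_exponent(X, y)

# bootstrap: refit on resampled data points to estimate the exponent's spread
n_boot = 2000
idx = rng.integers(0, len(y), size=(n_boot, len(y)))
b_boot = np.array([fit_exponent(X[i], y[i]) for i in idx])

print(f"scaling exponent b = {b_hat:.2f} +/- {b_boot.std(ddof=1):.2f}")
```

The same spread, pushed through the cost model, yields the cost-estimate uncertainty that drives the risk figures quoted in the abstract.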
Improving uncertainty estimates: Inter-annual variability in Ireland
Pullinger, D.; Zhang, M.; Hill, N.; Crutchley, T.
2017-11-01
This paper addresses the uncertainty associated with inter-annual variability used within wind resource assessments for Ireland, in order to more accurately represent the uncertainties within wind resource and energy yield assessments. The study was undertaken using a total of 16 ground stations (Met Eireann) and corresponding reanalysis datasets, providing an update to previous work on this topic undertaken nearly 20 years ago. The results demonstrate that the previously reported 5.4% wind speed inter-annual variability remains appropriate, and guidance is given on how to provide a robust assessment of IAV using available sources of data, including ground stations, MERRA-2 and ERA-Interim.
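The quantity being estimated can be sketched as follows (the 20-year record below is synthetic; real assessments use station and reanalysis data as the abstract describes): inter-annual variability is the coefficient of variation of annual-mean wind speeds, and it scales down when averaged over a multi-year project horizon.

```python
import numpy as np

rng = np.random.default_rng(3)

# assumed annual-mean wind speeds (m/s) for a 20-year record at one station
annual = rng.normal(7.5, 0.4, 20)

# inter-annual variability as the coefficient of variation of annual means
iav = annual.std(ddof=1) / annual.mean()

# uncertainty on a future 10-year mean wind speed, assuming independent years
u10 = iav / np.sqrt(10)

print(f"IAV = {100 * iav:.1f}%, 10-year-mean uncertainty = {100 * u10:.1f}%")
```

It is this station-level IAV figure that the paper benchmarks against the previously reported 5.4%.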
de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Barbina, Maria; Fajgelj, Ales; Jacimovic, Radojko; Jeran, Zvonka; Menegon, Sandro; Pati, Alessandra; Petruzzelli, Giannantonio; Sansone, Umberto; Van der Perk, Marcel
2008-01-01
In the frame of the international SOILSAMP project, funded and coordinated by the National Environmental Protection Agency of Italy (APAT), uncertainties due to field soil sampling were assessed. Three different sampling devices were applied in an agricultural area using the same sampling protocol. Cr, Sc and Zn mass fractions in the collected soil samples were measured by k(0)-instrumental neutron activation analysis (k(0)-INAA). For each element-device combination the experimental variograms were calculated using geostatistical tools. The variogram parameters were used to estimate the standard uncertainty arising from sampling. The sampling component represents the dominant contribution to the measurement uncertainty, with a sampling uncertainty to measurement uncertainty ratio ranging between 0.6 and 0.9. The approach based on the use of variogram parameters leads to uncertainty values for the sampling component in agreement with those estimated by the replicate sampling approach.
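The geostatistical starting point can be sketched as follows (a one-dimensional empirical semivariogram on invented transect data; the study works with fitted variogram parameters, whose near-origin behavior reflects the sampling-plus-measurement variance):

```python
import numpy as np

# toy transect of Cr mass fractions (mg/kg) at 1 m spacing (invented values)
z = np.array([50.0, 50.2, 50.5, 50.9, 51.0, 51.6, 51.9, 52.4])

def semivariance(z, lag):
    # empirical semivariogram value at the given lag:
    # half the mean squared difference between points that far apart
    d = z[lag:] - z[:-lag]
    return 0.5 * np.mean(d ** 2)

gamma = [semivariance(z, h) for h in (1, 2, 3)]
print(gamma)
```

Fitting a model (nugget, sill, range) to such values and reading off the short-range component is, in spirit, how a sampling standard uncertainty is extracted.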
Lyn, Jennifer A; Ramsey, Michael H; Damant, Andrew P; Wood, Roger
2007-12-01
Measurement uncertainty is a vital issue within analytical science. There are strong arguments that primary sampling should be considered the first and perhaps the most influential step in the measurement process. Increasingly, analytical laboratories are required to report measurement results to clients together with estimates of the uncertainty. Furthermore, these estimates can be used when pursuing regulation enforcement to decide whether a measured analyte concentration is above a threshold value. With its recognised importance in analytical measurement, the question arises of what is the most appropriate method to estimate the measurement uncertainty. Two broad methods for uncertainty estimation are identified: the modelling method and the empirical method. In modelling, the estimation of uncertainty involves the identification, quantification and summation (as variances) of each potential source of uncertainty. This approach has been applied to purely analytical systems, but becomes increasingly problematic in identifying all such sources when it is applied to primary sampling. Applications of this methodology to sampling often utilise long-established theoretical models of sampling and adopt the assumption that a 'correct' sampling protocol will ensure a representative sample. The empirical approach to uncertainty estimation involves replicated measurements from either inter-organisational trials and/or internal method validation and quality control. A simpler method involves duplicating sampling and analysis, by one organisation, for a small proportion of the total number of samples. This has proven to be a suitable alternative to these often expensive and time-consuming trials, in routine surveillance and one-off surveys, especially where heterogeneity is the main source of uncertainty. A case study of aflatoxins in pistachio nuts is used to broadly demonstrate the strengths and weaknesses of the two methods of uncertainty estimation. The estimate
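The duplicate method mentioned above can be sketched with toy numbers (a balanced design with 2 samples per sampling target and 2 analyses per sample; the data are invented): a simple ANOVA-style decomposition separates the sampling and analytical variance components.

```python
import numpy as np

# duplicate method: 2 samples per target, 2 analyses per sample (toy data)
# array shape: (targets, samples, analyses)
x = np.array([
    [[10.2, 10.4], [11.0, 10.8]],
    [[ 9.6,  9.5], [ 9.9, 10.1]],
    [[12.1, 12.4], [11.6, 11.8]],
])

# analytical variance: spread between repeated analyses of the same sample
var_anal = x.var(axis=2, ddof=1).mean()

# sampling variance: spread between sample means within a target, minus the
# analytical contribution carried by each 2-analysis mean
sample_means = x.mean(axis=2)
var_samp = sample_means.var(axis=1, ddof=1).mean() - var_anal / 2

u_meas = np.sqrt(var_samp + var_anal)    # combined measurement uncertainty
print(f"u(sampling) = {np.sqrt(max(var_samp, 0)):.2f}, "
      f"u(analysis) = {np.sqrt(var_anal):.2f}, u(meas) = {u_meas:.2f}")
```

When `var_samp` dominates, heterogeneity is the main uncertainty source, which is exactly the situation where the abstract recommends this empirical approach.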
Developing first time-series of land surface temperature from AATSR with uncertainty estimates
Ghent, Darren; Remedios, John
2013-04-01
Land surface temperature (LST) is the radiative skin temperature of the land, and is one of the key parameters in the physics of land-surface processes on regional and global scales. Earth Observation satellites provide the opportunity to obtain global coverage of LST approximately every 3 days or less. One such source of satellite-retrieved LST has been the Advanced Along-Track Scanning Radiometer (AATSR), with LST retrieval being implemented in the AATSR Instrument Processing Facility in March 2004. Here we present first regional and global time-series of LST data from AATSR with estimates of uncertainty. Mean changes in temperature over the last decade will be discussed along with regional patterns. Although time-series across all three ATSR missions have previously been constructed (Kogler et al., 2012), the use of low resolution auxiliary data in the retrieval algorithm and non-optimal cloud masking resulted in time-series artefacts. As such, considerable ESA-supported development has been carried out on the AATSR data to address these concerns. This includes the integration of high resolution auxiliary data into the retrieval algorithm and subsequent generation of coefficients and tuning parameters, plus the development of an improved cloud mask based on the simulation of clear sky conditions from radiance transfer modelling (Ghent et al., in prep.). Any inference from this LST record is, however, of limited value without an accompanying uncertainty estimate; the Joint Committee for Guides in Metrology defines uncertainty as "a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand", the measurand being the value of the particular quantity to be measured. Furthermore, pixel level uncertainty fields are a mandatory requirement in the on-going preparation of the LST product for the upcoming Sea and Land Surface Temperature (SLSTR) instrument on-board Sentinel-3
Uncertainty evaluation in numerical modeling of complex devices
Cheng, X.; Monebhurrun, V.
2014-10-01
Numerical simulation is an efficient tool for exploring and understanding the physics of complex devices, e.g. mobile phones. For meaningful results, it is important to evaluate the uncertainty of the numerical simulation. Uncertainty quantification in specific absorption rate (SAR) calculation using a full computer-aided design (CAD) mobile phone model is a challenging task. Since a typical SAR numerical simulation is computationally expensive, the traditional Monte Carlo (MC) simulation method proves inadequate. The unscented transformation (UT) is an alternative and numerically efficient method herein investigated to evaluate the uncertainty in the SAR calculation using the realistic models of two commercially available mobile phones. The electromagnetic simulation process is modeled as a nonlinear mapping, with the uncertainty in the inputs, e.g. the relative permittivity values of the mobile phone material properties, inducing an uncertainty in the output, e.g. the peak spatial-average SAR value. The numerical simulation results demonstrate that the UT may be a potential candidate for uncertainty quantification in SAR calculations, since only a few simulations are necessary to obtain results similar to those obtained after hundreds or thousands of MC simulations.
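The unscented transformation itself is simple to state (the sigma-point scheme below is a standard textbook variant; the nonlinear function stands in for the electromagnetic solver and is invented, as are the input statistics):

```python
import numpy as np

def unscented_propagate(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through a nonlinear f using 2n+1 sigma points."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in pts])
    m = w @ y                              # output mean
    return m, np.sqrt(w @ (y - m) ** 2)    # output mean and std

# toy nonlinear mapping standing in for the EM solver (not a real SAR code):
# output depends quadratically on one permittivity and inversely on another
f = lambda x: x[0] ** 2 / x[1]
m, s = unscented_propagate(np.array([4.0, 2.0]), np.diag([0.04, 0.01]), f)
print(f"output = {m:.2f} +/- {s:.2f} (5 solver runs instead of thousands)")
```

With n uncertain inputs the UT needs only 2n+1 solver runs, which is the computational advantage over MC that the abstract reports.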
Directory of Open Access Journals (Sweden)
Ali P. Yunus
2016-04-01
Full Text Available Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and the accuracy of topographic data. Here, the areas at risk of sea water flooding for London boroughs were quantified based on the projected SLR scenarios reported in the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5) and the UK climate projections 2009 (UKCP09), using a tidally-adjusted bathtub modelling approach. Medium- to very high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root mean square error differences observed between the two data types, which may be attributed to processing levels. Flood models from SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and the uncertainties in DEM-based bathtub-type flood inundation modelling for London boroughs.
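The bathtub model at the core of this analysis is conceptually minimal (the grid and resolution below are invented; real applications add tidal adjustment and often a connectivity-to-sea check): every DEM cell at or below the projected water level is counted as inundated.

```python
import numpy as np

# toy DEM of elevations above current mean sea level (m)
dem = np.array([
    [0.2, 0.5, 1.4],
    [0.3, 0.9, 2.0],
    [1.1, 1.6, 2.5],
])
cell_area = 25.0                      # m^2 per cell (assumed 5 m resolution)

def inundated_area(dem, slr, surge=0.0):
    # bathtub model: every cell at or below the water level floods
    return np.count_nonzero(dem <= slr + surge) * cell_area

for slr in (0.3, 0.6, 1.0):
    print(f"SLR {slr} m -> {inundated_area(dem, slr):.0f} m^2 flooded")
```

Running the same thresholding on a DTM versus a DSM, or on DEMs with different vertical errors, is how the abstract's inundation uncertainty is explored.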
Seidel, Dian J.; Ao, Chi O.; Li, Kun
2010-08-01
Planetary boundary layer (PBL) processes control energy, water, and pollutant exchanges between the surface and free atmosphere. However, there is no observation-based global PBL climatology for evaluation of climate, weather, and air quality models or for characterizing PBL variability on large space and time scales. As groundwork for such a climatology, we compute PBL height by seven methods, using temperature, potential temperature, virtual potential temperature, relative humidity, specific humidity, and refractivity profiles from a 10 year, 505-station radiosonde data set. Six methods are directly compared; they generally yield PBL height estimates that differ by several hundred meters. Relative humidity and potential temperature gradient methods consistently give higher PBL heights, whereas the parcel (or mixing height) method yields significantly lower heights that show larger and more consistent diurnal and seasonal variations (with lower nighttime and wintertime PBLs). Seasonal and diurnal patterns are sometimes associated with local climatological phenomena, such as nighttime radiation inversions, the trade inversion, and tropical convection and associated cloudiness. Surface-based temperature inversions are a distinct type of PBL that is more common at night and in the morning than during midday and afternoon, in polar regions than in the tropics, and in winter than other seasons. PBL height estimates are sensitive to the vertical resolution of radiosonde data; standard sounding data yield higher PBL heights than high-resolution data. Several sources of both parametric and structural uncertainty in climatological PBL height values are estimated statistically; each can introduce uncertainties of a few hundred meters.
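One of the methods compared above, the parcel (mixing-height) method, can be sketched on a toy sounding (heights and potential temperatures below are invented): the PBL top is taken as the lowest level where the potential temperature exceeds its surface value.

```python
import numpy as np

# toy sounding: heights (m) and potential temperature (K), invented numbers
z = np.array([0, 100, 300, 600, 1000, 1500])
theta = np.array([300.0, 299.8, 300.1, 300.9, 302.0, 304.0])

# parcel method: lowest level where theta exceeds the surface value,
# i.e. where a surface parcel would stop rising
pbl_height = z[np.argmax(theta > theta[0])]
print(pbl_height)  # -> 300
```

A different choice of profile variable or threshold (relative humidity drop, theta gradient maximum) gives a different height on the same sounding, which is the several-hundred-meter method spread the abstract reports.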
DEFF Research Database (Denmark)
Müller, Pavel; Hiller, Jochen; Dai, Y.
2014-01-01
This paper presents the application of the substitution method for the estimation of measurement uncertainties using calibrated workpieces in X-ray computed tomography (CT) metrology. We have shown that this well-accepted method for uncertainty estimation using tactile coordinate measuring machines can be applied to dimensional CT measurements. The method is based on repeated measurements carried out on a calibrated master piece. The master piece is a component of a dose engine from an insulin pen. Measurement uncertainties estimated from the repeated measurements of the master piece were...
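The substitution-method budget can be sketched as follows (in the style of the ISO 15530-3 approach for calibrated workpieces; all values are illustrative, not the paper's results): the calibration, procedure, and workpiece contributions combine in quadrature, and an uncorrected bias is added arithmetically.

```python
import math

# substitution-method budget (illustrative values in mm)
u_cal = 0.0006   # calibration uncertainty of the master piece
u_p = 0.0011     # standard deviation of repeated CT measurements
u_w = 0.0004     # workpiece-related contribution (material, temperature)
b = 0.0008       # observed systematic error against the calibrated value

# expanded uncertainty, k = 2, with the uncorrected bias added on top
U = 2 * math.sqrt(u_cal**2 + u_p**2 + u_w**2) + abs(b)
print(f"U = {U * 1000:.1f} um")
```

The repeated master-piece measurements described in the abstract supply `u_p` and `b`; the calibration certificate supplies `u_cal`.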
Kenneth E. Skog; Kim Pingoud; James E. Smith
2004-01-01
A method is suggested for estimating additions to carbon stored in harvested wood products (HWP) and for evaluating uncertainty. The method uses data on HWP production and trade from several decades and tracks annual additions to pools of HWP in use, removals from use, additions to solid waste disposal sites (SWDS), and decay from SWDS. The method is consistent with...
Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.
Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja
2015-06-01
Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated measurement uncertainty of the concentration of glucose according to CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach. We identified the sources of uncertainty and made an uncertainty budget and assessed the measurement functions. We determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using certified reference material of glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up approach and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
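The top-down arithmetic can be sketched with invented numbers of the same order as those quoted above (IQC imprecision plus a CRM-based bias term; the values below are illustrative, not the study's raw data):

```python
import math

# top-down estimate from long-term IQC data plus a CRM bias check (toy numbers)
iqc_sd, iqc_mean = 0.085, 5.57          # mmol/L, long-term imprecision, low level
u_crm = 0.02                            # standard uncertainty of the CRM value
lab_sd_crm, n_crm = 0.06, 10            # lab SD on the CRM and replicate count

# uncertainty of the bias term: CRM uncertainty plus uncertainty of the lab mean
u_bias = math.sqrt(u_crm**2 + (lab_sd_crm / math.sqrt(n_crm))**2)

u_c = math.sqrt(iqc_sd**2 + u_bias**2)  # combined standard uncertainty
U = 2.0 * u_c                           # expanded uncertainty, k = 2
print(f"glucose {iqc_mean} mmol/L: U = +/-{U:.2f} mmol/L (k=2)")
```

With these assumed inputs the expanded uncertainty lands near the +/-0.18 mmol/L magnitude reported in the abstract, illustrating why the simpler top-down route can substitute for a full bottom-up budget.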
Leaf area index uncertainty estimates for model-data fusion applications
Andrew D. Richardson; D. Bryan Dail; D.Y. Hollinger
2011-01-01
Estimates of data uncertainties are required to integrate different observational data streams as model constraints using model-data fusion. We describe an approach with which random and systematic uncertainties in optical measurements of leaf area index [LAI] can be quantified. We use data from a measurement campaign at the spruce-dominated Howland Forest AmeriFlux...
A Stochastic Method for Estimating the Effect of Isotopic Uncertainties in Spent Nuclear Fuel
Energy Technology Data Exchange (ETDEWEB)
DeHart, M.D.
2001-08-24
This report describes a novel approach developed at the Oak Ridge National Laboratory (ORNL) for the estimation of the uncertainty in the prediction of the neutron multiplication factor for spent nuclear fuel. This technique focuses on burnup credit, where credit is taken in criticality safety analysis for the reduced reactivity of fuel irradiated in and discharged from a reactor. Validation methods for burnup credit have attempted to separate the uncertainty associated with isotopic prediction methods from that of criticality eigenvalue calculations. Biases and uncertainties obtained in each step are combined additively. This approach, while conservative, can be excessive because of the physical assumptions employed. This report describes a statistical approach based on Monte Carlo sampling to directly estimate the total uncertainty in eigenvalue calculations resulting from uncertainties in isotopic predictions. The results can also be used to demonstrate the relative conservatism and statistical confidence associated with the method of additively combining uncertainties. This report does not make definitive conclusions on the magnitude of biases and uncertainties associated with isotopic predictions in a burnup credit analysis. These terms will vary depending on system design and the set of isotopic measurements used as a basis for estimating isotopic variances. Instead, the report describes a method that can be applied with a given design and set of isotopic data for estimating design-specific biases and uncertainties.
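A minimal sketch of the Monte Carlo sampling idea, using a hypothetical first-order surrogate in place of a real criticality calculation; all isotopes, sensitivities, and uncertainties below are made up for illustration:

```python
import random
import statistics

random.seed(42)

# Hypothetical nominal isotopic concentrations (arbitrary units) and their
# relative standard uncertainties (illustrative, not measured values)
nominal = {"U235": 1.00, "Pu239": 0.40, "Sm149": 0.02}
rel_u   = {"U235": 0.02, "Pu239": 0.05, "Sm149": 0.10}

# Surrogate sensitivities of k-eff to each isotope; a stand-in for a real
# eigenvalue calculation, chosen only to make the sketch runnable
dk_dn = {"U235": 0.30, "Pu239": 0.25, "Sm149": -0.50}
k_nominal = 0.95

def keff(sample):
    # First-order surrogate: nominal k plus sensitivity-weighted perturbations
    return k_nominal + sum(dk_dn[i] * (sample[i] - nominal[i]) for i in nominal)

# Sample isotopic vectors and collect the resulting eigenvalues
samples = []
for _ in range(5000):
    perturbed = {i: random.gauss(nominal[i], rel_u[i] * nominal[i]) for i in nominal}
    samples.append(keff(perturbed))

print("mean k-eff:", statistics.mean(samples))
print("k-eff std (total uncertainty from isotopics):", statistics.stdev(samples))
```

The spread of the sampled eigenvalues directly estimates the total uncertainty, rather than combining per-isotope terms additively.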
Inaccuracy and Uncertainty in Estimates of College Student Suicide Rates.
Schwartz, Allan J.
1980-01-01
Inaccurate sample data and uncertain estimates are defined as obstacles to assessing the suicide rate among college students. A standardization of research and reporting services is recommended. (JMF)
Evaluation of Performance and Uncertainty of Infrared Tympanic Thermometers
Directory of Open Access Journals (Sweden)
Wenbin Chung
2010-03-01
Full Text Available Infrared tympanic thermometers (ITTs) are easy to use and have a quick response time. They are widely used for temperature measurement of the human body. Measurement accuracy and uncertainty are the most important performance indicators for these thermometers. The performance of two infrared tympanic thermometers, the Braun THT-3020 and the OMRON MC-510, was evaluated in this study. The cell of a temperature calibrator was modified to serve as the blackbody standard temperature. The measurement errors of the two meters were reduced by the calibration equation, and the predicted values could meet the requirements of the ASTM standard. The sources of uncertainty include the standard deviation of replicate measurements at a fixed temperature or of the predicted values of the calibration equation, the reference standard values, and the resolution. The uncertainty analysis shows that the uncertainty of the calibration equation is the main contributor to the combined uncertainty. Ambient temperature did not have a significant effect on the measured performance. The calibration equations could improve the accuracy of ITTs; however, they did not improve the uncertainty of ITTs.
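The calibration-equation step can be illustrated with a simple least-squares fit; the paired blackbody/meter readings below are hypothetical:

```python
import math

# Hypothetical paired readings: blackbody reference temperature vs. meter
# indication (deg C); values are illustrative only
ref   = [35.0, 36.0, 37.0, 38.0, 39.0, 40.0]
meter = [34.8, 35.9, 36.8, 37.9, 38.8, 39.9]

# Ordinary least-squares fit of a linear calibration equation
n = len(ref)
mx = sum(meter) / n
my = sum(ref) / n
b = (sum((x - mx) * (y - my) for x, y in zip(meter, ref))
     / sum((x - mx) ** 2 for x in meter))
a = my - b * mx

# Calibration equation: corrected temperature = a + b * indicated temperature
corrected = [a + b * x for x in meter]

# One uncertainty component of the calibration equation: the residual
# standard deviation of the fit (n - 2 degrees of freedom)
residuals = [y - yc for y, yc in zip(ref, corrected)]
u_cal = math.sqrt(sum(r ** 2 for r in residuals) / (n - 2))
print(f"calibration: T = {a:.3f} + {b:.4f} * reading, u_cal = {u_cal:.3f} degC")
```

In a full budget this residual term would be combined root-sum-square with the reference-standard and resolution components.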
Using interpolation to estimate system uncertainty in gene expression experiments.
Directory of Open Access Journals (Sweden)
Lee J Falin
Full Text Available The widespread use of high-throughput experimental assays designed to measure the entire complement of a cell's genes or gene products has led to vast stores of data that are extremely plentiful in terms of the number of items they can measure in a single sample, yet often sparse in the number of samples per experiment due to their high cost. This often leads to datasets where the number of treatment levels or time points sampled is limited, or where there are very small numbers of technical and/or biological replicates. Here we introduce a novel algorithm to quantify the uncertainty in the unmeasured intervals between biological measurements taken across a set of quantitative treatments. The algorithm provides a probabilistic distribution of possible gene expression values within unmeasured intervals, based on a plausible biological constraint. We show how quantification of this uncertainty can be used to guide researchers in further data collection by identifying which samples would likely add the most information to the system under study. Although the context for developing the algorithm was gene expression measurements taken over a time series, the approach can be readily applied to any set of quantitative systems biology measurements taken following quantitative (i.e., non-categorical) treatments. In principle, the method could also be applied to combinations of treatments, in which case it could greatly simplify the task of exploring the large combinatorial space of future possible measurements.
Employing Sensitivity Derivatives to Estimate Uncertainty Propagation in CFD
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III
2004-01-01
Two methods that exploit the availability of sensitivity derivatives are successfully employed to predict uncertainty propagation through a Computational Fluid Dynamics (CFD) code for an inviscid airfoil problem. An approximate statistical second-moment method and a Sensitivity Derivative Enhanced Monte Carlo (SDEMC) method are successfully demonstrated on a two-dimensional problem. First- and second-order sensitivity derivatives of code output with respect to code input are obtained through an efficient incremental iterative approach. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), these sensitivity derivatives enable one to formulate first- and second-order Taylor series approximations for the mean and variance of CFD output quantities. Additionally, incorporation of the first-order sensitivity derivatives into the data reduction phase of a conventional Monte Carlo (MC) simulation allows for improved accuracy in determining the first moment of the CFD output. Both methods are compared to results generated using a conventional MC method. The methods that exploit the availability of sensitivity derivatives are found to be valid when considering small deviations from input mean values.
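A toy illustration of the first-order second-moment idea against conventional Monte Carlo, using a made-up two-parameter output function rather than an actual CFD code:

```python
import math
import random

random.seed(1)

# Toy "CFD output" as a function of two input flow parameters
# (illustrative stand-in, not an actual CFD computation)
def f(mach, alpha):
    return mach ** 2 + 0.5 * math.sin(alpha)

mu = (0.8, 0.1)        # input means
sigma = (0.01, 0.005)  # input standard deviations (independent, normal)

# Analytic sensitivity derivatives evaluated at the input means
df_dmach = 2 * mu[0]
df_dalpha = 0.5 * math.cos(mu[1])

# First-order Taylor-series (second-moment) approximations
mean_taylor = f(*mu)
var_taylor = (df_dmach * sigma[0]) ** 2 + (df_dalpha * sigma[1]) ** 2

# Conventional Monte Carlo for comparison
vals = [f(random.gauss(mu[0], sigma[0]), random.gauss(mu[1], sigma[1]))
        for _ in range(20000)]
mean_mc = sum(vals) / len(vals)
var_mc = sum((v - mean_mc) ** 2 for v in vals) / (len(vals) - 1)

print(f"Taylor: mean={mean_taylor:.5f} var={var_taylor:.3e}")
print(f"MC:     mean={mean_mc:.5f} var={var_mc:.3e}")
```

As the abstract notes, the agreement holds for small input deviations; large deviations would expose the neglected higher-order terms.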
Uncertainty Analysis of Radar and Gauge Rainfall Estimates in the Russian River Basin
Cifelli, R.; Chen, H.; Willie, D.; Reynolds, D.; Campbell, C.; Sukovich, E.
2013-12-01
Radar Quantitative Precipitation Estimation (QPE) has been a very important application of weather radar since it was introduced and made widely available after World War II. Although great progress has been made over the last two decades, it is still a challenging process especially in regions of complex terrain such as the western U.S. It is also extremely difficult to make direct use of radar precipitation data in quantitative hydrologic forecasting models. To improve the understanding of rainfall estimation and distributions in the NOAA Hydrometeorology Testbed in northern California (HMT-West), extensive evaluation of radar and gauge QPE products has been performed using a set of independent rain gauge data. This study focuses on the rainfall evaluation in the Russian River Basin. The statistical properties of the different gridded QPE products will be compared quantitatively. The main emphasis of this study will be on the analysis of uncertainties of the radar and gauge rainfall products that are subject to various sources of error. The spatial variation analysis of the radar estimates is performed by measuring the statistical distribution of the radar base data such as reflectivity and by the comparison with a rain gauge cluster. The application of mean field bias values to the radar rainfall data will also be described. The uncertainty analysis of the gauge rainfall will be focused on the comparison of traditional kriging and conditional bias penalized kriging (Seo 2012) methods. This comparison is performed with the retrospective Multisensor Precipitation Estimator (MPE) system installed at the NOAA Earth System Research Laboratory. The independent gauge set will again be used as the verification tool for the newly generated rainfall products.
Estimating Uncertainties in the Multi-Instrument SBUV Profile Ozone Merged Data Set
Frith, Stacey; Stolarski, Richard
2015-01-01
The MOD data set is uniquely qualified for use in long-term ozone analysis because of its long record, high spatial coverage, and consistent instrument design and algorithm. The estimated MOD uncertainty term significantly increases the uncertainty over the statistical error alone. Trends in the post-2000 period are generally positive in the upper stratosphere, but only significant at 1-1.6 hPa. Remaining uncertainties not yet included in the Monte Carlo model are: smoothing error (~1 from 10 to 1 hPa), relative calibration uncertainty between N11 and N17, and seasonal cycle differences between SBUV records.
Evaluation of peaking factors uncertainty for CASMO-3
Energy Technology Data Exchange (ETDEWEB)
Kim, Kang Suk; Song, Jae Seung; Kim, Yong Rae; Ji, Seong Kyun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1996-02-01
This document evaluates the pin-to-box factor uncertainty of CASMO-3 with the 40-group J-library. Five CE criticals performed by Westinghouse, two by B&W, and four RPI criticals were analyzed using cross sections from CASMO-3. DOT was used for the core calculation. This is one of a series of efforts to verify the ADONIS procedure, a new core design package under development by KAERI. The expected outcome of this analysis is a CASMO-3 pin peak uncertainty applicable to CE-type fuel assembly designs. The evaluated uncertainty of peaking factors for CASMO-3 was 1.863%. 21 tabs., 23 figs., 12 refs. (Author)
Estimation of Data Uncertainty Adjustment Parameters for Multivariate Earth Rotation Series
Sung, Li-yu; Steppe, J. Alan
1994-01-01
We have developed a maximum likelihood method to estimate a set of data uncertainty adjustment parameters, including scaling factors and additive variances and covariances, for multivariate Earth rotation series.
Effects of uncertainty in model predictions of individual tree volume on large area volume estimates
Ronald E. McRoberts; James A. Westfall
2014-01-01
Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...
Evaluation of incremental reactivity and its uncertainty in Southern California.
Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G
2003-04-15
The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.
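Latin hypercube sampling, as used for the Monte Carlo uncertainty analysis above, can be sketched as follows; the parameter means and spreads are illustrative, not actual SAPRC99 rate parameters:

```python
import random
from statistics import NormalDist

random.seed(0)

def latin_hypercube_normal(n_samples, means, sds):
    """Latin hypercube sample of independent normal parameters: stratify
    [0, 1] into n_samples equal bins per parameter, draw one uniform point
    per bin, shuffle the bins independently for each parameter, then map
    through the inverse normal CDF."""
    cols = []
    for mu, sd in zip(means, sds):
        strata = [(i + random.random()) / n_samples for i in range(n_samples)]
        random.shuffle(strata)
        cols.append([NormalDist(mu, sd).inv_cdf(u) for u in strata])
    return list(zip(*cols))  # rows are parameter vectors

# Two illustrative uncertain parameters (e.g. rate constants)
samples = latin_hypercube_normal(100, means=[1.0, 0.5], sds=[0.2, 0.1])
print(samples[0])
```

Compared with plain random sampling, the stratification covers the tails of each input distribution with far fewer model runs, which is why it is favored for expensive photochemical simulations.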
DEFF Research Database (Denmark)
Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2011-01-01
the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width...
Estimating the uncertainty of the liquid mass flow using the orifice plate
Directory of Open Access Journals (Sweden)
Golijanek-Jędrzejczyk Anna
2017-01-01
Full Text Available The article presents the estimation of the measurement uncertainty of a liquid mass flow measured with an orifice plate. The subject is important because of the widespread use of this type of flow meter, which makes not only the quantitative results but also their quality significant. To this end, the authors propose to use uncertainty theory. The article analyzes the measurement uncertainty using two methods: one based on the "Guide to the Expression of Uncertainty in Measurement" (GUM) of the International Organization for Standardization, applying the law of propagation of uncertainty, and the other using the Monte Carlo numerical method. The paper presents a comparative analysis of the results obtained with both methods.
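A compact sketch comparing the two methods on a simplified ISO 5167-style orifice equation; the model form is simplified (discharge coefficient and expansibility lumped together) and all nominal values and uncertainties are assumed for illustration:

```python
import math
import random

random.seed(7)

# Simplified orifice-plate mass-flow model (illustrative)
def mass_flow(C, d, dp, rho, beta=0.5):
    area = math.pi / 4 * d ** 2
    return C / math.sqrt(1 - beta ** 4) * area * math.sqrt(2 * dp * rho)

# Assumed nominal inputs and independent standard uncertainties
nom = {"C": 0.61, "d": 0.05, "dp": 25000.0, "rho": 998.0}
u   = {"C": 0.003, "d": 0.0001, "dp": 150.0, "rho": 1.0}

q0 = mass_flow(**nom)

# GUM law of propagation in relative-sensitivity form: for a product model,
# the exponent of each input gives its relative sensitivity
# (C: 1, d: 2, dp: 1/2, rho: 1/2, ignoring the weak beta dependence)
expo = {"C": 1.0, "d": 2.0, "dp": 0.5, "rho": 0.5}
u_rel = math.sqrt(sum((expo[k] * u[k] / nom[k]) ** 2 for k in nom))
u_gum = u_rel * q0

# Monte Carlo method (GUM Supplement 1 style) for comparison
vals = [mass_flow(**{k: random.gauss(nom[k], u[k]) for k in nom})
        for _ in range(20000)]
m = sum(vals) / len(vals)
u_mc = math.sqrt(sum((v - m) ** 2 for v in vals) / (len(vals) - 1))

print(f"q = {q0:.4f} kg/s, u_GUM = {u_gum:.4f}, u_MC = {u_mc:.4f}")
```

For small input uncertainties the model is nearly linear, so the two methods agree closely; strongly nonlinear regimes are where the Monte Carlo method earns its keep.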
Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga
2014-05-01
Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. There are several uncertainty analysis and/or prediction methods available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals occurring in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without any such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for 50% and 90% confidence levels) with those obtained from uniform interval and quantile regression methods. An important part of the basis by which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality - the weighted mean prediction interval (WMPI) - is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values in addition to discharge and effective rainfall are considered. The results show that UNEEC's notable characteristic, i.e. applying clustering to data of predictors upon which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
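The PICP and MPI validation measures mentioned above are straightforward to compute from observations and their prediction-interval bounds; the data below are made up:

```python
# Made-up observations with lower/upper prediction-interval bounds
obs   = [1.0, 2.5, 3.0, 4.2, 5.1, 2.2, 3.3, 4.8]
lower = [0.5, 2.0, 2.4, 3.5, 4.0, 1.5, 3.5, 4.0]
upper = [1.5, 3.1, 3.5, 4.9, 5.5, 2.6, 4.1, 5.5]

n = len(obs)

# PICP: fraction of observations falling inside their prediction interval;
# ideally close to the nominal confidence level (e.g. 0.90)
picp = sum(l <= y <= u for y, l, u in zip(obs, lower, upper)) / n

# MPI: mean width of the prediction intervals; narrower is better,
# provided coverage (PICP) is maintained
mpi = sum(u - l for l, u in zip(lower, upper)) / n

print(f"PICP = {picp:.3f}, MPI = {mpi:.3f}")
```

ARIL, NUE, and the proposed WMPI refine the same trade-off by weighting interval width against the model's quality.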
Energy Technology Data Exchange (ETDEWEB)
Reda, I.
2011-07-01
The uncertainty of measuring solar irradiance is fundamentally important for solar energy and atmospheric science applications. Without an uncertainty statement, the quality of a result, model, or testing method cannot be quantified, the chain of traceability is broken, and confidence cannot be maintained in the measurement. Measurement results are incomplete and meaningless without a statement of the estimated uncertainty with traceability to the International System of Units (SI) or to another internationally recognized standard. This report explains how to use the international Guide to the Expression of Uncertainty in Measurement (GUM) to calculate such uncertainty. The report also shows that without appropriate corrections to solar measuring instruments (solar radiometers), the uncertainty of measuring shortwave solar irradiance can exceed 4% using present state-of-the-art pyranometers and 2.7% using present state-of-the-art pyrheliometers. Finally, the report demonstrates that by applying the appropriate corrections, uncertainties may be reduced by at least 50%. The uncertainties, with or without the appropriate corrections, might not be compatible with the needs of solar energy and atmospheric science applications; yet, this report may shed some light on the sources of uncertainty and the means to reduce the overall uncertainty in measuring solar irradiance.
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
Varvia, Petri; Rautiainen, Miina; Seppänen, Aku
2017-04-01
Hyperspectral remote sensing data carry information on the leaf area index (LAI) of forests, and thus, in principle, LAI can be estimated from the data by inverting a forest reflectance model. However, LAI is usually not the only unknown in a reflectance model; in particular, the leaf spectral albedo and understory reflectance are also unknown. If the uncertainties of these parameters are not accounted for, the inversion of a forest reflectance model can lead to biased estimates of LAI. In this paper, we study the effects of reflectance model uncertainties on LAI estimates, and further, investigate whether the LAI estimates could recover from these uncertainties with the aid of Bayesian inference. In the proposed approach, the unknown leaf albedo and understory reflectance are estimated simultaneously with LAI from hyperspectral remote sensing data. The feasibility of the approach is tested with numerical simulation studies. The results show that in the presence of unknown parameters, the Bayesian LAI estimates, which account for the model uncertainties, outperform the conventional estimates that are based on biased model parameters. Moreover, the results demonstrate that the Bayesian inference can also provide feasible measures for the uncertainty of the estimated LAI.
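The idea of estimating nuisance canopy parameters jointly with LAI can be sketched with a toy reflectance model and a grid-based posterior; the model form and every value below are illustrative, not the paper's actual reflectance model:

```python
import math

# Toy forest reflectance model (illustrative): observed reflectance mixes
# understory and canopy contributions through a gap fraction
def reflectance(lai, understory, leaf_albedo=0.45):
    gap = math.exp(-0.5 * lai)  # canopy gap fraction
    return understory * gap + leaf_albedo * (1.0 - gap)

true_lai, true_under = 3.0, 0.12
y_obs = reflectance(true_lai, true_under)
sigma = 0.01  # measurement noise standard deviation

# Grid-based Bayesian inference: joint posterior over (LAI, understory)
# with flat priors, then marginalize the nuisance understory reflectance
lais   = [i * 0.05 for i in range(1, 121)]  # 0.05 .. 6.0
unders = [i * 0.01 for i in range(1, 31)]   # 0.01 .. 0.30

post = [[math.exp(-0.5 * ((y_obs - reflectance(l, u)) / sigma) ** 2)
         for u in unders] for l in lais]
marg = [sum(row) for row in post]  # marginal posterior over LAI

z = sum(marg)
lai_mean = sum(l * p for l, p in zip(lais, marg)) / z
lai_sd = math.sqrt(sum((l - lai_mean) ** 2 * p for l, p in zip(lais, marg)) / z)
print(f"posterior LAI = {lai_mean:.2f} +/- {lai_sd:.2f}")
```

With a single band and an unknown understory, the posterior is broad, which is exactly the point: the posterior spread is an honest uncertainty measure, where a fixed (possibly wrong) understory value would give a sharp but biased estimate.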
Directory of Open Access Journals (Sweden)
Eleanor S Devenish Nelson
Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
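The scaling behind "halving that uncertainty typically requires a quadrupling of sampling effort" follows from standard errors shrinking as 1/sqrt(n); a parametric resampling sketch with hypothetical vital rates (not the red fox data):

```python
import random
import statistics

random.seed(3)

# Hypothetical vital rates for a simple stage-structured model (illustrative):
# lambda = adult survival + fecundity * juvenile survival
def growth_rate(s_ad, fec, s_juv):
    return s_ad + fec * s_juv

def lambda_sd(n_sampled, trials=4000):
    """Parametric resampling: draw vital rates whose standard errors shrink
    with sampling effort as 1/sqrt(n), and return the spread of the
    resulting population growth rate."""
    lams = []
    for _ in range(trials):
        s_ad  = random.gauss(0.55, 0.10 / n_sampled ** 0.5)
        fec   = random.gauss(1.80, 0.60 / n_sampled ** 0.5)
        s_juv = random.gauss(0.30, 0.08 / n_sampled ** 0.5)
        lams.append(growth_rate(s_ad, fec, s_juv))
    return statistics.stdev(lams)

sd_n, sd_4n = lambda_sd(25), lambda_sd(100)
print(f"sd(lambda) with effort n: {sd_n:.4f}; with effort 4n: {sd_4n:.4f}")
```

Quadrupling the effort roughly halves the spread of the growth-rate estimate, and percentiles of the resampled lambdas give the confidence intervals the abstract describes.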
Directory of Open Access Journals (Sweden)
Dengsheng Lu
2012-01-01
Full Text Available The Landsat Thematic Mapper (TM) image has long been the dominant data source, and recently LiDAR has offered an important new structural data stream for forest biomass estimation. On the other hand, forest biomass uncertainty analysis research has only recently received sufficient attention due to the difficulty in collecting reference data. This paper provides a brief overview of current forest biomass estimation methods using both TM and LiDAR data. A case study is then presented that demonstrates the forest biomass estimation methods and uncertainty analysis. Results indicate that Landsat TM data can provide adequate biomass estimates for secondary succession but are not suitable for mature forest biomass estimates due to data saturation problems. LiDAR can overcome TM's shortcoming, providing better biomass estimation performance, but has not been extensively applied in practice due to data availability constraints. The uncertainty analysis indicates that various sources affect the performance of forest biomass/carbon estimation. That said, the clearly dominant sources of uncertainty are the variation in input sample plot data and the data saturation problem related to optical sensors. A possible solution for increasing the confidence in forest biomass estimates is to integrate the strengths of multisensor data.
Entropy Evolution and Uncertainty Estimation with Dynamical Systems
Directory of Open Access Journals (Sweden)
X. San Liang
2014-06-01
Full Text Available This paper presents a comprehensive introduction and systematic derivation of the evolutionary equations for absolute entropy H and relative entropy D, some of which exist sporadically in the literature in different forms under different subjects, within the framework of dynamical systems. In general, both H and D are dissipated, and the dissipation bears a form reminiscent of the Fisher information; in the absence of stochasticity, dH/dt is connected to the rate of phase space expansion, and D stays invariant, i.e., the separation of two probability density functions is always conserved. These formulas are validated with linear systems, and put to application with the Lorenz system and a large-dimensional stochastic quasi-geostrophic flow problem. In the Lorenz case, H falls at a constant rate with time, implying that H will eventually become negative, a situation beyond the capability of the commonly used computational technique like coarse-graining and bin counting. For the stochastic flow problem, it is first reduced to a computationally tractable low-dimensional system, using a reduced model approach, and then handled through ensemble prediction. Both the Lorenz system and the stochastic flow system are examples of self-organization in the light of uncertainty reduction. The latter particularly shows that, sometimes stochasticity may actually enhance the self-organization process.
Debate on Uncertainty in Estimating Bathing Water Quality
DEFF Research Database (Denmark)
Larsen, Torben
1992-01-01
Estimating the bathing water quality along the shore near a planned sewage discharge requires data on the source strength of bacteria, the die-off of bacteria and the actual dilution of the sewage. Together, these three factors give the actual concentration of bacteria at the spots of interest...
BOOK REVIEW: Evaluating the Measurement Uncertainty: Fundamentals and practical guidance
Lira, Ignacio
2003-08-01
Evaluating the Measurement Uncertainty is a book written for anyone who makes and reports measurements. It attempts to fill the gaps in the ISO Guide to the Expression of Uncertainty in Measurement, or the GUM, and does a pretty thorough job. The GUM was written with the intent of being applicable by all metrologists, from the shop floor to the National Metrology Institute laboratory; however, the GUM has often been criticized for its lack of user-friendliness because it is primarily filled with statements, but with little explanation. Evaluating the Measurement Uncertainty gives lots of explanations. It is well written and makes use of many good figures and numerical examples. Also important, this book is written by a metrologist from a National Metrology Institute, and therefore up-to-date ISO rules, style conventions and definitions are correctly used and supported throughout. The author sticks very closely to the GUM in topical theme and with frequent reference, so readers who have not read GUM cover-to-cover may feel as if they are missing something. The first chapter consists of a reprinted lecture by T J Quinn, Director of the Bureau International des Poids et Mesures (BIPM), on the role of metrology in today's world. It is an interesting and informative essay that clearly outlines the importance of metrology in our modern society, and why accurate measurement capability, and by definition uncertainty evaluation, should be so important. Particularly interesting is the section on the need for accuracy rather than simply reproducibility. Evaluating the Measurement Uncertainty then begins at the beginning, with basic concepts and definitions. The third chapter carefully introduces the concept of standard uncertainty and includes many derivations and discussion of probability density functions. The author also touches on Monte Carlo methods, calibration correction quantities, acceptance intervals or guardbanding, and many other interesting cases. The book goes
Energy Technology Data Exchange (ETDEWEB)
Miller, C.; Little, C.A.
1982-08-01
The purpose is to summarize estimates, based on currently available data, of the uncertainty associated with radiological assessment models. The models being examined herein are those recommended previously for use in breeder reactor assessments. Uncertainty estimates are presented for models of atmospheric and hydrologic transport, terrestrial and aquatic food-chain bioaccumulation, and internal and external dosimetry. Both long-term and short-term release conditions are discussed. The uncertainty estimates presented in this report indicate that, for many sites, generic models and representative parameter values may be used to calculate doses from annual average radionuclide releases when these calculated doses are on the order of one-tenth or less of a relevant dose limit. For short-term, accidental releases, especially those from breeder reactors located in sites dominated by complex terrain and/or coastal meteorology, the uncertainty in the dose calculations may be much larger than an order of magnitude. As a result, it may be necessary to incorporate site-specific information into the dose calculation under these circumstances to reduce this uncertainty. However, even using site-specific information, natural variability and the uncertainties in the dose conversion factor will likely result in an overall uncertainty of greater than an order of magnitude for predictions of dose or concentration in environmental media following short-term releases.
Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE)
2011-12-01
The method described in this report synthesizes scenario building, Bayesian Belief Network (BBN) modeling, and Monte Carlo simulation into an...
Indirect methods of tree biomass estimation and their uncertainties ...
African Journals Online (AJOL)
Depending on data availability (dbh only or both dbh and total tree height) either of the models may be applied to generate satisfactory estimates of tree volume needed for planning and decision-making in management of mangrove forests. The study found an overall mean FF value of 0.65 ± 0.03 (SE), 0.56 ± 0.03 (SE) and ...
Directory of Open Access Journals (Sweden)
J. Florian Wellmann
2013-04-01
Full Text Available The quantification and analysis of uncertainties is important in all cases where maps and models of uncertain properties are the basis for further decisions. Once these uncertainties are identified, the logical next step is to determine how they can be reduced. Information theory provides a framework for the analysis of spatial uncertainties when different subregions are considered as random variables. In the work presented here, joint entropy, conditional entropy, and mutual information are applied for a detailed analysis of spatial uncertainty correlations. The aim is to determine (i which areas in a spatial analysis share information, and (ii where, and by how much, additional information would reduce uncertainties. As an illustration, a typical geological example is evaluated: the case of a subsurface layer with uncertain depth, shape and thickness. Mutual information and multivariate conditional entropies are determined based on multiple simulated model realisations. Even for this simple case, the measures not only provide a clear picture of uncertainties and their correlations but also give detailed insights into the potential reduction of uncertainties at each position, given additional information at a different location. The methods are directly applicable to other types of spatial uncertainty evaluations, especially where multiple realisations of a model simulation are analysed. In summary, the application of information theoretic measures opens up the path to a better understanding of spatial uncertainties, and their relationship to information and prior knowledge, for cases where uncertain property distributions are spatially analysed and visualised in maps and models.
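The entropy measures applied above are easy to compute from discretized model realisations; the realisation data here are made up:

```python
import math
from collections import Counter

# Discrete entropy measures over multiple model realisations (made-up data):
# each realisation records the binned layer depth at two locations, A and B
realisations = [(1, 1), (1, 1), (1, 2), (2, 2), (2, 2), (2, 3),
                (3, 3), (3, 3), (1, 1), (2, 2), (3, 3), (2, 3)]

def entropy(counts, n):
    """Shannon entropy (bits) of an empirical distribution."""
    return -sum(c / n * math.log2(c / n) for c in counts.values())

n = len(realisations)
H_A  = entropy(Counter(a for a, _ in realisations), n)
H_B  = entropy(Counter(b for _, b in realisations), n)
H_AB = entropy(Counter(realisations), n)  # joint entropy

H_B_given_A = H_AB - H_A          # conditional entropy: uncertainty in B
                                  # remaining once A is known
I_AB = H_A + H_B - H_AB           # mutual information: shared information

print(f"H(A)={H_A:.3f}  H(B)={H_B:.3f}  H(A,B)={H_AB:.3f}")
print(f"H(B|A)={H_B_given_A:.3f}  I(A;B)={I_AB:.3f}")
```

A large I(A;B) means a measurement at A would substantially reduce the uncertainty at B, which is precisely the question of where additional information pays off.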
Han, Paul K. J.; Klein, William M. P.; Lehman, Tom; Killam, Bill; Massett, Holly; Freedman, Andrew N.
2011-01-01
Objective To examine the effects of communicating uncertainty regarding individualized colorectal cancer risk estimates, and to identify factors that influence these effects. Methods Two web-based experiments were conducted, in which adults aged 40 years and older were provided with hypothetical individualized colorectal cancer risk estimates differing in the extent and representation of expressed uncertainty. The uncertainty consisted of imprecision (otherwise known as “ambiguity”) of the risk estimates, and was communicated using different representations of confidence intervals. Experiment 1 (n=240) tested the effects of ambiguity (confidence interval vs. point estimate) and representational format (textual vs. visual) on cancer risk perceptions and worry. Potential effect modifiers including personality type (optimism), numeracy, and the information’s perceived credibility were examined, along with the influence of communicating uncertainty on responses to comparative risk information. Experiment 2 (n=135) tested enhanced representations of ambiguity that incorporated supplemental textual and visual depictions. Results Communicating uncertainty led to heightened cancer-related worry in participants, exemplifying the phenomenon of “ambiguity aversion.” This effect was moderated by representational format and dispositional optimism; textual (vs. visual) format and low (vs. high) optimism were associated with greater ambiguity aversion. However, when enhanced representations were used to communicate uncertainty, textual and visual formats showed similar effects. Both the communication of uncertainty and use of the visual format diminished the influence of comparative risk information on risk perceptions. Conclusions The communication of uncertainty regarding cancer risk estimates has complex effects, which include heightening cancer-related worry—consistent with ambiguity aversion—and diminishing the influence of comparative risk information on risk
Statistical characterization of roughness uncertainty and impact on wind resource estimation
DEFF Research Database (Denmark)
Kelly, Mark C.; Ejsing Jørgensen, Hans
2017-01-01
In this work we relate uncertainty in background roughness length (z0) to uncertainty in wind speeds, where the latter are predicted at a wind farm location based on wind statistics observed at a different site. Sensitivity of predicted winds to roughness is derived analytically for the industry-standard European Wind Atlas method, which is based on the geostrophic drag law. We statistically consider roughness and its corresponding uncertainty, in terms of both z0 derived from measured wind speeds as well as that chosen in practice by wind engineers. We show the combined effect of roughness uncertainty arising from differing wind-observation and turbine-prediction sites; this is done for the case of roughness bias as well as for the general case. For estimation of uncertainty in annual energy production (AEP), we also develop a generalized analytical turbine power curve, from which we derive a relation…
DEFF Research Database (Denmark)
Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan
Process safety studies and assessments rely on accurate property data. Flammability data like the lower and upper flammability limit (LFL and UFL) play an important role in quantifying the risk of fire and explosion. If experimental values are not available for the safety analysis due to cost or time constraints, property prediction models like group contribution (GC) models can estimate flammability data. The estimation needs to be accurate, reliable and as time-efficient as possible. However, GC property prediction methods frequently lack rigorous uncertainty analysis. Hence, … to the parameter estimation, an uncertainty analysis of the estimated data and a comparison to other methods is performed. A thorough uncertainty analysis provides information about the prediction error, which is important for the use of the data in process safety studies and assessments. The method considers…
Lidar-derived estimate and uncertainty of carbon sink in successional phases of woody encroachment
Woody encroachment is a globally occurring phenomenon that is thought to contribute significantly to the global carbon (C) sink. The C contribution needs to be estimated at regional and local scales to address large uncertainties present in the global- and continental-scale estimates and guide regio...
A novel method to estimate model uncertainty using machine learning techniques
Solomatine, D.P.; Lal Shrestha, D.
2009-01-01
A novel method is presented for model uncertainty estimation using machine learning techniques and its application in rainfall runoff modeling. In this method, first, the probability distribution of the model error is estimated separately for different hydrological situations and second, the
Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.
2013-01-01
Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Schoups, Gerrit; Vrugt, Jasper A.
2010-05-01
Estimation of parameter and predictive uncertainty of hydrologic models usually relies on the assumption of additive residual errors that are independent and identically distributed according to a normal distribution with a mean of zero and a constant variance. Here, we investigate to what extent estimates of parameter and predictive uncertainty are affected when these assumptions are relaxed. Parameter and predictive uncertainty are estimated by Monte Carlo Markov Chain sampling from a generalized likelihood function that accounts for correlation, heteroscedasticity, and non-normality of residual errors. Application to rainfall-runoff modeling using daily data from a humid basin reveals that: (i) residual errors are much better described by a heteroscedastic, first-order auto-correlated error model with a Laplacian density characterized by heavier tails than a Gaussian density, and (ii) proper representation of the statistical distribution of residual errors yields tighter predictive uncertainty bands and more physically realistic parameter estimates that are less sensitive to the particular time period used for inference. The latter is especially useful for regionalization and extrapolation of parameter values to ungauged basins. Application to daily rainfall-runoff data from a semi-arid basin shows that allowing skew in the error distribution yields improved estimates of predictive uncertainty when flows are close to zero.
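The generalized likelihood described above can be sketched compactly: decorrelate the residuals with a first-order autoregressive term, let the error scale grow with simulated flow (heteroscedasticity), and score the innovations under a heavy-tailed Laplace density. The parameter names below are illustrative, not the paper's notation:

```python
import numpy as np

def generalized_log_likelihood(obs, sim, sigma0, sigma1, phi):
    """Log-likelihood with AR(1)-correlated, heteroscedastic,
    Laplace-distributed residual errors (illustrative sketch)."""
    resid = np.asarray(obs) - np.asarray(sim)
    # Decorrelate: innovations eta_t = e_t - phi * e_{t-1}
    eta = resid[1:] - phi * resid[:-1]
    # Heteroscedastic scale: error std grows linearly with simulated flow
    sigma = sigma0 + sigma1 * np.asarray(sim)[1:]
    b = sigma / np.sqrt(2.0)  # Laplace scale parameter for std = sigma
    return float(np.sum(-np.log(2.0 * b) - np.abs(eta) / b))
```

In an MCMC sampler this function would replace the usual Gaussian sum-of-squares criterion; setting sigma1 = 0 and swapping the Laplace terms for Gaussian ones recovers the standard assumptions the paper relaxes.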
Spectrum of shear modes in the neutron-star crust: Estimating the nuclear-physics uncertainties
Tews, Ingo
2016-01-01
I construct a model of the inner crust of neutron stars using interactions from chiral effective field theory (EFT) in order to calculate its equation of state (EOS), shear properties, and the spectrum of crustal shear modes. I systematically study uncertainties associated with the nuclear physics input, the crust composition, and neutron entrainment, and estimate their impact on crustal shear properties and the shear-mode spectrum. I find that the uncertainties originate mainly in two source...
Uncertainty Estimation due to Geometrical Imperfection and Wringing in Calibration of End Standards
Salah H. R. Ali; Ihab H. Naeim
2013-01-01
Uncertainty in gauge block measurement depends on three major areas: thermal effects, the dimension metrology system (including measurement strategy), and end standard surface perfection grades. In this paper, we focus precisely on estimating the uncertainty due to the geometrical imperfection of measuring surfaces and the wringing gap in calibration of end standards of grade 0. An optomechanical system equipped with a Zygo measurement interferometer (ZMI-1000A) and an AFM technique have been employed. A novel…
Lista, L
2004-01-01
A procedure to include the uncertainty on the background estimate for upper limit calculations using Poissonian sampling is presented for the case where a Gaussian assumption on the uncertainty can be made. Under that hypothesis an analytic expression of the likelihood is derived which can be written in terms of polynomials defined by recursion. This expression may lead to a significant speed up of computing applications that extract the upper limits using Toy Monte Carlo.
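The sampling approach that the analytic expression is meant to speed up can be illustrated with a toy Monte Carlo: smear the background with a Gaussian, draw Poisson counts, and scan the signal strength until the observed count becomes sufficiently improbable. Grid range, step and toy counts here are arbitrary choices, not values from the paper:

```python
import numpy as np

def upper_limit_toy_mc(n_obs, b0, sigma_b, cl=0.95, n_toys=20000, seed=1):
    """Scan signal strength s; the upper limit at confidence level cl is
    the smallest s for which observing <= n_obs counts is too improbable."""
    rng = np.random.default_rng(seed)
    for s in np.arange(0.0, 20.0, 0.1):
        # Gaussian-smeared background, truncated at zero
        b = np.clip(rng.normal(b0, sigma_b, n_toys), 0.0, None)
        n = rng.poisson(s + b)
        if np.mean(n <= n_obs) < 1.0 - cl:
            return float(s)
    return 20.0
```

With zero observed events and negligible background this reproduces the familiar Poisson 95% CL limit of about 3 events; increasing sigma_b loosens the limit.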
Ödén, Jakob; Eriksson, Kjell; Toma-Dasu, Iuliana
2017-06-01
A constant relative biological effectiveness (RBE) of 1.1 is typically assumed in proton therapy. This study presents a method of incorporating the variable RBE and its uncertainties into the proton plan robustness evaluation. The robustness evaluation was split into two parts. In part one, the worst-case physical dose was estimated using setup and range errors, including the fractionation dependence. The results were fed into part two, in which the worst-case RBE-weighted doses were estimated using a Monte Carlo method for sampling the input parameters of the chosen RBE model. The method was applied to three prostate, breast and head and neck (H&N) plans for several fractionation schedules using two RBE models. The uncertainties in the model parameters, linear energy transfer and α/β were included. The resulting DVH error bands were compared with the use of a constant RBE without uncertainties. All plans were evaluated as robust using the constant RBE. Applying the proposed methodology using the variable RBE models broadens the DVH error bands for all structures studied. The uncertainty in α/β was the dominant factor. The variable RBE also shifted the nominal DVHs towards higher doses for most OARs, whereas the direction of this shift for the clinical target volumes (CTVs) depended on the treatment site, RBE model and fractionation schedule. The average RBE within the CTV, using one of the RBE models and 2 Gy(RBE) per fraction, varied between 1.11-1.26, 1.06-1.16 and 1.14-1.25 for the breast, H&N and prostate patients, respectively. A method of incorporating RBE uncertainties into the robustness evaluation has been proposed. By disregarding the variable RBE and its uncertainties, the variation in the RBE-weighted CTV and OAR doses may be underestimated. This could be an essential factor to take into account, especially in comparisons between proton and photon plans based on normal tissue complication probabilities.
Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.
2017-09-01
The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar, therefore the following results applied to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant factor contributing to uncertainty in the DECT approach was the imaging uncertainties, followed by the DECT modeling uncertainties. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
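The final weighting step, combining per-tissue SPR uncertainties into one site-level number, can be sketched as a weighted quadrature sum. The quadrature weighting and the tissue proportions below are illustrative assumptions, not necessarily the exact scheme used in the study:

```python
import numpy as np

def composite_uncertainty(tissue_sigmas, tissue_weights):
    """Combine per-tissue SPR uncertainties (1-sigma, in %) into a single
    site-level estimate, weighted by each tissue group's proportion.
    Quadrature weighting is an illustrative assumption."""
    w = np.asarray(tissue_weights, dtype=float)
    w = w / w.sum()
    s = np.asarray(tissue_sigmas, dtype=float)
    return float(np.sqrt(np.sum(w * s ** 2)))

# Per-tissue values quoted above: lung 3.8%, soft tissue 1.2%, bone 2.0%;
# the weights are made-up proportions for a hypothetical beam path.
site_sigma = composite_uncertainty([3.8, 1.2, 2.0], [0.2, 0.6, 0.2])
```

The composite always lands between the smallest and largest per-tissue value, pulled toward the tissues that dominate the beam path.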
Energy Technology Data Exchange (ETDEWEB)
Park, Jae Phil; Bahn, Chi Bum [Pusan National University, Busan (Korea, Republic of)
2016-10-15
It is well known that stress corrosion cracking (SCC) is one of the main material-related issues in operating nuclear reactors. To predict the initiation time of SCC, the Weibull distribution is widely used as a statistical model representing SCC reliability. The typical experimental procedure of an SCC initiation test involves an interval-censored cracking test with several specimens. From the result of the test, the experimenters can estimate the parameters of Weibull distribution by maximum likelihood estimation (MLE) or median rank regression (MRR). However, in order to obtain the sufficient accuracy of the Weibull estimators, it is hard for experimenters to determine the proper number of test specimens and censoring intervals. Therefore, in this work, the effects of some experimental conditions on estimation uncertainties of Weibull distribution were studied through the Monte Carlo simulation. The main goal of this work is to suggest quantitative estimation uncertainties for experimenters who want to develop probabilistic SCC initiation model by a cracking test. Widely used MRR and MLE are considered as estimation methods of Weibull distribution. By using a Monte Carlo simulation, uncertainties of MRR and ML estimators were quantified in various experimental cases. And we compared the uncertainties between the TDCI and TICI cases.
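The Monte Carlo study described above can be sketched as: simulate interval-censored Weibull failure times for a given test design, re-estimate the parameters, and measure the spread of the estimator across repetitions. This minimal version uses a grid-search MLE rather than MRR, and all design values (grids, intervals, sample sizes) are illustrative:

```python
import numpy as np

def weibull_cdf(t, shape, scale):
    return 1.0 - np.exp(-(np.asarray(t) / scale) ** shape)

def interval_censored_mle(edges, counts, shape_grid, scale_grid):
    """Grid-search MLE of Weibull (shape, scale) from interval-censored
    data, where counts[i] failures fell in (edges[i], edges[i+1]]."""
    best, best_ll = (shape_grid[0], scale_grid[0]), -np.inf
    for k in shape_grid:
        for lam in scale_grid:
            p = np.clip(np.diff(weibull_cdf(edges, k, lam)), 1e-12, None)
            ll = np.sum(counts * np.log(p))
            if ll > best_ll:
                best, best_ll = (k, lam), ll
    return best

def shape_mle_spread(k_true, lam_true, n_spec, edges, n_rep=100, seed=2):
    """Monte Carlo standard deviation of the Weibull shape estimator
    for a given number of specimens and inspection intervals."""
    rng = np.random.default_rng(seed)
    shape_grid = np.linspace(0.5, 3.0, 26)
    scale_grid = np.linspace(0.5, 2.0, 16)
    estimates = []
    for _ in range(n_rep):
        # Inverse-CDF sampling of Weibull failure times
        t = lam_true * (-np.log(1.0 - rng.random(n_spec))) ** (1.0 / k_true)
        counts, _ = np.histogram(t, bins=edges)
        estimates.append(interval_censored_mle(edges, counts,
                                               shape_grid, scale_grid)[0])
    return float(np.std(estimates))
```

Running this for increasing specimen counts shows the estimator spread shrinking, which is the quantitative guidance the abstract aims to provide to experimenters.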
Arnaud, Patrick; Cantet, Philippe; Odry, Jean
2017-11-01
Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with
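The bootstrap step used for the hydrological parameter can be sketched generically: resample the calibration data with replacement, recalibrate, and read confidence bounds off the quantiles of the recalibrated parameter. Here `calibrate` is a stand-in for the SHYREG calibration, not its actual implementation:

```python
import numpy as np

def bootstrap_param_ci(calibrate, data, n_boot=300, alpha=0.05, seed=6):
    """Bootstrap confidence interval for a calibrated parameter: resample
    the calibration data with replacement, recalibrate, take quantiles."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = data.size
    params = [calibrate(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    lo, hi = np.quantile(params, [alpha / 2.0, 1.0 - alpha / 2.0])
    return float(lo), float(hi)
```

In a usage example, `np.mean` can play the role of the calibration objective purely as a placeholder; the interval narrows as the record length grows, matching the behaviour reported above.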
Uncertainty analysis of an optical method for pressure estimation in fluid flows
Gomit, Guillaume; Acher, Gwenael; Chatellier, Ludovic; David, Laurent
2018-02-01
The analysis of the error propagation from the velocity field to the pressure field using the pressure estimation method proposed by Jeon et al (2015, 11th International Symposium on Particle Image Velocimetry, PIV15) is carried out. The accuracy of the method is assessed based on numerical data. The flow around a rigid profile (NACA0015) with a free tip is considered. From the numerical simulation data, tomographic-PIV (TPIV)-like data are generated. Two types of error are used to distort the data: Gaussian noise and a pixel-locking effect are modelled. Propagation of both types of error during the pressure estimation process and the effect of the TPIV resolution are evaluated. Results highlight the importance of the resolution to accurately estimate the pressure in the presence of small structures but also to limit the propagation of error from the velocity to the pressure. The study of the sensitivity of the method for the two models of errors, Gaussian or pixel-locking, shows different trends. This also reveals the importance of the model of errors for the analysis of the uncertainties for PIV-based pressure.
Estimation and uncertainty analysis of dose response in an inter-laboratory experiment
Toman, Blaza; Rösslein, Matthias; Elliott, John T.; Petersen, Elijah J.
2016-02-01
An inter-laboratory experiment for the evaluation of toxic effects of NH2-polystyrene nanoparticles on living human cancer cells was performed with five participating laboratories. Previously published results from nanocytotoxicity assays are often contradictory, mostly due to challenges related to producing a reliable cytotoxicity assay protocol for use with nanomaterials. Specific challenges include reproducibly preparing nanoparticle dispersions, biological variability from testing living cell lines, and the potential for nano-related interference effects. In this experiment, such challenges were addressed by developing a detailed experimental protocol and using a specially designed 96-well plate layout which incorporated a range of control measurements to assess multiple factors such as nanomaterial interference, pipetting accuracy, cell seeding density, and instrument performance. Detailed data analysis of these control measurements showed that good control of the experiments was attained by all participants in most cases. The main measurement objective of the study was the estimation of a dose response relationship between concentration of the nanoparticles and metabolic activity of the living cells, under several experimental conditions. The dose curve estimation was achieved by embedding a three-parameter logistic curve in a three-level Bayesian hierarchical model, accounting for uncertainty due to all known experimental conditions as well as between-laboratory variability in a top-down manner. Computation was performed using Markov Chain Monte Carlo methods. The fit of the model was evaluated using Bayesian posterior predictive probabilities and found to be satisfactory.
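A minimal, non-Bayesian stand-in for the dose-response step: fit a three-parameter logistic curve to synthetic viability data. The hierarchical model in the study additionally accounts for between-laboratory variability; here only the curve-fitting idea is shown, with all data simulated and all grids chosen for illustration:

```python
import numpy as np

def logistic3(conc, top, ec50, slope):
    """Three-parameter logistic dose-response curve."""
    return top / (1.0 + (conc / ec50) ** slope)

def fit_logistic3(conc, obs, ec50_grid, slope_grid):
    """Coarse grid fit: for each (ec50, slope) pair, the optimal 'top'
    follows in closed form from linear least squares."""
    best, best_sse = None, np.inf
    for ec50 in ec50_grid:
        for slope in slope_grid:
            shape = 1.0 / (1.0 + (conc / ec50) ** slope)
            top = float(np.dot(obs, shape) / np.dot(shape, shape))
            sse = float(np.sum((obs - top * shape) ** 2))
            if sse < best_sse:
                best, best_sse = (top, ec50, slope), sse
    return best

# Synthetic viability data for one hypothetical laboratory
rng = np.random.default_rng(3)
conc = np.logspace(-2, 2, 9)                       # dose levels
obs = logistic3(conc, 100.0, 1.0, 1.2) + rng.normal(0.0, 3.0, conc.size)
top, ec50, slope = fit_logistic3(conc, obs,
                                 np.logspace(-1, 1, 41),
                                 np.linspace(0.5, 2.5, 21))
```

Wrapping this fit in a resampling loop, or placing priors on the three parameters, is what turns the point estimate into the uncertainty statement the study is after.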
Stream flow - its estimation, uncertainty and interaction with groundwater and floodplains
DEFF Research Database (Denmark)
Poulsen, Jane Bang
This work examines stream flow – its estimation, uncertainty and interaction with groundwater and floodplains. Impacts of temporally varying hydraulic flow conditions on uncertainties in stream flow estimation have been investigated in the Holtum and Skjern streams, Denmark. Continuous monitoring of stream flow velocities was used to detect hydraulic changes in stream roughness and geometry. A stage-velocity-discharge (QHV) relation has been developed, which is a new approach for hydrograph estimation that allows for continuous adjustment of the hydrograph according to roughness changes in the stream. Uncertainties … floodplain hydraulics and sedimentation patterns have been investigated along a restored channel section of Odense stream, Denmark. Collected samples of deposited sediment, organic matter and phosphorus on the floodplain were compared with results from a 2D dynamic flow model. Three stage dependent flow …
Helder, Dennis; Thome, Kurtis John; Aaron, Dave; Leigh, Larry; Czapla-Myers, Jeff; Leisso, Nathan; Biggar, Stuart; Anderson, Nik
2012-01-01
A significant problem facing the optical satellite calibration community is limited knowledge of the uncertainties associated with fundamental measurements, such as surface reflectance, used to derive satellite radiometric calibration estimates. In addition, it is difficult to compare the capabilities of calibration teams around the globe, which leads to differences in the estimated calibration of optical satellite sensors. This paper reports on two recent field campaigns that were designed to isolate common uncertainties within and across calibration groups, particularly with respect to ground-based surface reflectance measurements. Initial results from these efforts suggest the uncertainties can be as low as 1.5% to 2.5%. In addition, methods for improving the cross-comparison of calibration teams are suggested that can potentially reduce the differences in the calibration estimates of optical satellite sensors.
Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.
2014-01-01
Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.
Influence of parameter estimation uncertainty in Kriging: Part 1 - Theoretical Development
Directory of Open Access Journals (Sweden)
E. Todini
2001-01-01
This paper deals with a theoretical approach to assessing the effects of parameter estimation uncertainty both on Kriging estimates and on their estimated error variance. Although a comprehensive treatment of parameter estimation uncertainty is covered by full Bayesian Kriging at the cost of extensive numerical integration, the proposed approach has a wide field of application, given its relative simplicity. The approach is based upon a truncated Taylor expansion approximation and, within the limits of the proposed approximation, the conventional Kriging estimates are shown to be biased for all variograms, the bias depending upon the second order derivatives with respect to the parameters times the variance-covariance matrix of the parameter estimates. A new Maximum Likelihood (ML) estimator for semi-variogram parameters in ordinary Kriging, based upon the assumption of a multi-normal distribution of the Kriging cross-validation errors, is introduced as a means of estimating the parameter variance-covariance matrix.
Keywords: Kriging, maximum likelihood, parameter estimation, uncertainty
Uncertainty evaluation method for axi-symmetric measurement machines
Directory of Open Access Journals (Sweden)
Muelaner Jody Emlyn
2016-01-01
This paper describes a method of uncertainty evaluation for axi-symmetric measurement machines. Specialized measuring machines for the inspection of axisymmetric components enable the measurement of properties such as roundness (radial runout), axial runout and coning. These machines typically consist of a rotary table and a number of contact measurement probes located on slideways. Sources of uncertainty include the probe calibration process, probe repeatability, probe alignment, geometric errors in the rotary table, the dimensional stability of the structure holding the probes and form errors in the reference hemisphere which is used to calibrate the system. The generic method is described and an evaluation of an industrial machine is presented as a worked example. Expanded uncertainties, at 95% confidence, were then calculated for the measurement of: radial runout (1.2 μm with a plunger probe or 1.7 μm with a lever probe); axial runout (1.2 μm with a plunger probe or 1.5 μm with a lever probe); and coning/swash (0.44 arcseconds with a plunger probe or 0.60 arcseconds with a lever probe).
Lahiri, B. B.; Ranoo, Surojit; Philip, John
2017-11-01
Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology where the alternating magnetic field induced heating of magnetic fluid is utilized for ablating the cancerous cells or making them more susceptible to the conventional treatments. The heating efficiency in MFH is quantified in terms of specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of experimental studies, SAR is evaluated from the temperature rise curves, obtained under non-adiabatic experimental conditions, which is prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample and spatial variation in the temperature profile within the sample. Using first order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with those obtained from the computationally intense slope corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area to volume ratio and coil temperature. It is also observed that peripheral heating decreases in the presence of a non-magnetic insulating shielding. The delayed heating is found to contribute up to ~25% uncertainty in SAR values. As the SAR values are very sensitive to the initial slope determination method, explicit mention of the range of linear regression analysis is appropriate to reproduce the results. The effect of sample volume to area ratio on linear heat loss rate is systematically studied and the …
Uncertainty representation, quantification and evaluation for data and information fusion
CSIR Research Space (South Africa)
De Villiers, Johan P
2015-07-01
… to be modelled), datum uncertainty (where uncertainty is introduced by representing real-world information by a mathematical quantity), data generation abstraction (where uncertainty is introduced through a mathematical representation of the mapping between a…
Evaluating the uncertainty in measurement of occupational exposure with personal dosemeters.
van Dijk, J W E
2007-01-01
In the 1990 Recommendations of the ICRP it is stated that an uncertainty in a dose measured with a personal dosemeter under workplace conditions of a factor of 1.5 in either direction 'will not be unusual'. In many documents, such as the EU Technical Recommendations, the IAEA Safety Guides and papers in scientific journals, this statement is understood to be a basis for developing type-test criteria and criteria for the approval of dosimetric systems. The methods for evaluating the standard uncertainty as proposed in the above mentioned documents and in national and international standards use an approach that is based on the Law of Propagation of Uncertainties (LPU). This approach needs a number of assumptions, the validity of which cannot easily be verified for personal dosemeters. The current paper presents a numerical method based on Monte Carlo simulation for the calculation phase of the evaluation of uncertainties. The results of applying the method on the type-test data of the NRG TL-dosemeter indicate that the combined standard uncertainty estimated using the LPU approach might well not be realistic. The numerical method is simple and can be precisely formulated, making it suitable for being part of approval or accreditation procedures.
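The contrast between the LPU approach and the Monte Carlo alternative can be shown on a toy measurement model, a reading multiplied by a calibration factor. Both the model and the numbers are hypothetical; for a linear product model the two methods should nearly agree, which is exactly what makes discrepancies on real dosemeter data informative:

```python
import numpy as np

def lpu_combined_uncertainty(gradients, input_sds):
    """Law of Propagation of Uncertainties for independent inputs:
    u_c = sqrt(sum((df/dx_i)^2 * u(x_i)^2))."""
    g = np.asarray(gradients, dtype=float)
    u = np.asarray(input_sds, dtype=float)
    return float(np.sqrt(np.sum((g * u) ** 2)))

def mc_combined_uncertainty(model, means, input_sds, n=100000, seed=4):
    """Monte Carlo alternative: sample the inputs, propagate them through
    the model, and take the standard deviation of the output."""
    rng = np.random.default_rng(seed)
    samples = [rng.normal(m, s, n) for m, s in zip(means, input_sds)]
    return float(np.std(model(*samples)))

# Toy dosemeter model (hypothetical): dose = reading * calibration factor
model = lambda reading, cal: reading * cal
u_lpu = lpu_combined_uncertainty([1.0, 1.0], [0.05, 0.10])  # gradients at means
u_mc = mc_combined_uncertainty(model, [1.0, 1.0], [0.05, 0.10])
```

For strongly non-linear or non-Gaussian response models the two estimates diverge, and the Monte Carlo value is the one to trust.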
A New Uncertainty Evaluation Method and Its Application in Evaluating Software Quality
Directory of Open Access Journals (Sweden)
Jiqiang Chen
2014-01-01
Uncertainty theory is a branch of axiomatic mathematics dealing with experts’ belief degree. Considering the uncertainty with experts’ belief degree in the evaluation system and the different roles which different indices play in evaluating the overall goal with a hierarchical structure, a new comprehensive evaluation method is constructed based on uncertainty theory. First, index scores and weights of indices are described by uncertain variables and evaluation grades are described by uncertain sets. Second, weights of indices with respect to the overall goal are introduced. Third, a new uncertainty comprehensive evaluation method is constructed and proved to be a generalization of the weighted average method. Finally, an application is developed in evaluating software quality, which shows the effectiveness of the new method.
The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.
Energy Technology Data Exchange (ETDEWEB)
Poppeliers, Christian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-12-01
This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
Uncertainty of feedback and state estimation determines the speed of motor adaptation
Directory of Open Access Journals (Sweden)
Kunlin Wei
2010-05-01
Full Text Available Humans can adapt their motor behaviors to deal with ongoing changes. To achieve this, the nervous system needs to estimate central variables for our movement based on past knowledge and new feedback, both of which are uncertain. In the Bayesian framework, rates of adaptation characterize how noisy feedback is in comparison to the uncertainty of the state estimate. The predictions of Bayesian models are intuitive: the nervous system should adapt more slowly when sensory feedback is noisier and faster when its state estimate is more uncertain. Here we want to quantitatively understand how uncertainty in these two factors affects motor adaptation. In a hand-reaching experiment we measured trial-by-trial adaptation to a randomly changing visual perturbation to characterize the way the nervous system handles uncertainty in state estimation and feedback. We found both qualitative predictions of Bayesian models confirmed. Our study provides evidence that the nervous system represents and uses uncertainty in the state estimate and feedback during motor adaptation.
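In the simplest scalar Kalman setting, the Bayesian prediction reduces to a learning rate set by the ratio of the two variances. This is a generic illustration of the principle, not the paper's fitted model; the variance values are arbitrary:

```python
def adaptation_rate(state_var, feedback_var):
    """Scalar Kalman gain: the fraction of the observed error that is
    corrected on the next trial."""
    return state_var / (state_var + feedback_var)

# noisier feedback -> slower adaptation
slow = adaptation_rate(state_var=1.0, feedback_var=4.0)
# more uncertain state estimate -> faster adaptation
fast = adaptation_rate(state_var=4.0, feedback_var=1.0)
```

The gain moves toward 1 as the state estimate becomes uncertain relative to the feedback, and toward 0 in the opposite case, matching the qualitative predictions in the abstract.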
Küng, Alain; Meli, Felix; Nicolet, Anaïs; Thalmann, Rudolf
2014-09-01
Tactile ultra-precise coordinate measuring machines (CMMs) are very attractive for accurately measuring optical components with high slopes, such as aspheres. The METAS µ-CMM, which exhibits a single point measurement repeatability of a few nanometres, is routinely used for measurement services of microparts, including optical lenses. However, estimating the measurement uncertainty is very demanding. Because of the many combined influencing factors, an analytic determination of the uncertainty of parameters that are obtained by numerical fitting of the measured surface points is almost impossible. The application of numerical simulation (Monte Carlo methods) using a parametric fitting algorithm coupled with a virtual CMM based on a realistic model of the machine errors offers an ideal solution to this complex problem: to each measurement data point, a simulated measurement variation calculated from the numerical model of the METAS µ-CMM is added. Repeated several hundred times, these virtual measurements deliver the statistical data for calculating the probability density function, and thus the measurement uncertainty for each parameter. Additionally, any cross-correlation between parameters can be analyzed. This method can be applied for the calibration and uncertainty estimation of any parameter of the equation representing a geometric element. In this article, we present the numerical simulation model of the METAS µ-CMM and the application of a Monte Carlo method for the uncertainty estimation of measured asphere parameters.
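The Monte Carlo scheme can be illustrated on a toy geometric element: perturb every measured point with a draw from an assumed machine-error model, refit, repeat, and read the parameter uncertainty off the resulting population. The circle fit and the 1 µm point noise below are stand-ins, not the METAS µ-CMM error model:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: x^2 + y^2 = a*x + b*y + c."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = a / 2, b / 2
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# "measured" points on a 5 mm radius circle
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
x, y = 5.0 * np.cos(theta), 5.0 * np.sin(theta)

# Monte Carlo: add a simulated machine-error draw to every point, refit
sigma = 0.001  # assumed 1 micrometre per-point repeatability (mm)
radii = []
for _ in range(500):
    xs = x + rng.normal(0, sigma, x.size)
    ys = y + rng.normal(0, sigma, y.size)
    radii.append(fit_circle(xs, ys)[2])
r_mean, r_u = np.mean(radii), np.std(radii, ddof=1)
```

The histogram of `radii` approximates the probability density function of the fitted radius; collecting several parameters per repeat would likewise expose their cross-correlations.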
Uncertainty analysis for probabilistic pipe fracture evaluations in LBB applications
Energy Technology Data Exchange (ETDEWEB)
Rahman, S.; Ghadiali, N.; Wilkowski, G.
1997-04-01
During the NRC's Short Cracks in Piping and Piping Welds Program at Battelle, a probabilistic methodology was developed to conduct fracture evaluations of circumferentially cracked pipes for application to leak-rate detection. Later, in the IPIRG-2 program, several parameters that may affect leak-before-break and other pipe flaw evaluations were identified. This paper presents new results from several uncertainty analyses to evaluate the effects of normal operating stresses, normal plus safe-shutdown earthquake stresses, off-centered cracks, restraint of pressure-induced bending, and dynamic and cyclic loading rates on the conditional failure probability of piping systems in BWRs and PWRs. For each parameter, the sensitivity to the conditional probability of failure, and hence its importance to probabilistic leak-before-break evaluations, was determined.
Energy Technology Data Exchange (ETDEWEB)
Heath, Garvin [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Warner, Ethan [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Steinberg, Daniel [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Brandt, Adam [Stanford Univ., CA (United States)
2015-08-01
A growing number of studies have raised questions regarding uncertainties in our understanding of methane (CH4) emissions from fugitives and venting along the natural gas (NG) supply chain. In particular, a number of measurement studies have suggested that actual levels of CH4 emissions may be higher than estimated by the EPA's U.S. GHG Emission Inventory. We reviewed the literature to identify and assess these studies.
Modelling and Measurement Uncertainty Estimation for Integrated AFM-CMM Instrument
DEFF Research Database (Denmark)
Hansen, Hans Nørgaard; Bariani, Paolo; De Chiffre, Leonardo
2005-01-01
This paper describes modelling of an integrated AFM-CMM instrument, its calibration, and estimation of measurement uncertainty. Positioning errors were seen to limit the instrument performance. Software for off-line stitching of single AFM scans was developed and verified, which allows compensation of these errors. A measurement uncertainty of 0.8% was achieved for the case of surface mapping of a 1.2 × 1.2 mm² area consisting of 49 single AFM scanned areas.
Directory of Open Access Journals (Sweden)
Adamczak Stanisław
2014-08-01
Full Text Available The aim of this study was to estimate the measurement uncertainty for a material produced by additive manufacturing. The material investigated was FullCure 720 photocured resin, which was applied to fabricate tensile specimens with a Connex 350 3D printer based on PolyJet technology. The tensile strength of the specimens established through static tensile testing was used to determine the measurement uncertainty. There is a need for extensive research into the performance of model materials obtained via 3D printing, as they have not been studied as thoroughly as metal alloys or plastics, the most common structural materials. In this analysis, the measurement uncertainty was estimated using a larger number of samples than usual, i.e., thirty instead of the typical ten. The results can be very useful to engineers who design models and finished products using this material. The investigations also show how wide the scatter of results is.
New estimates of silicate weathering rates and their uncertainties in global rivers
Moon, Seulgi; Chamberlain, C. P.; Hilley, G. E.
2014-06-01
This study estimated the catchment- and global-scale weathering rates of silicate rocks from global rivers using global compilation datasets from the GEMS/Water and HYBAM. These datasets include both time-series of chemical concentrations of major elements and synchronous discharge. Using these datasets, we first examined the sources of uncertainties in catchment and global silicate weathering rates. Then, we proposed future sampling strategies and geochemical analyses to estimate accurate silicate weathering rates in global rivers and to reduce uncertainties in their estimates. For catchment silicate weathering rates, we considered uncertainties due to sampling frequency and variability in river discharge, concentration, and attribution of weathering to different chemical sources. Our results showed that uncertainties in catchment-scale silicate weathering rates were due mostly to the variations in discharge and cation fractions from silicate substrates. To calculate unbiased silicate weathering rates accounting for the variations from discharge and concentrations, we suggest that at least 10 and preferably ∼40 temporal chemical data points with synchronous discharge from each river are necessary. For the global silicate weathering rate, we examined uncertainties from infrequent sampling within an individual river, the extrapolation from limited rivers to a global flux, and the inverse model selections for source differentiation. For this weathering rate, we found that the main uncertainty came from the extrapolation to the global flux and the model configurations of source differentiation methods. This suggests that to reduce the uncertainties in the global silicate weathering rates, coverage of synchronous datasets of river chemistry and discharge to rivers from tectonically active regions and volcanic provinces must be extended, and catchment-specific silicate end-members for those rivers must be characterized. With current available synchronous datasets, we
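The effect of sampling frequency on flux uncertainty can be sketched with a synthetic chemistry-discharge record: subsample n synchronous (C, Q) pairs per year many times and look at the spread of the resulting flux estimates. All numbers below (discharge regime, dilution exponent) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic year of daily discharge Q and cation concentration C, with the
# dilution behaviour (C falls as Q rises) typical of silicate-derived cations
days = np.arange(365)
Q = 300.0 + 150.0 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 20, 365)
C = 8.0 * (Q / Q.mean()) ** -0.3
true_flux = np.mean(C * Q)  # full-record flux-proportional quantity

def relative_flux_uncertainty(n, trials=2000):
    """Spread of the annual flux estimate when only n synchronous (C, Q)
    grab samples are taken, relative to the full-record flux."""
    est = [np.mean(C[idx] * Q[idx])
           for idx in (rng.choice(365, n, replace=False)
                       for _ in range(trials))]
    return np.std(est) / true_flux

u10, u40 = relative_flux_uncertainty(10), relative_flux_uncertainty(40)
```

Consistent with the abstract's recommendation, the relative uncertainty at ~40 samples per year is markedly smaller than at 10.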
Elbers, J.A.; Jacobs, C.M.J.; Kruijt, B.; Jans, W.W.P.; Moors, E.J.
2011-01-01
Values for annual NEP of micrometeorological tower sites are usually published without an estimate of associated uncertainties. Few authors quantify total uncertainty of annual NEP. Moreover, different methods to assess total uncertainty are applied, usually addressing only one aspect of the
Xie, Y.; Cook, P. G.; Simmons, C. T.; Partington, D.; Crosbie, R.; Batelaan, O.
2016-12-01
Coupled soil-vegetation-atmosphere models have become increasingly popular for estimating groundwater recharge, because of the integration of carbon, energy and water balances. The carbon and energy balances act to constrain the water balance and as a result should reduce the uncertainty of groundwater recharge estimates. However, the addition of carbon and energy balances also introduces a large number of plant physiological parameters which complicates the estimation of groundwater recharge. Moreover, this method often relies on existing pedotransfer functions to derive soil water retention curve parameters and saturated hydraulic conductivity from soil attribute data. The choice of a pedotransfer function is usually subjective and several pedotransfer functions may be fit for the purpose. These different pedotransfer functions (and thus the uncertainty of soil water retention curve parameters and saturated hydraulic conductivity) are likely to increase the prediction uncertainty of recharge estimates. In this study, we aim to assess the potential uncertainty of groundwater recharge when using a coupled soil-vegetation-atmosphere modelling method. The widely used WAter Vegetation Energy and Solute (WAVES) modelling code was used to perform simulations of different water balances in order to estimate groundwater recharge in the Campaspe catchment in southeast Australia. We carefully determined the ranges of the vegetation parameters based upon a literature review. We also assessed a number of existing pedotransfer functions and selected the four most appropriate. Then the Monte Carlo analysis approach was employed to examine potential uncertainties introduced by different types of errors. Preliminary results suggest that for a mean rainfall of about 500 mm/y and annual pasture vegetation, the estimated recharge may range from 10 to 150 mm/y due to the uncertainty in vegetation parameters. This upper bound of the recharge range may double to 300 mm/y if different
Energy Technology Data Exchange (ETDEWEB)
Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)
2013-07-01
The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where ~50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38 %. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than
Uncertainty of mass discharge estimation from contaminated sites at screening level
DEFF Research Database (Denmark)
Thomsen, Nanna Isbak; Troldborg, M.; McKnight, Ursula S.
Mass discharge estimates (mass/time) have been proposed as a useful metric in risk assessment, because they provide an estimate of the impact of a contaminated site on a given water resource and allow for the comparison of impact between different sites, so that only the sites that present an actual risk are further investigated and perhaps later remediated. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from poorly characterised contaminant point sources on the local scale; techniques for estimating dynamic uncertainty are not currently available for such sites. The method is illustrated for a site where perchloroethylene has contaminated a clay till aquitard overlying a limestone aquifer. The nature of the geology and the exact shape of the source are unknown. The decision factors in the Bayesian belief network for the site are presented. Model output is shown in the form of time-varying mass discharge.
Certain uncertainty: using pointwise error estimates in super-resolution microscopy
Lindén, Martin; Amselem, Elias; Elf, Johan
2016-01-01
Point-wise localization of individual fluorophores is a critical step in super-resolution microscopy and single particle tracking. Although the methods are limited by the accuracy in localizing individual fluorophores, this point-wise accuracy has so far only been estimated by theoretical best-case approximations, disregarding, for example, motion blur, out-of-focus broadening of the point spread function, and time-varying changes in the fluorescence background. Here, we show that pointwise localization uncertainty can be accurately estimated directly from imaging data using a Laplace approximation constrained by simple microscope properties. We further demonstrate that the estimated localization uncertainty can be used to improve downstream quantitative analysis, such as estimation of diffusion constants and detection of changes in molecular motion patterns. Most importantly, the accuracy of actual point localizations in live cell super-resolution microscopy can be improved beyond the information theoretic lo...
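A minimal one-dimensional version of the Laplace idea: fit a Gaussian spot model to Poisson-noise pixel counts, then take the curvature of the negative log-likelihood at the optimum as the inverse localization variance. The PSF width, photon count, and background below are arbitrary, and grid search stands in for a proper optimizer:

```python
import numpy as np

rng = np.random.default_rng(7)

# one pixel row of a fluorophore image: Gaussian spot + Poisson noise
x = np.arange(-10, 11)
true_mu, psf_sigma, photons, bg = 1.3, 2.0, 500.0, 2.0

def model(mu):
    g = np.exp(-(x - mu) ** 2 / (2 * psf_sigma**2))
    return bg + photons * g / (psf_sigma * np.sqrt(2 * np.pi))

counts = rng.poisson(model(true_mu))

def neg_log_like(mu):
    lam = model(mu)
    return np.sum(lam - counts * np.log(lam))  # Poisson NLL up to a constant

# maximum likelihood by grid refinement, then Laplace approximation:
# localization variance = inverse curvature of the NLL at the optimum
grid = np.linspace(-3, 5, 8001)
mu_hat = grid[np.argmin([neg_log_like(m) for m in grid])]
h = 1e-3
curv = (neg_log_like(mu_hat + h) - 2 * neg_log_like(mu_hat)
        + neg_log_like(mu_hat - h)) / h**2
loc_sigma = 1.0 / np.sqrt(curv)
```

With ~500 photons and a 2-pixel PSF, `loc_sigma` lands near the familiar sigma/sqrt(N) scale, roughly 0.1 pixel, and is computed from the data alone rather than from a best-case formula.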
Managing Uncertainty in ERP Project Estimation Practice: An Industrial Case Study
Daneva, Maia; Jedlitschka, A.; Salo, O.
2008-01-01
Uncertainty is a crucial element in managing projects. This paper’s aim is to shed some light on the issue of uncertain context factors when estimating the effort needed for implementing enterprise resource planning (ERP) projects. We outline a solution approach to this issue. It complementarily
Balancing uncertainty of context in ERP project estimation: an approach and a case study
Daneva, Maia
2010-01-01
The increasing demand for Enterprise Resource Planning (ERP) solutions as well as the high rates of troubled ERP implementations and outright cancellations calls for developing effort estimation practices to systematically deal with uncertainties in ERP projects. This paper describes an approach -
Van Uffelen, Lora J; Nosal, Eva-Marie; Howe, Bruce M; Carter, Glenn S; Worcester, Peter F; Dzieciuch, Matthew A; Heaney, Kevin D; Campbell, Richard L; Cross, Patrick S
2013-10-01
Four acoustic Seagliders were deployed in the Philippine Sea November 2010 to April 2011 in the vicinity of an acoustic tomography array. The gliders recorded over 2000 broadband transmissions at ranges up to 700 km from moored acoustic sources as they transited between mooring sites. The precision of glider positioning at the time of acoustic reception is important to resolve the fundamental ambiguity between position and sound speed. The Seagliders utilized GPS at the surface and a kinematic model below for positioning. The gliders were typically underwater for about 6.4 h, diving to depths of 1000 m and traveling on average 3.6 km during a dive. Measured acoustic arrival peaks were unambiguously associated with predicted ray arrivals. Statistics of travel-time offsets between received arrivals and acoustic predictions were used to estimate range uncertainty. Range (travel time) uncertainty between the source and the glider position from the kinematic model is estimated to be 639 m (426 ms) rms. Least-squares solutions for glider position estimated from acoustically derived ranges from 5 sources differed by 914 m rms from modeled positions, with estimated uncertainty of 106 m rms in horizontal position. Error analysis included 70 ms rms of uncertainty due to oceanic sound-speed variability.
Kim, Ho Sung
2013-01-01
A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final…
Sebacher, B.; Hanea, R.G.; Heemink, A.
2013-01-01
In the past years, many applications of history matching methods in general, and the ensemble Kalman filter in particular, have been proposed, especially in order to estimate fields that provide uncertainty in the stochastic process defined by the dynamical system of hydrocarbon recovery. Such fields can
Estimating uncertainty and reliability of social network data using Bayesian inference.
Farine, Damien R; Strandburg-Peshkin, Ariana
2015-09-01
Social network analysis provides a useful lens through which to view the structure of animal societies, and as a result its use is increasingly widespread. One challenge that many studies of animal social networks face is dealing with limited sample sizes, which introduces the potential for a high level of uncertainty in estimating the rates of association or interaction between individuals. We present a method based on Bayesian inference to incorporate uncertainty into network analyses. We test the reliability of this method at capturing both local and global properties of simulated networks, and compare it to a recently suggested method based on bootstrapping. Our results suggest that Bayesian inference can provide useful information about the underlying certainty in an observed network. When networks are well sampled, observed networks approach the real underlying social structure. However, when sampling is sparse, Bayesian inferred networks can provide realistic uncertainty estimates around edge weights. We also suggest a potential method for estimating the reliability of an observed network given the amount of sampling performed. This paper highlights how relatively simple procedures can be used to estimate uncertainty and reliability in studies using animal social network analysis.
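The core of the approach can be sketched with a conjugate Beta-Binomial model for a single edge: the posterior mean serves as the edge weight and the posterior standard deviation as its uncertainty, which shrinks with sampling effort. This is a generic sketch under a flat Beta(1, 1) prior, not the authors' exact model:

```python
import numpy as np

def posterior_edge_weight(together, joint_periods, a=1.0, b=1.0):
    """Beta-Binomial posterior for an association rate: `together` joint
    sightings out of `joint_periods` sampling periods, Beta(a, b) prior."""
    pa, pb = a + together, b + joint_periods - together
    mean = pa / (pa + pb)
    var = pa * pb / ((pa + pb) ** 2 * (pa + pb + 1))
    return mean, np.sqrt(var)

# sparse sampling: same observed rate, wide uncertainty
m_sparse, s_sparse = posterior_edge_weight(2, 4)
# dense sampling: same observed rate, narrow uncertainty
m_dense, s_dense = posterior_edge_weight(50, 100)
```

Both cases give a posterior mean of 0.5, but the sparsely sampled edge carries a much larger standard deviation, which is exactly the information a bootstrap over a well-sampled network tends to understate.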
Measuring Cross-Section and Estimating Uncertainties with the fissionTPC
Energy Technology Data Exchange (ETDEWEB)
Bowden, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manning, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sangiorgio, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seilhan, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-01-30
The purpose of this document is to outline the prescription for measuring fission cross-sections with the NIFFTE fissionTPC and estimating the associated uncertainties. As such it will serve as a work planning guide for NIFFTE collaboration members and facilitate clear communication of the procedures used to the broader community.
C. J. O'Donnell; Woodland, A D
1995-01-01
A model of producer behavior, which explicitly accounts for both output price and production uncertainty, is formulated and estimated. If the production technology is multiplicatively separable in its deterministic and stochastic components, then the expected utility maximization problem implies cost minimization for planned or expected output. Consequently, our empirical model of three lamb- and wool-producing sectors in Australia involves the estimation of a system of input cost share and c...
Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals
Energy Technology Data Exchange (ETDEWEB)
Rypdal, Kristin; Baritz, Rainer
2002-07-01
Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol it has been a concern that uptake of carbon in forest sinks can be difficult to verify. The reasons for the large uncertainties are high temporal and spatial variability and a lack of representative estimation parameters. Additional uncertainties will be a consequence of definitions made in the Kyoto Protocol reporting. In the Nordic countries the national forest inventories will be very useful for estimating changes in carbon stocks. The main uncertainty lies in the conversion from changes in tradable timber to changes in total carbon biomass. The uncertainties in the emissions of the non-CO2 gases from forest soils are particularly high. On the other hand, the removals reported under the Kyoto Protocol will only be a fraction of the total uptake and are not expected to constitute a high share of the total inventory. It is also expected that the Nordic countries will be able to implement a high-tier methodology. As a consequence, total uncertainties may not be extremely high. (Author)
Energy Technology Data Exchange (ETDEWEB)
Habte, Aron M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sengupta, Manajit [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-12-19
It is essential to apply a traceable and standard approach to determine the uncertainty of solar resource data. Solar resource data are used for all phases of solar energy conversion projects, from the conceptual phase to routine solar power plant operation, and to determine performance guarantees of solar energy conversion systems. These guarantees are based on the available solar resource derived from a measurement station or a modeled data set such as the National Solar Radiation Database (NSRDB). Therefore, quantifying the uncertainty of these data sets provides confidence to financiers, developers, and site operators of solar energy conversion systems and ultimately reduces deployment costs. In this study, we implemented the Guide to the Expression of Uncertainty in Measurement (GUM) to quantify the overall uncertainty of the NSRDB data. First, we quantify measurement uncertainty; then we determine each uncertainty statistic of the NSRDB data and combine them using the root-sum-of-squares method. The statistics were derived by comparing the NSRDB data to seven measurement stations from the National Oceanic and Atmospheric Administration's Surface Radiation Budget Network, the National Renewable Energy Laboratory's Solar Radiation Research Laboratory, and the Atmospheric Radiation Measurement program's Southern Great Plains Central Facility in Billings, Oklahoma. The evaluation was conducted for hourly values, daily totals, monthly mean daily totals, and annual mean monthly mean daily totals. Varying the time averages helps capture the temporal uncertainty of the specific modeled solar resource data required for each phase of a solar energy project; some phases require higher temporal resolution than others. Overall, by including the uncertainty of measurements of solar radiation made at ground stations, bias, and root mean square error, the NSRDB data demonstrated expanded uncertainty of 17 percent - 29 percent on hourly
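The GUM-style combination step reduces to a root-sum-of-squares of independent standard uncertainty components, scaled by a coverage factor. The component values below are hypothetical, not the NSRDB statistics:

```python
import numpy as np

def expanded_uncertainty(components_pct, k=2.0):
    """GUM-style combination: root-sum-of-squares of independent standard
    uncertainty components, times a coverage factor k (k = 2 ~ 95 %)."""
    return k * float(np.sqrt(np.sum(np.square(components_pct))))

# hypothetical components (percent): measurement 2.5, bias 5.0, RMSE-derived 6.0
u = expanded_uncertainty([2.5, 5.0, 6.0])
```

Because components add in quadrature, the largest term dominates: here the 6 % component contributes far more to the ~16 % expanded uncertainty than the 2.5 % measurement term.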
Energy Technology Data Exchange (ETDEWEB)
Lee, Kyung Hoon; Park, Ho Jin; Lee, Chung Chan; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
The purpose of this paper is to study the effect on output parameters in the lattice physics calculation due to input uncertainties such as manufacturing deviations from nominal values for material composition and geometric dimensions. In a nuclear design and analysis, the lattice physics calculations are usually employed to generate lattice parameters for the nodal core simulation and pin power reconstruction. These lattice parameters, which consist of homogenized few-group cross-sections, assembly discontinuity factors, and form-functions, can be affected by input uncertainties which arise from three different sources: 1) multi-group cross-section uncertainties, 2) the uncertainties associated with methods and modeling approximations utilized in lattice physics codes, and 3) fuel/assembly manufacturing uncertainties. In this paper, data provided by the light water reactor (LWR) uncertainty analysis in modeling (UAM) benchmark have been used as the manufacturing uncertainties. First, the effect of each input parameter has been investigated through sensitivity calculations at the fuel assembly level. Then, uncertainty in the prediction of the peaking factor due to the most sensitive input parameter has been estimated using the statistical sampling method, often called the brute force method. For our analysis, the two-dimensional transport lattice code DeCART2D and its ENDF/B-VII.1-based 47-group library were used to perform the lattice physics calculation. Sensitivity calculations have been performed in order to study the influence of manufacturing tolerances on the lattice parameters. The manufacturing tolerance that has the largest influence on the k-inf is the fuel density. The second most sensitive parameter is the outer clad diameter.
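The brute-force statistical sampling step can be illustrated with a toy linear response of k-inf to fuel-density deviation. The sensitivity coefficient and tolerance below are invented for illustration, not DeCART2D results:

```python
import numpy as np

rng = np.random.default_rng(3)

# toy sensitivity model: k-inf responds linearly to fuel-density deviation
k_nominal = 1.30000
dk_per_pct_density = 0.0015  # assumed sensitivity, delta-k per % density

def sampled_kinf(n=1000, density_tol_pct=1.5):
    """Brute-force statistical sampling: draw manufacturing deviations
    within tolerance, evaluate the response, report the output spread."""
    dev = rng.uniform(-density_tol_pct, density_tol_pct, n)
    k = k_nominal + dk_per_pct_density * dev
    return k.mean(), k.std(ddof=1)

k_mean, k_std = sampled_kinf()
```

In the real analysis each sample would be a full lattice calculation rather than a one-line response, but the sampling logic and the resulting output statistics are the same.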
Marks, Harry M; Tohamy, Soumaya M; Tsui, Flora
2013-06-01
Because of numerous reported foodborne illness cases due to non-O157:H7 Shiga toxin-producing Escherichia coli (STEC) bacteria in the United States and elsewhere, interest in requiring better control of these pathogens in the food supply has increased. Successfully putting forth regulations depends upon cost-benefit analyses. Policy decisions often depend upon an evaluation of the uncertainty of the estimates used in such an analysis. This article presents an approach for estimating the uncertainties of estimated expected cost per illness and total annual costs of non-O157 STEC-related illnesses due to uncertainties associated with (i) recent FoodNet data and (ii) methodology proposed by Scallan et al. in 2011. The FoodNet data categorize illnesses regarding hospitalization and death. We obtained the illness-category costs from the foodborne illness cost calculator of the U.S. Department of Agriculture, Economic Research Service. Our approach for estimating attendant uncertainties differs from that of Scallan et al. because we used a classical bootstrap procedure for estimating uncertainty of an estimated parameter value (e.g., mean value), reflecting the design of the FoodNet database, whereas the other approach results in an uncertainty distribution that includes an extraneous contribution due to the underlying variability of the distribution of illnesses among different sites. For data covering 2005 through 2010, we estimate that the average cost per illness was about $450, with a 98% credible interval of $230 to $1,000. This estimate and range are based on estimations of about one death and 100 hospitalizations per 34,000 illnesses. Our estimate of the total annual cost is about $51 million, with a 98% credible interval of $19 million to $122 million. The uncertainty distribution for total annual cost is approximated well by a lognormal distribution, with mean and standard deviations for the log-transformed costs of 10.765 and 0.390, respectively.
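The classical bootstrap described above is simple to sketch: resample the illness records with replacement, recompute the mean cost each time, and take quantiles of the resampled means. The synthetic cost mix below (many mild cases, a few expensive hospitalizations) is invented, not FoodNet data:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_mean_interval(costs, n_boot=5000, level=0.98):
    """Classical bootstrap: resample records with replacement, recompute
    the mean cost each time, take quantiles of the resampled means."""
    means = np.array([rng.choice(costs, costs.size, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.quantile(means, [(1 - level) / 2, (1 + level) / 2])
    return means.mean(), lo, hi

# synthetic skewed costs: many mild cases, a few expensive hospitalizations
costs = np.concatenate([rng.exponential(300.0, 980),
                        rng.exponential(15000.0, 20)])
center, lo, hi = bootstrap_mean_interval(costs)
```

Because resampling is over individual records only, the interval reflects estimation uncertainty in the mean, without the extraneous between-site variability the authors criticize in the alternative approach.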
Uncertainties estimation in surveying measurands: application to lengths, perimeters and areas
Covián, E.; Puente, V.; Casero, M.
2017-10-01
The present paper develops a series of methods for the estimation of uncertainty when measuring certain measurands of interest in surveying practice, such as point elevations at a given planimetric position within a triangle mesh, 2D and 3D lengths (including enclosure perimeters), 2D areas (horizontal surfaces) and 3D areas (natural surfaces). The basis for the proposed methodology is the law of propagation of variance-covariance, which, applied to the corresponding model for each measurand, allows calculating the resulting uncertainty from known measurement errors. The methods are tested first in a small example, with a limited number of measurement points, and then in two real-life measurements. In addition, the proposed methods have been incorporated into commercial software used in the field of surveying engineering and focused on the creation of digital terrain models. The aim of this evolution is, firstly, to comply with the guidelines of the BIPM (Bureau International des Poids et Mesures), the international reference agency in the field of metrology, in relation to the determination and expression of uncertainty; and secondly, to improve the quality of the measurement by indicating the uncertainty associated with a given level of confidence. The conceptual and mathematical developments for the uncertainty estimation in the aforementioned cases were conducted by researchers from the AssIST group at the University of Oviedo, eventually resulting in several different mathematical algorithms implemented in the form of MATLAB code. Based on these prototypes, technicians incorporated the referred functionality into commercial software, developed in C++. As a result of this collaboration, in early 2016 a new version of this commercial software was made available, which will be the first, as far as the authors are aware, that incorporates the possibility of estimating the uncertainty for a given level of confidence when computing the aforementioned surveying
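For a 2D length, the law of propagation of variance-covariance reduces to a one-line Jacobian product. A minimal sketch with an assumed 1 cm per-coordinate standard uncertainty and independent endpoints:

```python
import numpy as np

def length_and_uncertainty(p1, p2, cov1, cov2):
    """Propagate endpoint variance-covariance to a 2D distance:
    var(L) = J C1 J^T + J C2 J^T for independent endpoints,
    where J = (p2 - p1) / L is the Jacobian of L w.r.t. p2."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    dist = np.hypot(d[0], d[1])
    J = d / dist  # dL/dp2; dL/dp1 = -J, sign cancels in the quadratic form
    var = J @ cov1 @ J + J @ cov2 @ J
    return dist, np.sqrt(var)

C = np.diag([0.01**2, 0.01**2])  # assumed 1 cm std per coordinate (m^2)
dist, u_dist = length_and_uncertainty((0.0, 0.0), (30.0, 40.0), C, C)
```

A perimeter follows by summing segment variances (with covariance terms for shared vertices), and the same Jacobian pattern extends to 3D lengths and areas.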
Uncertainty in age-specific harvest estimates and consequences for white-tailed deer management
Collier, B.A.; Krementz, D.G.
2007-01-01
Age structure proportions (the proportion of harvested individuals within each age class) are commonly used as support for regulatory restrictions and as input for deer population models. Such use requires critical evaluation when harvest regulations force hunters to selectively harvest specific age classes, because of the impact on the underlying population age structure. We used a stochastic population simulation model to evaluate the impact of using harvest proportions to evaluate changes in population age structure under a selective harvest management program at two scales. Using harvest proportions to parameterize the age-specific harvest segment of the model at the local scale showed that predictions of post-harvest age structure did not vary depending on whether selective harvest criteria were in use or not. At the county scale, yearling frequency in the post-harvest population increased, but model predictions indicated that the post-harvest population size of 2.5-year-old males would decline below levels found before implementation of the antler restriction, reducing the number of individuals recruited into older age classes. Across the range of age-specific harvest rates modeled, our simulation predicted that underestimation of age-specific harvest rates has considerable influence on predictions of post-harvest population age structure. We found that the consequence of uncertainty in harvest rates corresponds to uncertainty in predictions of residual population structure, and this correspondence is proportional to scale. Our simulations also indicate that regardless of the use of harvest proportions or harvest rates, at either the local or county scale the modeled SHC had a high probability (>0.60 and >0.75, respectively) of eliminating recruitment into >2.5-year-old age classes. Although frequently used to increase population age structure, our modeling indicated that selective harvest criteria can decrease or eliminate the number of white-tailed deer recruited into older
Ramsey, Michael H; Geelhoed, Bastiaan; Wood, Roger; Damant, Andrew P
2011-04-07
A realistic estimate of the uncertainty of a measurement result is essential for its reliable interpretation. Recent methods for such estimation include the contribution to uncertainty from the sampling process, but they only include the random and not the systematic effects. Sampling Proficiency Tests (SPTs) have been used previously to assess the performance of samplers, but the results can also be used to evaluate measurement uncertainty, including the systematic effects. A new SPT conducted on the determination of moisture in fresh butter is used to exemplify how SPT results can be used not only to score samplers but also to estimate uncertainty. The comparison between uncertainty evaluated within- and between-samplers is used to demonstrate that sampling bias is causing the estimates of expanded relative uncertainty to rise by over a factor of two (from 0.39% to 0.87%) in this case. General criteria are given for the experimental design and the sampling target that are required to apply this approach to measurements on any material. © The Royal Society of Chemistry 2011
Hauge, Ingrid Helen Ryste; Olerud, Hilde Merete
2012-01-01
The aim of this study was to reflect on the estimation of the mean glandular dose for women in Norway aged 50–69 y. Estimation of mean glandular dose (MGD) has been conducted by applying the method of Dance et al. (1990, 2000, 2009). Uncertainties in the thickness of approximately ±10 mm add uncertainties in the MGD of approximately ±10 %, and uncertainty in the glandularity of ±0 % will lead to an uncertainty in the MGD of ±4 %. However, the inherent uncertainty in the air kerma, given by t...
DEFF Research Database (Denmark)
Christensen, Hanne Bjerre; Poulsen, Mette Erecius; Pedersen, Mikael
2003-01-01
The estimation of uncertainty of an analytical result has become important in analytical chemistry. It is especially difficult to determine uncertainties for multiresidue methods, e.g. for pesticides in fruit and vegetables, as the varieties of pesticide/commodity combinations are many. ... In the present study, recommendations from the International Organisation for Standardisation's (ISO) Guide to the Expression of Uncertainty and the EURACHEM/CITAC guide Quantifying Uncertainty in Analytical Measurements were followed to estimate the expanded uncertainties for 153 pesticides in fruit...
The combined method for uncertainty evaluation in electromagnetic radiation measurement
Directory of Open Access Journals (Sweden)
Kovačević Aleksandar M.
2014-01-01
Full Text Available Electromagnetic radiation of all frequencies represents one of the most common and fastest growing environmental influences. All populations are now exposed to varying degrees of electromagnetic radiation, and the levels will continue to increase as technology advances. An electronic or electrical product should not generate electromagnetic radiation that may impact the environment. In addition, electromagnetic radiation measurement results need to be accompanied by quantitative statements about their accuracy. This is particularly important when decisions about product specifications are taken. This paper presents an uncertainty budget for disturbance power measurements of equipment as part of electromagnetic radiation testing. We propose a model which uses a mixed distribution for uncertainty evaluation. The evaluation of the probability density function for the measurand has been done using the Monte Carlo method and a modified least-squares method (the combined method). For illustration, this paper presents mixed distributions of two normal distributions, and of normal and rectangular distributions, respectively. [Project of the Ministry of Science of the Republic of Serbia, nos. III 43009 and 171007]
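A minimal sketch of the mixed-distribution idea, using plain Monte Carlo (in the style of GUM Supplement 1) to combine a normal and a rectangular input; the measurement model and all numbers below are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Hypothetical disturbance-power model: instrument reading plus a mismatch
# correction. The reading error is normal (Type A), the mismatch correction
# rectangular (Type B), so the output PDF is a mixture of the two shapes.
reading = rng.normal(60.0, 0.5, N)      # dBpW, normal with sd 0.5 dB
mismatch = rng.uniform(-1.0, 1.0, N)    # rectangular, half-width a = 1 dB
y = reading + mismatch

mean = y.mean()
u = y.std(ddof=1)  # Monte Carlo estimate of combined standard uncertainty
# Analytic check for comparison: sqrt(0.5^2 + a^2/3) ≈ 0.764 dB
print(round(mean, 2), round(u, 3))
```

The Monte Carlo estimate reproduces the analytic root-sum-of-squares value because the two inputs are independent; the advantage of the sampling approach is that it also yields the full (non-Gaussian) output distribution.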
Measurement quality and uncertainty evaluation in civil engineering research
Directory of Open Access Journals (Sweden)
Silva Ribeiro A.
2013-01-01
Full Text Available Civil engineering is a branch of science that covers a broad range of areas where experimental procedures often play an important role. Research in this field is usually supported by experimental structures able to test physical and mathematical models and to provide measurement results with acceptable accuracy. To assure measurement quality, a metrological probabilistic approach can provide valuable mathematical and computational tools especially suited to the study, evaluation and improvement of measurement processes in their different components (modeling, instrumentation performance, data processing, data validation and traceability), emphasizing measurement uncertainty evaluation as a tool for the analysis of results and for promoting the quality and capacity associated with decision-making. This paper presents some of the research carried out by the metrology division of the Portuguese civil engineering research institutes, focused on the contribution of measurement uncertainty studies to a variety of frameworks, such as testing for metrological characterization and physical and mathematical modeling. Experimental data are used to illustrate practical cases.
Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how this relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when considering prognostics in making critical decisions.
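To make the stochastic-process view concrete, here is a toy Kalman-filter degradation tracker whose remaining-useful-life (RUL) estimate carries both a mean and a spread; the linear degradation model, the failure threshold, and all noise levels are invented for illustration and are not the authors' electronics application.

```python
import numpy as np

# Toy linear degradation x_k = x_{k-1} + b*dt + process noise, tracked by a
# scalar Kalman filter; RUL is the (random) first time x crosses a threshold.
dt, b, q, r = 1.0, 0.1, 1e-4, 0.05**2  # step, rate, process var, meas. var
x_est, P = 0.0, 1.0                     # initial state estimate and variance
true_x = 0.0
rng = np.random.default_rng(0)
for k in range(50):
    true_x += b * dt + rng.normal(0, q**0.5)       # simulate true wear
    z = true_x + rng.normal(0, r**0.5)             # noisy measurement
    x_pred, P_pred = x_est + b * dt, P + q         # predict
    K = P_pred / (P_pred + r)                      # Kalman gain
    x_est = x_pred + K * (z - x_pred)              # update
    P = (1 - K) * P_pred

threshold = 10.0
# First-order RUL distribution: state uncertainty mapped through the rate b,
# so the RUL estimate is a PDF (mean and spread), not a single number
rul_mean = (threshold - x_est) / b
rul_sd = P**0.5 / b
print(round(rul_mean, 1), round(rul_sd, 3))
```

The point of the sketch is the last two lines: the filter's state covariance turns the RUL into a distribution, which is exactly the object whose interpretation the article cautions about.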
Estimation of uncertainty in measurement of alkalinity using the GTC 51 guide
Alzate Rodríguez, Edwin Jhovany
2008-01-01
This document gives guidance for the estimation of uncertainty in the analysis of alkalinity in water, based on the approach taken in the ISO "Guide to the Expression of Uncertainty in Measurement" (GTC 51).
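A hedged sketch of the kind of GUM/GTC 51 budget the guide describes, for a standard titration-based model alkalinity = C · V_titrant · 50000 / V_sample; all input values and standard uncertainties below are assumed, not taken from the guide.

```python
import math

# Alkalinity (mg CaCO3/L) = C * V_titrant * 50000 / V_sample
# Illustrative inputs with assumed standard uncertainties:
C, u_C = 0.02, 0.0001    # acid concentration, mol/L
Vt, u_Vt = 5.2, 0.02     # titrant volume, mL
Vs, u_Vs = 100.0, 0.08   # sample volume, mL

alk = C * Vt * 50000 / Vs
# For a purely multiplicative model, relative standard uncertainties
# combine in quadrature (GUM law of propagation of uncertainty)
u_rel = math.sqrt((u_C / C)**2 + (u_Vt / Vt)**2 + (u_Vs / Vs)**2)
U = 2 * alk * u_rel  # expanded uncertainty, coverage factor k = 2
print(round(alk, 1), round(U, 2))  # 52.0 mg/L, U ≈ 0.66 mg/L
```

The budget also makes the dominant contributor visible: here the titrant-volume term `u_Vt / Vt` is the largest of the three relative uncertainties.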
Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb
2017-10-01
In addition to the numerous planning and executive challenges, underground excavation in urban areas is always followed by certain destructive effects, especially on the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the amount of surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted amounts with the actual data from instrumentation was employed to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, and the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
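For reference, the empirical (Peck) method mentioned above describes the transverse settlement trough as a Gaussian curve, S(x) = S_max · exp(−x² / (2i²)), with i the trough-width parameter. The sketch below uses the cited maximum settlement of 1.86 cm but an assumed value for i, so the profile is illustrative only.

```python
import math

# Peck's empirical Gaussian settlement trough
def peck_settlement(x, s_max, i):
    """Surface settlement at transverse distance x from the tunnel axis."""
    return s_max * math.exp(-x**2 / (2 * i**2))

s_max = 1.86  # cm, the empirical maximum settlement cited above
i = 12.0      # m, assumed trough-width parameter (not from the paper)
profile = [round(peck_settlement(x, s_max, i), 2) for x in range(0, 40, 10)]
print(profile)  # settlement decays away from the tunnel centreline
```

In practice i is estimated from the tunnel depth (i ≈ K · z0), which is one of the inputs whose uncertainty the paper's comparison against instrumentation data quantifies.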
Cui, Ming; Xu, Lili; Wang, Huimin; Ju, Shaoqing; Xu, Shuizhu; Jing, Rongrong
2017-12-01
Measurement uncertainty (MU) is a metrological concept which can be used for objectively estimating the quality of test results in medical laboratories. The Nordtest guide recommends an approach that uses both internal quality control (IQC) and external quality assessment (EQA) data to evaluate the MU. Bootstrap resampling is a mathematical-statistics method that simulates an unknown distribution from an existing small sample of data, in effect transforming the small sample into a large one. However, there have been no reports of the utilization of this method in medical laboratories. Thus, this study applied the Nordtest guide approach based on bootstrap resampling for estimating the MU. We estimated the MU for the white blood cell (WBC) count, red blood cell (RBC) count, hemoglobin (Hb), and platelets (Plt). First, we used 6 months of IQC data and 12 months of EQA data to calculate the MU according to the Nordtest method. Second, we combined the Nordtest method and bootstrap resampling with the quality control data and calculated the MU using MATLAB software. We then compared the MU results obtained using the two approaches. The expanded uncertainty results determined for WBC, RBC, Hb, and Plt using the bootstrap resampling method were 4.39%, 2.43%, 3.04%, and 5.92%, respectively, and 4.38%, 2.42%, 3.02%, and 6.00% with the existing quality control data (U, k=2). For WBC, RBC, Hb, and Plt, the differences between the results obtained using the two methods were lower than 1.33%. The expanded uncertainty values were all less than the target uncertainties. The bootstrap resampling method allows the statistical analysis of the MU. Combining the Nordtest method and bootstrap resampling is considered a suitable alternative method for estimating the MU. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
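A simplified sketch of the combined Nordtest/bootstrap idea: resample IQC data to estimate the within-laboratory reproducibility component, then combine it in quadrature with a bias component from EQA. The IQC values and the bias term below are invented for illustration and do not reproduce the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical IQC results for one control level (e.g. WBC, x10^9/L);
# in practice this would be months of internal quality control data.
iqc = np.array([5.1, 5.3, 4.9, 5.2, 5.0, 5.4, 5.1, 5.2, 4.8, 5.3,
                5.0, 5.2, 5.1, 4.9, 5.3, 5.2, 5.0, 5.1, 5.2, 5.0])

# Bootstrap the within-lab reproducibility component u(Rw) as a CV%
B = 5000
cvs = np.empty(B)
for b in range(B):
    s = rng.choice(iqc, size=iqc.size, replace=True)  # resample with replacement
    cvs[b] = s.std(ddof=1) / s.mean() * 100

u_rw = cvs.mean()               # bootstrap estimate of u(Rw), in %
u_bias = 1.0                    # assumed bias component from EQA, in %
U = 2 * np.hypot(u_rw, u_bias)  # expanded uncertainty, coverage factor k = 2
print(round(u_rw, 2), round(U, 2))
```

The bootstrap replaces a single point estimate of the reproducibility CV with a full sampling distribution, which is what makes the statistical analysis of the MU possible from a small IQC sample.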
Evaluating the power investment options with uncertainty in climate policy
Energy Technology Data Exchange (ETDEWEB)
Yang Ming [International Energy Agency, 9, rue de la Federation, F-75739 Paris Cedex 15 (France)], E-mail: ming.yang@iea.org; Blyth, William [Oxford Energy Associates, 28 Stile Road, Oxford OX3 8AQ (United Kingdom); Bradley, Richard [International Energy Agency, 9, rue de la Federation, F-75739 Paris Cedex 15 (France); Bunn, Derek [London Business School, Regent's Park, London NW1 4SA (United Kingdom); Clarke, Charlie; Wilson, Tom [Electric Power Research Institute, 3420 Hillview Avenue, Palo Alto, California 94304 (United States)
2008-07-15
This paper uses a real options approach (ROA) for analysing the effects of government climate policy uncertainty on private investors' decision-making in the power sector. It presents an analysis undertaken by the International Energy Agency (IEA) that implements ROA within a dynamic programming approach for technology investment choice. Case studies for gas, coal and nuclear power investment are undertaken with the model. Illustrative results from the model indicate four broad conclusions: i) climate change policy risks can become large if there is only a short time between a future climate policy event such as post-2012 and the time when the investment decision is being made; ii) the way in which CO2 and fuel price variations feed through to electricity price variations is an important determinant of the overall investment risk that companies will face; iii) investment risks vary according to the technology being considered, with nuclear power appearing to be particularly exposed to fuel and CO2 price risks under various assumptions; and iv) the government will be able to reduce investors' risks by implementing long-term (say 10 years) rather than short-term (say 5 years) climate change policy frameworks. Contributions of this study include: (1) having created a step function with a stochastic volume of jump at a particular time to simulate a carbon price shock under a particular climate policy event; (2) quantifying the implicit risk premium of carbon price uncertainty to investors in new capacity; (3) evaluating carbon price risk alongside energy price risk in investment decision-making; and (4) demonstrating ROA to be a useful tool to quantify the impacts of climate change policy uncertainty on power investment.
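The "step function with a stochastic volume of jump" can be sketched as a Monte Carlo over carbon-price paths that stay flat until a policy event and then jump by a random amount; the plant economics below are purely illustrative, not the IEA model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Carbon price: flat at co2_0 until the policy event year, then jumps by a
# random (truncated-normal) amount -- the stochastic step function.
years, policy_year, n = 20, 5, 10_000
margin = 15.0        # $/MWh gross margin before carbon costs (assumed)
intensity = 0.8      # tCO2/MWh emission intensity (assumed)
co2_0 = 10.0         # $/t before the policy event (assumed)
jump = rng.normal(15.0, 10.0, n).clip(min=0.0)  # stochastic jump size

r = 0.08
disc = (1 + r) ** -np.arange(1, years + 1)      # discount factors
co2 = np.full((n, years), co2_0)
co2[:, policy_year:] += jump[:, None]           # apply the step on each path
cash = margin - intensity * co2                 # $/MWh net margin per year
npv = (cash * disc).sum(axis=1)                 # NPV per MWh of annual output

# The spread of NPVs across paths is the policy risk investors face
print(round(npv.mean(), 1), round(npv.std(), 1))
```

A full ROA treatment would wrap this in a dynamic program that values waiting until after the policy event; even this static version shows how jump uncertainty widens the NPV distribution.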
Evaluation and attribution of OCO-2 XCO2 uncertainties
Worden, John R.; Doran, Gary; Kulawik, Susan; Eldering, Annmarie; Crisp, David; Frankenberg, Christian; O'Dell, Chris; Bowman, Kevin
2017-07-01
Evaluating and attributing uncertainties in total column atmospheric CO2 measurements (XCO2) from the OCO-2 instrument is critical for testing hypotheses related to the underlying processes controlling XCO2 and for developing quality flags needed to choose those measurements that are usable for carbon cycle science. Here we test the reported uncertainties of version 7 OCO-2 XCO2 measurements by examining variations of the XCO2 measurements and their calculated uncertainties within small regions (~100 km × 10.5 km) in which natural CO2 variability is expected to be small relative to variations imparted by noise or interferences. Over 39 000 of these small neighborhoods, comprising approximately 190 observations each, are used for this analysis. We find that a typical ocean measurement has a precision and accuracy of 0.35 and 0.24 ppm, respectively, for calculated precisions larger than ~0.25 ppm. These values are approximately consistent with the calculated errors of 0.33 and 0.14 ppm for the noise and interference error, assuming that the accuracy is bounded by the calculated interference error. The actual precision for ocean data becomes worse as the signal-to-noise ratio increases or the calculated precision decreases below 0.25 ppm, for reasons that are not well understood. A typical land measurement, both nadir and glint, is found to have a precision and accuracy of approximately 0.75 and 0.65 ppm, respectively, as compared to the calculated precision and accuracy of approximately 0.36 and 0.2 ppm. The differences in accuracy between ocean and land suggest that the accuracy of XCO2 data is likely related to interferences such as aerosols or surface albedo, as they vary less over ocean than land. The accuracy as derived here is also likely a lower bound, as it does not account for possible systematic biases between the regions used in this analysis.
Freni, Gabriele; Mannina, Giorgio
In urban drainage modelling, uncertainty analysis is of undoubted necessity. However, uncertainty analysis in urban water-quality modelling is still in its infancy and only a few studies have been carried out. Therefore, several methodological aspects still need to be investigated and clarified, especially regarding water quality modelling. The use of the Bayesian approach for uncertainty analysis has been stimulated by its rigorous theoretical framework and by the possibility of evaluating the impact of new knowledge on the modelling predictions. Nevertheless, the Bayesian approach relies on some restrictive hypotheses that are not present in less formal methods like the Generalised Likelihood Uncertainty Estimation (GLUE). One crucial point in the application of the Bayesian method is the formulation of a likelihood function that is conditioned by the hypotheses made regarding model residuals. Statistical transformations, such as the use of the Box-Cox equation, are generally used to ensure the homoscedasticity of residuals. However, this practice may affect the reliability of the analysis, leading to a wrong uncertainty estimation. The present paper aims to explore the influence of the Box-Cox equation for environmental water quality models. To this end, five cases were considered, one of which was the "real" residuals distribution (i.e. drawn from available data). The analysis was applied to the Nocella experimental catchment (Italy), which is an agricultural and semi-urbanised basin where two sewer systems, two wastewater treatment plants and a river reach were monitored during both dry and wet weather periods. The results show that the uncertainty estimation is greatly affected by residual transformation, and a wrong assumption may also affect the evaluation of model uncertainty. The use of less formal methods always provides an overestimation of modelling uncertainty with respect to the Bayesian method, but this effect is reduced if a wrong assumption is made regarding the
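The role of the transformation can be illustrated with a hypothetical heteroscedastic residual series: in the original space the residual spread grows with the simulated value, while after a Box-Cox transform with λ = 0 (the log case) it is approximately constant. The synthetic data below are not from the Nocella catchment.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform, commonly used to stabilise residual variance."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1) / lam

rng = np.random.default_rng(2)
# Synthetic heteroscedastic setup: multiplicative observation errors, so the
# raw residual standard deviation grows with the simulated value
sim = np.linspace(1, 10, 500)
obs = sim * np.exp(rng.normal(0, 0.1, sim.size))

res_raw = obs - sim                           # raw residuals
res_bc = boxcox(obs, 0.0) - boxcox(sim, 0.0)  # residuals in log space

# Compare residual spread for small vs large simulated values:
# the raw ratio is well above 1, the transformed ratio is near 1
lo, hi = sim < 5, sim >= 5
print(round(res_raw[hi].std() / res_raw[lo].std(), 2),
      round(res_bc[hi].std() / res_bc[lo].std(), 2))
```

This is exactly the homoscedasticity hypothesis that the likelihood function encodes; if the chosen λ does not match the true error structure, the resulting uncertainty bands are miscalibrated.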
Milne, Alice E.; Glendining, Margaret J.; Bellamy, Pat; Misselbrook, Tom; Gilhespy, Sarah; Rivas Casado, Monica; Hulin, Adele; van Oijen, Marcel; Whitmore, Andrew P.
2014-01-01
The UK's greenhouse gas inventory for agriculture uses a model based on the IPCC Tier 1 and Tier 2 methods to estimate the emissions of methane and nitrous oxide from agriculture. The inventory calculations are disaggregated at country level (England, Wales, Scotland and Northern Ireland). Before now, no detailed assessment of the uncertainties in the estimates of emissions had been done. We used Monte Carlo simulation to do such an analysis. We collated information on the uncertainties of each of the model inputs. The uncertainties propagate through the model and result in uncertainties in the estimated emissions. Using a sensitivity analysis, we found that in England and Scotland the uncertainty in the emission factor for emissions from N inputs (EF1) affected uncertainty the most, but that in Wales and Northern Ireland, the emission factor for N leaching and runoff (EF5) had greater influence. We showed that if the uncertainty in any one of these emission factors is reduced by 50%, the uncertainty in emissions of nitrous oxide reduces by 10%. The uncertainty in the estimate for the emissions of methane emission factors for enteric fermentation in cows and sheep most affected the uncertainty in methane emissions. When inventories are disaggregated (as that for the UK is) correlation between separate instances of each emission factor will affect the uncertainty in emissions. As more countries move towards inventory models with disaggregation, it is important that the IPCC give firm guidance on this topic.
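A minimal Monte Carlo sketch of this kind of inventory uncertainty propagation, with an IPCC-style direct-plus-leaching model N2O-N = N · EF1 + N · Frac_leach · EF5 and assumed lognormal emission-factor uncertainties; none of the numbers are the UK inventory's actual values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Illustrative activity data
n_input = 100.0   # kt N applied to soils (assumed)
frac_leach = 0.3  # fraction of N leached/run off (assumed)

def lognorm(median, rel_sd, size):
    """Lognormal samples with the given median and log-space sd."""
    return median * np.exp(rng.normal(0, rel_sd, size))

ef1 = lognorm(0.01, 0.5, n)    # direct emission factor (assumed uncertainty)
ef5 = lognorm(0.0075, 0.6, n)  # leaching/runoff emission factor (assumed)
n2o = n_input * ef1 + n_input * frac_leach * ef5

# Propagated uncertainty in the emission estimate, as a 95% interval
ci = np.percentile(n2o, [2.5, 97.5])
print(round(n2o.mean(), 2), np.round(ci, 2))
```

Repeating the run with the sd of EF1 halved shows directly how much of the output interval that one factor controls, which is the sensitivity-analysis step the abstract describes.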
Boumans, M.
2013-01-01
This article proposes a more objective Type B evaluation. This can be achieved when Type B uncertainty evaluations are model-based. This implies, however, grey-box modelling and validation instead of white-box modelling and validation which are appropriate for Type A evaluation.
Improving uncertainty estimation in urban hydrological modeling by statistically describing bias
Directory of Open Access Journals (Sweden)
D. Del Giudice
2013-10-01
Full Text Available Hydrodynamic models are useful tools for urban water management. Unfortunately, it is still challenging to obtain accurate results and plausible uncertainty estimates when using these models. In particular, with the currently applied statistical techniques, flow predictions are usually overconfident and biased. In this study, we present a flexible and relatively efficient methodology (i) to obtain more reliable hydrological simulations in terms of coverage of validation data by the uncertainty bands and (ii) to separate prediction uncertainty into its components. Our approach acknowledges that urban drainage predictions are biased. This is mostly due to input errors and structural deficits of the model. We address this issue by describing model bias in a Bayesian framework. The bias becomes an autoregressive term additional to white measurement noise, the only error type accounted for in traditional uncertainty analysis. To allow for bigger discrepancies during wet weather, we make the variance of bias dependent on the input (rainfall) and/or the output (runoff) of the system. Specifically, we present a structured approach to select, among five variants, the optimal bias description for a given urban or natural case study. We tested the methodology in a small monitored stormwater system described with a parsimonious model. Our results clearly show that flow simulations are much more reliable when bias is accounted for than when it is neglected. Furthermore, our probabilistic predictions can discriminate between three uncertainty contributions: parametric uncertainty, bias, and measurement errors. In our case study, the best performing bias description is the output-dependent bias using a log-sinh transformation of data and model results. The limitations of the framework presented are some ambiguity due to the subjective choice of priors for bias parameters and its inability to address the causes of model discrepancies. Further research should focus on
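The error model can be sketched as follows: observations equal model output plus an AR(1) bias term plus white noise, with the bias innovation standard deviation made proportional to the simulated output (the output-dependent variant). The hydrograph and all parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Observed flow = model output + AR(1) bias + white measurement noise
T = 200
q_model = 1.0 + np.sin(np.linspace(0, 6 * np.pi, T)) ** 2  # synthetic hydrograph
phi = 0.9                   # bias autocorrelation (assumed)
sigma_b = 0.05 * q_model    # output-dependent bias innovation sd (assumed)
sigma_e = 0.02              # white measurement noise sd (assumed)

bias = np.zeros(T)
for t in range(1, T):
    bias[t] = phi * bias[t - 1] + rng.normal(0, sigma_b[t])
obs = q_model + bias + rng.normal(0, sigma_e, T)

# The total residual spread is dominated by the correlated bias, not the
# white noise -- the discrepancy traditional analysis would misattribute
print(round(np.std(obs - q_model), 3), sigma_e)
```

In the Bayesian framework the parameters `phi` and the bias variance coefficients are inferred jointly with the hydrological model parameters, which is what lets the method separate parametric uncertainty, bias, and measurement error.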
Energy Technology Data Exchange (ETDEWEB)
Donald M. McEligot; Hugh M. McIlroy, Jr.; Ryan C. Johnson
2007-11-01
The purpose of the fluid dynamics experiments in the MIR (Matched-Index-of-Refraction) flow system at Idaho National Laboratory (INL) is to develop benchmark databases for the assessment of Computational Fluid Dynamics (CFD) solutions of the momentum equations, scalar mixing, and turbulence models for typical Very High Temperature Reactor (VHTR) plenum geometries in the limiting case of negligible buoyancy and constant fluid properties. The experiments use optical techniques, primarily particle image velocimetry (PIV), in the INL MIR flow system. The benefit of the MIR technique is that it permits optical measurements to determine flow characteristics in passages and around objects without locating a disturbing transducer in the flow field and without distortion of the optical paths. The objective of the present report is to develop an understanding of the magnitudes of experimental uncertainties in the results to be obtained in such experiments. Unheated MIR experiments are first steps when the geometry is complicated; one does not want to use a computational technique that cannot even handle constant properties properly. This report addresses the general background, requirements for benchmark databases, estimation of experimental uncertainties in mean velocities and turbulence quantities, the MIR experiment, PIV uncertainties, positioning uncertainties, and other contributing measurement uncertainties.
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
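A one-parameter sketch of stochastic collocation: for a Gaussian input, statistical moments of a nonlinear model can be estimated from a handful of Gauss-Hermite collocation points and compared against plain Monte Carlo. The model y = exp(x) is illustrative, not the radiation-diffusion problem.

```python
import numpy as np

# Stochastic collocation with probabilists' Gauss-Hermite quadrature:
# estimate E[exp(X)] for X ~ N(0, 0.5^2) from 5 collocation points.
mu, sd = 0.0, 0.5
nodes, weights = np.polynomial.hermite_e.hermegauss(5)  # weight exp(-x^2/2)
x_colloc = mu + sd * nodes
sc_mean = np.sum(weights * np.exp(x_colloc)) / np.sqrt(2 * np.pi)

# Plain Monte Carlo for comparison
rng = np.random.default_rng(9)
mc_mean = np.exp(rng.normal(mu, sd, 100_000)).mean()

exact = np.exp(sd**2 / 2)  # lognormal mean, for reference
print(round(sc_mean, 4), round(mc_mean, 4), round(exact, 4))
```

Five model evaluations recover the mean to high accuracy, whereas Monte Carlo needs many thousands; this efficiency gap is what the article reports shrinking as the number of uncertain parameters grows, since tensor collocation grids grow exponentially with dimension.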
Estimation of the measurement uncertainty of methamphetamine and amphetamine in hair analysis.
Lee, Sooyeun; Park, Yonghoon; Yang, Wonkyung; Han, Eunyoung; Choe, Sanggil; Lim, Miae; Chung, Heesun
2009-03-10
The measurement uncertainties (MUs) were estimated for the determination of methamphetamine (MA) and its main metabolite, amphetamine (AP), at low concentrations (around the cut-off value of MA) in human hair according to the recommendations of the EURACHEM/CITAC Guide and the "Guide to the expression of uncertainty in measurement (GUM)". MA and AP were extracted by agitating hair with 1% HCl in methanol, followed by derivatization and quantification using GC-MS. The major components contributing to their uncertainties were the amount of MA or AP in the test sample, the weight of the test sample and the method precision, based on the equation used to calculate the measurand from intermediate values. Consequently, the concentrations of MA and AP in the hair sample, with their expanded uncertainties, were 0.66 ± 0.05 and 1.01 ± 0.06 ng/mg, respectively, which were acceptable to support the successful application of the analytical method. The method precision and the weight of the hair sample gave the largest contributions to the overall combined uncertainties of MA and AP in each case.
Evaluation of Sources of Uncertainties in Solar Resource Measurement
Energy Technology Data Exchange (ETDEWEB)
Habte, Aron M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sengupta, Manajit [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-09-25
This poster presents a high-level overview of sources of uncertainties in solar resource measurement, demonstrating the impact of various sources of uncertainties -- such as cosine response, thermal offset, spectral response, and others -- on the accuracy of data from several radiometers. The study provides insight on how to reduce the impact of some of the sources of uncertainties.
New product development projects evaluation under time uncertainty
Directory of Open Access Journals (Sweden)
Thiago Augusto de Oliveira Silva
2009-12-01
Full Text Available The development time is one of the key factors that contribute to new product development success. In spite of that, the impact of time uncertainty on development has not been fully explored, as far as decision-support models for evaluating this kind of project are concerned. In this context, the objective of the present paper is to evaluate the development process of new technologies under time uncertainty. We introduce a model which captures this source of uncertainty and develop an algorithm to evaluate projects that incorporates Monte Carlo Simulation and Dynamic Programming. The novelty in our approach is to thoroughly blend the stochastic time with a formal approach to the problem, which preserves the Markov property. We base our model on the distinction between the decision epoch and the stochastic time. We discuss and illustrate the applicability of our model through an empirical example.
Gourley, J. J.; Kirstetter, P.; Hong, Y.; Hardy, J.; Flamig, Z.
2013-12-01
This study presents a methodology to account for uncertainty in radar-based rainfall rate estimation using NOAA/NSSL's Multi-Radar Multisensor (MRMS) products. The focus of the study is on flood forecasting, including flash floods, in ungauged catchments throughout the conterminous US. An error model is used to derive probability distributions of rainfall rates that explicitly accounts for rain typology and uncertainty in the reflectivity-to-rainfall relationships. This approach preserves the fine space/time sampling properties (2 min/1 km) of the radar and conditions probabilistic quantitative precipitation estimates (PQPE) on the rain rate and rainfall type. Uncertainty in rainfall amplitude is the primary factor that is accounted for in the PQPE development. Additional uncertainties due to rainfall structures, locations, and timing must be considered when using quantitative precipitation forecast (QPF) products as forcing to a hydrologic model. A new method will be presented that shows how QPF ensembles are used in a hydrologic modeling context to derive probabilistic flood forecast products. This method considers the forecast rainfall intensity and morphology superimposed on pre-existing hydrologic conditions to identify basin scales that are most at risk.
Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix
2017-04-01
It is important to emphasize the role of uncertainty, particularly when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of the predictive uncertainty in hydrological modeling. The first approach is the Bayesian Joint Inference of hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. This comparison is focused on the predictive distribution reliability. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, generally, both approaches are able to provide similar predictive performances. However, differences between them can arise in basins with complex hydrology (e.g. ephemeral basins). This is because the results obtained with Bayesian Joint Inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results in the case of the Model Conditional Processor are mainly influenced by the selected model of tails, or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a proper combination of both methodologies, which could be useful for achieving less biased hydrological parameter estimation. In this approach, the predictive distribution is first obtained through the Model Conditional Processor; this predictive distribution is then used to derive the corresponding additive error model, which is employed for the hydrological parameter
Methodology for the evaluation of innovative projects under risk and uncertainty
Directory of Open Access Journals (Sweden)
2012-09-01
Full Text Available This article addresses problems connected with the assessment of innovative projects under risk and uncertainty, and topical issues in the evaluation of innovative projects at the present stage of development of the Russian economy. Using the example of the "crossing the river" problem, the possibility of applying hierarchical models to its solution is considered. The priorities of different groups of factors are then compared by calculating overall costs and benefits. The paper provides a rationale for the combined use of four aspects of a decision: the beneficial aspects (benefits and opportunities) and the negative aspects (costs and risks) that may bear on the decision in question.
Briggs, Andrew H; Weinstein, Milton C; Fenwick, Elisabeth A L; Karnon, Jonathan; Sculpher, Mark J; Paltiel, A David
2012-01-01
A model's purpose is to inform medical decisions and health care resource allocation. Modelers employ quantitative methods to structure the clinical, epidemiological, and economic evidence base and gain qualitative insight to assist decision makers in making better decisions. From a policy perspective, the value of a model-based analysis lies not simply in its ability to generate a precise point estimate for a specific outcome but also in the systematic examination and responsible reporting of uncertainty surrounding this outcome and the ultimate decision being addressed. Different concepts relating to uncertainty in decision modeling are explored. Stochastic (first-order) uncertainty is distinguished from both parameter (second-order) uncertainty and from heterogeneity, with structural uncertainty relating to the model itself forming another level of uncertainty to consider. The article argues that the estimation of point estimates and uncertainty in parameters is part of a single process and explores the link between parameter uncertainty through to decision uncertainty and the relationship to value of information analysis. The article also makes extensive recommendations around the reporting of uncertainty, in terms of both deterministic sensitivity analysis techniques and probabilistic methods. Expected value of perfect information is argued to be the most appropriate presentational technique, alongside cost-effectiveness acceptability curves, for representing decision uncertainty from probabilistic analysis. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
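The link the article draws from parameter (second-order) uncertainty to decision uncertainty and the expected value of perfect information (EVPI) can be sketched with a toy probabilistic sensitivity analysis; the strategies, payoffs, and distributions below are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Parameter (second-order) uncertainty: an uncertain effectiveness
# parameter theta drives the net benefit of strategy A (toy values).
theta = rng.normal(0.6, 0.1, n)
nb_a = 100_000 * theta - 30_000        # net benefit of strategy A per draw
nb_b = np.full(n, 30_000.0)            # strategy B: certain net benefit

# Decision under current information: choose the higher expected net benefit.
ev_current = max(nb_a.mean(), nb_b.mean())

# With perfect information we could choose the best strategy in each draw.
ev_perfect = np.maximum(nb_a, nb_b).mean()

evpi = ev_perfect - ev_current         # expected value of perfect information
print(f"EVPI per decision: {evpi:.0f}")
```

EVPI is largest when the optimal strategy flips across plausible parameter draws; here the two strategies tie in expectation, so perfect information is worth roughly 4,000 per decision in these toy units.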
Inferring uncertainty from interval estimates: Effects of alpha level and numeracy
Directory of Open Access Journals (Sweden)
Luke F. Rinne
2013-05-01
Full Text Available Interval estimates are commonly used to descriptively communicate the degree of uncertainty in numerical values. Conventionally, low alpha levels (e.g., .05) ensure a high probability of capturing the target value between interval endpoints. Here, we test whether alpha levels and individual differences in numeracy influence distributional inferences. In the reported experiment, participants received prediction intervals for fictitious towns' annual rainfall totals (assuming approximately normal distributions). Then, participants estimated probabilities that future totals would be captured within varying margins about the mean, indicating the approximate shapes of their inferred probability distributions. Results showed that low alpha levels (vs. moderate levels, e.g., .25) more frequently led to inferences of over-dispersed approximately normal distributions or approximately uniform distributions, reducing estimate accuracy. Highly numerate participants made more accurate estimates overall, but were more prone to inferring approximately uniform distributions. These findings have important implications for presenting interval estimates to various audiences.
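Under the normality assumption used in the experiment, the distribution a participant should infer from an interval follows directly from its alpha level: the implied standard deviation is the half-width divided by the normal quantile for that level. A small illustration (the town's interval endpoints and margins are made-up numbers):

```python
from statistics import NormalDist

def implied_sd(lo, hi, alpha):
    """Assuming normality, recover the distribution's sd from a
    central (1 - alpha) prediction interval [lo, hi]."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return (hi - lo) / (2 * z)

def capture_prob(margin, sd):
    """P(|X - mean| <= margin) for X ~ Normal(mean, sd)."""
    return 2 * NormalDist().cdf(margin / sd) - 1

# Hypothetical rainfall example: endpoints [30, 50] inches.
sd95 = implied_sd(30, 50, alpha=0.05)   # ~5.1 if read as a 95% interval
sd75 = implied_sd(30, 50, alpha=0.25)   # ~8.7 if read as a 75% interval
print(capture_prob(5, sd95), capture_prob(5, sd75))
```

The same endpoints imply a much wider distribution at the moderate alpha level, which is exactly the distinction the experiment asks participants to track.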
Zhu, Tianqi; Dos Reis, Mario; Yang, Ziheng
2015-03-01
Genetic sequence data provide information about the distances between species or branch lengths in a phylogeny, but not about the absolute divergence times or the evolutionary rates directly. Bayesian methods for dating species divergences estimate times and rates by assigning priors on them. In particular, the prior on times (node ages on the phylogeny) incorporates information in the fossil record to calibrate the molecular tree. Because times and rates are confounded, our posterior time estimates will not approach point values even if an infinite amount of sequence data is used in the analysis. In a previous study we developed a finite-sites theory to characterize the uncertainty in Bayesian divergence time estimation in analysis of large but finite sequence data sets under a strict molecular clock. As most modern clock dating analyses use more than one locus and are conducted under relaxed clock models, here we extend the theory to the case of relaxed clock analysis of data from multiple loci (site partitions). Uncertainty in posterior time estimates is partitioned into three sources: sampling errors in the estimates of branch lengths in the tree for each locus due to limited sequence length, variation of substitution rates among lineages and among loci, and uncertainty in fossil calibrations. Using a simple but analogous estimation problem involving the multivariate normal distribution, we predict that as the number of loci (L) goes to infinity, the variance in posterior time estimates decreases and approaches the infinite-data limit at the rate of 1/L, and the limit is independent of the number of sites in the sequence alignment. We then confirmed the predictions by using computer simulation on phylogenies of two or three species, and by analyzing a real genomic data set for six primate species. Our results suggest that with the fossil calibrations fixed, analyzing multiple loci or site partitions is the most effective way
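The predicted 1/L decay of posterior variance toward a calibration-set floor (writing L for the number of loci) can be illustrated with a deliberately simplified stand-in: independent per-locus noise that averages out, plus an irreducible calibration term. This is a toy analogy under invented noise levels, not the paper's relaxed-clock model:

```python
import numpy as np

rng = np.random.default_rng(1)

def posterior_time_var(L, n_rep=40_000):
    """Toy analogue: per-locus noise averages out as 1/L, while fossil
    calibration uncertainty sets an irreducible infinite-data floor."""
    calib = rng.normal(0.0, 0.05, n_rep)                   # does not shrink with L
    locus = rng.normal(0.0, 0.2, (n_rep, L)).mean(axis=1)  # shrinks as 1/sqrt(L)
    return np.var(calib + locus)

v = {L: posterior_time_var(L) for L in (1, 10, 100)}
print(v)   # approaches the floor 0.05**2 = 0.0025 as L grows
```

The variance in this sketch is 0.05² + 0.2²/L, so the excess over the infinite-data limit drops by a factor of 10 for each tenfold increase in L, mirroring the paper's prediction.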
Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki
2017-08-01
Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources, by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis. Varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities over the different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for an unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach as high as 37.2 K in unfavorable conditions, whereas calculations assuming full correlation give zero uncertainty, and calculations assuming no correlations yield expanded uncertainties of 5.6 K and 12.1 K with wavelength steps of 1 nm and 5 nm used in the spectral integrations, respectively. We also show that there is an absolute limit of 60.2 K on the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method. The revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.
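The core of the Monte Carlo idea (perturb the spectrum under a correlation scenario, recompute the colorimetric quantity, take the standard deviation) can be sketched for a toy chromaticity-like ratio. The spectra, weighting functions, and 1% uncertainty below are illustrative assumptions, not the paper's CIE computations; note how a fully correlated scale factor cancels in a ratio-type quantity, just as it yields zero uncertainty for CCT in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)
wl = np.arange(380.0, 781.0, 5.0)                  # wavelength grid, nm
spec = np.exp(-0.5 * ((wl - 560.0) / 80.0) ** 2)   # toy spectral irradiance
wa = np.exp(-0.5 * ((wl - 600.0) / 30.0) ** 2)     # toy weighting function a
wb = np.exp(-0.5 * ((wl - 450.0) / 30.0) ** 2)     # toy weighting function b
u = 0.01                                           # 1% standard uncertainty

def chroma(s):
    """A ratio-type quantity, insensitive to a common scale factor."""
    a, b = np.sum(s * wa), np.sum(s * wb)
    return a / (a + b)

n = 3000
uncorr = np.std([chroma(spec * (1 + u * rng.standard_normal(wl.size)))
                 for _ in range(n)])               # independent noise per wavelength
corr = np.std([chroma(spec * (1 + u * rng.standard_normal()))
               for _ in range(n)])                 # whole spectrum moves together
print(uncorr, corr)   # full correlation cancels exactly in the ratio
```

The paper's unfavorable-case scenarios lie between these two extremes, constructed from spectral base functions rather than the white noise used here.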
Villarini, Gabriele; Krajewski, Witold F.
2010-01-01
It is well acknowledged that there are large uncertainties associated with radar-based estimates of rainfall. These errors stem from numerous sources, including parameter estimation, the observational system and its measurement principles, and physical processes that are not fully understood. Propagating these uncertainties through all models for which radar-rainfall estimates are used as input (e.g., hydrologic models) or as initial conditions (e.g., weather forecasting models) is necessary to enhance the understanding and interpretation of the obtained results. The aim of this paper is to provide an extensive literature review of the principal sources of error affecting single-polarization radar-based rainfall estimates. These include radar miscalibration, attenuation, ground clutter and anomalous propagation, beam blockage, variability of the Z-R relation, range degradation, vertical variability of the precipitation system, vertical air motion and precipitation drift, and temporal sampling errors. Finally, the authors report some recent results from empirically based modeling of the total radar-rainfall uncertainties. The bibliography comprises over 200 peer-reviewed journal articles.
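One of the listed error sources, variability of the Z-R relation, is easy to demonstrate: the same radar echo converts to quite different rain rates under different published parameterizations. A minimal sketch (which coefficients suit a given site and storm type is itself part of the uncertainty):

```python
def rain_rate(dBZ, a=200.0, b=1.6):
    """Invert a power-law reflectivity/rain-rate relation Z = a * R**b.
    a=200, b=1.6 is the classic Marshall-Palmer relation."""
    Z = 10.0 ** (dBZ / 10.0)       # reflectivity factor, mm^6 m^-3
    return (Z / a) ** (1.0 / b)    # rain rate, mm/h

echo = 40.0  # dBZ
print(rain_rate(echo))                  # ~11.5 mm/h (Marshall-Palmer)
print(rain_rate(echo, a=250.0, b=1.2))  # ~21.6 mm/h (a tropical-type relation)
```

A factor-of-two spread in rain rate from one echo, before any of the other error sources in the review are considered, illustrates why empirically based modeling of the total uncertainty is needed.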
Peña, Carlos; Espeland, Marianne
2015-01-01
The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.
Estimating Uncertainty in Long Term Total Ozone Records from Multiple Sources
Frith, Stacey M.; Stolarski, Richard S.; Kramarova, Natalya; McPeters, Richard D.
2014-01-01
Total ozone measurements derived from the TOMS and SBUV backscattered solar UV instrument series cover the period from late 1978 to the present. As the SBUV series of instruments comes to an end, we look to the 10 years of data from the AURA Ozone Monitoring Instrument (OMI) and two years of data from the Ozone Mapping Profiler Suite (OMPS) on board the Suomi National Polar-orbiting Partnership satellite to continue the record. When combining these records to construct a single long-term data set for analysis we must estimate the uncertainty in the record resulting from potential biases and drifts in the individual measurement records. In this study we present a Monte Carlo analysis used to estimate uncertainties in the Merged Ozone Dataset (MOD), constructed from the Version 8.6 SBUV2 series of instruments. We extend this analysis to incorporate OMI and OMPS total ozone data into the record and investigate the impact of multiple overlapping measurements on the estimated error. We also present an updated column ozone trend analysis and compare the size of statistical error (error from variability not explained by our linear regression model) to that from instrument uncertainty.
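The Monte Carlo approach to record-merging uncertainty can be sketched as follows: draw a plausible calibration offset and drift for each instrument, rebuild the merged record, refit the trend, and take the spread of fitted trends as the instrument-related error. All record lengths, overlaps, and 1-sigma values here are invented for illustration, not MOD/SBUV specifics:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(30.0)                        # years of merged record
truth = 300.0 - 0.5 * t                    # "true" column ozone, DU (toy)

def merged_trend():
    """One Monte Carlo realization of the merged record's fitted trend."""
    rec = np.full_like(t, np.nan)
    # Two instruments with an overlap; each draws an unknown offset (DU)
    # and drift (DU/yr) within assumed 1-sigma calibration limits.
    for lo, hi in [(0, 18), (15, 30)]:
        seg = (truth[lo:hi] + rng.normal(0.0, 1.0)
               + rng.normal(0.0, 0.05) * (t[lo:hi] - t[lo]))
        rec[lo:hi] = np.where(np.isnan(rec[lo:hi]), seg,
                              0.5 * (rec[lo:hi] + seg))  # average the overlap
    return np.polyfit(t, rec, 1)[0]

trends = [merged_trend() for _ in range(2000)]
print(np.mean(trends), np.std(trends))     # spread = instrument uncertainty
```

Adding further overlapping instruments, as the study does with OMI and OMPS, changes how the per-instrument offsets and drifts project onto the fitted trend.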
Estimating uncertainty and change in flood frequency for an urban Swedish catchment
Westerberg, I.; Persson, T.
2013-12-01
Floods are extreme events that occur rarely, which means that relatively few data on weather and flow conditions during flooding episodes are available for estimating flood frequency, and that such estimates are necessarily uncertain. Even less data are available for estimating changes in flood frequency resulting from changes in land use, climate or the morphometry of the watercourse. In this study we used a combination of monitoring and modelling to overcome the lack of reliable discharge data, allowing us to characterise the flooding problems in the highly urbanised Riseberga Creek catchment in eastern Malmö, Sweden, and to investigate how they might change in the future. The study is part of the GreenClimeAdapt project, in which local stakeholders and researchers work on finding and demonstrating solutions to the present flooding problems in the catchment, as well as adaptation to future change. A high-resolution acoustic Doppler discharge gauge was installed in the creek, and a hydrologic model was set up to extend this short record for estimation of flood frequency. Discharge uncertainty was estimated from a stage-discharge analysis and accounted for in model calibration together with uncertainties in the model parameterisation. The model was first used to study the flow variability during the 16 years with available climate input data. It was then driven with long-term climate realisations from a statistical weather generator to estimate flood frequency for the present climate and for future climate and land-use scenarios through continuous simulation. The uncertainty in the modelled flood frequency for the present climate was found to be important; it could partly be reduced in the future using longer monitoring records containing additional and higher flood episodes. The climate and land-use change scenarios are mainly useful for sensitivity analysis of different adaptation measures that can be taken to reduce the flooding problems, for which
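Estimating flood frequency by continuous simulation reduces, at its core, to extracting annual maxima from a long simulated series and assigning empirical return periods. A bare-bones sketch, with synthetic flows standing in for weather-generator-driven model output:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for 1000 years of simulated annual maximum flows (m3/s);
# a calibrated hydrologic model run would supply these in practice.
annual_max = rng.gumbel(loc=10.0, scale=3.0, size=1000)

flows = np.sort(annual_max)[::-1]            # descending
ranks = np.arange(1, flows.size + 1)
return_period = (flows.size + 1) / ranks     # Weibull plotting positions

# Simulated magnitude of the ~100-year flood
q100 = flows[np.argmin(np.abs(return_period - 100.0))]
print(q100)
```

Repeating this over parameter sets sampled within their uncertainty bounds, as the study does, turns the single q100 value into a distribution that expresses the flood-frequency uncertainty.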
Ramanjooloo, Yudish; Tholen, David J.; Fohring, Dora; Claytor, Zach; Hung, Denise
2017-10-01
The asteroid community is moving towards the implementation of a new astrometric reporting format, which will finally include complementary astrometric uncertainties in the reported observations. The availability of uncertainties will allow ephemeris predictions and orbit solutions to be constrained with greater reliability, thereby improving the efficiency of the community's follow-up and recovery efforts. Our current uncertainty model comprises the uncertainties in centroiding on the trailed stars and the asteroid, and the uncertainty due to the astrometric solution. The accuracy of our astrometric measurements depends on how well we can minimise the offset between the spatial and temporal centroids of the stars and the asteroid. This offset is currently unmodelled and can be caused by variations in cloud transparency, seeing and tracking inconsistencies. The magnitude zero point of the image, which is affected by fluctuating weather conditions and the catalog bias in the photometric magnitudes, can serve as an indicator of the presence and thickness of clouds. Through comparison of the astrometric uncertainties with the orbit-solution residuals, it became apparent that a component of the error analysis remained unaccounted for, arising from cloud coverage and thickness, telescope tracking inconsistencies and variable seeing. This work will attempt to quantify the tracking-inconsistency component. We have acquired a rich dataset with the University of Hawaii 2.24 metre telescope (UH-88 inch) that is well positioned to support an empirical estimate of the tracking-inconsistency component. This work is funded by NASA grant NXX13AI64G.
Sherriff, Sophie; Rowan, John; Franks, Stewart; Walden, John; Melland, Alice; Jordan, Phil; Fenton, Owen; hUallacháin, Daire Ó.
2014-05-01
Sediment fingerprinting techniques are being applied more frequently to inform soil and water management issues. Identification of sediment source areas and assessment of their relative contributions are essential for targeting cost-effective mitigation strategies. Sediment fingerprinting utilises natural sediment properties (e.g. chemical, magnetic, radiometric) to trace the contributions from different source areas by 'unmixing' a catchment outlet sample back to its constituent sources. Early qualitative approaches have been superseded by quantitative methodologies using multiple (composite) tracers coupled with linear programming. Despite the inclusion of fingerprinting results in environmental management strategies, the techniques are subject to potentially significant uncertainties. Intra-source heterogeneity, although widely recognised as a source of uncertainty, is difficult to address, particularly in large study catchments or where source collection is restricted. Inadequate characterisation may translate significant uncertainties to a group fingerprint and onward to contribution estimates. Franks and Rowan (2000) developed an uncertainty-inclusive un-mixing model (FR2000+) based on Bayesian Monte Carlo methods. Source-area contributions are reported with confidence intervals which incorporate sampling and un-mixing uncertainties, so the impact of uncertainty on the reliability of predictions can be considered. The aim of this study is to determine the impact of source-area sampling resolution and spatial complexity on source-area contribution estimates and their relative uncertainty envelope. High-resolution source-area sampling was conducted in a 10 km² intensive grassland catchment in Co. Wexford, Ireland, according to potential field and non-field sources. Seven potential source areas were sampled: channel banks (n=55), road verges (n=44), topsoils (n=35), subsoils (n=32), tracks (n=6), drains (n=2) and eroding ditches (n=5)
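The deterministic core of sediment un-mixing is a small constrained linear system: find source proportions whose mixture of fingerprint values reproduces the outlet sample. (The Bayesian Monte Carlo machinery of FR2000+ then wraps this in repeated sampling of tracer uncertainty.) A toy three-source, three-tracer sketch with invented fingerprint values:

```python
import numpy as np

# Rows = tracers, columns = sources (hypothetical mean fingerprints).
S = np.array([[12.0,  4.0,  8.0],
              [ 0.5,  2.0,  1.1],
              [30.0, 90.0, 55.0]])
outlet = np.array([8.4, 1.10, 54.5])   # tracer values of the outlet sample

# Append the sum-to-one constraint as an extra equation and solve by
# least squares (nonnegativity would need e.g. a constrained solver).
A = np.vstack([S, np.ones(3)])
b = np.append(outlet, 1.0)
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)   # estimated source proportions
```

In an uncertainty-inclusive framework, the source fingerprints in S are drawn repeatedly from their within-source distributions, and the spread of the resulting proportion vectors gives the confidence intervals on each contribution.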
Xia, Youlong; Sen, Mrinal K.; Jackson, Charles S.; Stoffa, Paul L.
2004-10-01
This study evaluates the ability of Bayesian stochastic inversion (BSI) and multicriteria (MC) methods to search for the optimal parameter sets of the Chameleon Surface Model (CHASM) using prescribed forcing to simulate observed sensible and latent heat fluxes from seven measurement sites representative of six biomes including temperate coniferous forests, tropical forests, temperate and tropical grasslands, temperate crops, and semiarid grasslands. Calibration results with the BSI and MC show that estimated optimal values are very similar for the important parameters that are specific to the CHASM model. The model simulations based on estimated optimal parameter sets perform much better than the default parameter sets. Cross-validations for two tropical forest sites show that the calibrated parameters for one site can be transferred to another site within the same biome. The uncertainties of optimal parameters are obtained through BSI, which estimates a multidimensional posterior probability density function (PPD). Marginal PPD analyses show that nonoptimal choices of stomatal resistance would contribute most to model simulation errors at all sites, followed by ground and vegetation roughness length at six of seven sites. The impact of initial root-zone soil moisture and nonmosaic approach on estimation of optimal parameters and their uncertainties is discussed.
The impact of a and b value uncertainty on loss estimation in the reinsurance industry
Directory of Open Access Journals (Sweden)
R. Streit
2000-06-01
Full Text Available In the reinsurance industry different probabilistic models are currently used for seismic risk analysis. A credible loss estimation of the insured values depends on seismic hazard analysis and on the vulnerability functions of the given structures. Besides attenuation and local soil amplification, the earthquake occurrence model (often represented by the Gutenberg and Richter relation) is a key element in the analysis. However, earthquake catalogues are usually incomplete, the time of observation is too short and the data themselves contain errors. Therefore, a and b values can only be estimated with uncertainties. The knowledge of their variation provides a valuable input for earthquake risk analysis, because it allows the probability distribution of expected losses (expressed as Average Annual Loss, AAL) to be modelled. The variations of a and b have a direct effect on the estimated exceedance probability and consequently on the calculated loss level. This effect is best illustrated by exceedance probability versus loss level and AAL versus magnitude graphs. The sensitivity of average annual losses to different a to b ratios and magnitudes is obvious. The estimation of the variation of a and b and the quantification of the sensitivity of calculated losses are fundamental for optimal earthquake risk management. Ignoring these uncertainties means that risk management decisions neglect possible variations of the earthquake loss estimations.
Estimation of Uncertainties in the Global Distance Test (GDT_TS) for CASP Models.
Li, Wenlin; Schaeffer, R Dustin; Otwinowski, Zbyszek; Grishin, Nick V
2016-01-01
The Critical Assessment of techniques for protein Structure Prediction (or CASP) is a community-wide blind test experiment to reveal the best accomplishments of structure modeling. Assessors have been using the Global Distance Test (GDT_TS) measure to quantify prediction performance since CASP3 in 1998. However, identifying significant score differences between close models is difficult because of the lack of uncertainty estimations for this measure. Here, we utilized the atomic fluctuations caused by structure flexibility to estimate the uncertainty of GDT_TS scores. Structures determined by nuclear magnetic resonance are deposited as ensembles of alternative conformers that reflect the structural flexibility, whereas standard X-ray refinement produces the static structure averaged over time and space for the dynamic ensembles. To recapitulate the structural heterogeneous ensemble in the crystal lattice, we performed time-averaged refinement for X-ray datasets to generate structural ensembles for our GDT_TS uncertainty analysis. Using those generated ensembles, our study demonstrates that the time-averaged refinements produced structure ensembles with better agreement with the experimental datasets than the averaged X-ray structures with B-factors. The uncertainty of the GDT_TS scores, quantified by their standard deviations (SDs), increases for scores lower than 50 and 70, with maximum SDs of 0.3 and 1.23 for X-ray and NMR structures, respectively. We also applied our procedure to the high accuracy version of GDT-based score and produced similar results with slightly higher SDs. To facilitate score comparisons by the community, we developed a user-friendly web server that produces structure ensembles for NMR and X-ray structures and is accessible at http://prodata.swmed.edu/SEnCS. Our work helps to identify the significance of GDT_TS score differences, as well as to provide structure ensembles for estimating SDs of any scores.
Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach
Ballal, Tarig
2014-01-01
This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.
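The co-primality condition can be checked directly: if each of the P pulses advances by N desired-rate periods while the ADC runs P times slower, the residues k*N mod P are the desired-rate phases actually observed, and they cover all P phases exactly when gcd(N, P) = 1. A small sketch (the values of P and N are arbitrary, not from the paper):

```python
from math import gcd

def observed_phases(P, N):
    """Desired-rate phase (mod P) sampled by each of the P pulses when
    consecutive pulses are N desired-rate periods apart and the ADC
    runs at 1/P of the desired rate."""
    return sorted(k * N % P for k in range(P))

P = 8
print(gcd(7, P), observed_phases(P, 7))  # co-prime: all 8 phases covered
print(gcd(6, P), observed_phases(P, 6))  # not co-prime: phases repeat
```

This is the fidelity condition the paper describes: clock drift that shifts pulse locations can break co-primality, motivating the BDU-based estimator that tolerates such perturbations.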
Towards rapid uncertainty estimation in linear finite fault inversion with positivity constraints
Benavente, R. F.; Cummins, P. R.; Sambridge, M.; Dettmer, J.
2015-12-01
Rapid estimation of the slip distribution for large earthquakes can assist greatly during the early phases of emergency response. These estimates can be used for rapid impact assessment and tsunami early warning. While model parameter uncertainties can be crucial for meaningful interpretation of such slip models, they are often ignored. Since the finite fault problem can be posed as a linear inverse problem (via the multiple time window method), an analytic expression for the posterior covariance matrix can be obtained, in principle. However, positivity constraints are often employed in practice, which breaks the assumption of a Gaussian posterior probability density function (PDF). To our knowledge, two solutions to this issue exist in the literature: (1) not using positivity constraints (which may lead to exotic slip patterns), or (2) using positivity constraints but applying Bayesian sampling for the posterior. The latter is computationally expensive and currently unsuitable for rapid inversion. In this work, we explore an alternative approach in which we realize positivity by imposing a prior such that the logs of the subfault scalar moments are smoothly distributed on the fault surface. This makes each scalar moment intrinsically non-negative while the posterior PDF can still be approximated as Gaussian. While the inversion is no longer linear, we show that the most probable solution can be found by iterative methods that are less computationally expensive than numerical sampling of the posterior. In addition, the posterior covariance matrix (which provides uncertainties) can be estimated from the most probable solution, using an analytic expression for the Hessian of the cost function. We study this approach for both synthetic and observed W-phase data, and the results suggest that a first-order estimate of the uncertainty in the slip model can be obtained, thereby aiding the interpretation of the slip distribution estimate.
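The log-parameterization trick can be sketched on a toy linear problem: with m = exp(x) and a Gaussian prior on x, positivity is automatic, the most probable model is found iteratively, and the Hessian at the optimum yields approximate uncertainties. The simple ridge prior and plain gradient descent below are stand-ins for the paper's spatially smooth prior and its iterative solver:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy linear finite-fault analogue: data d = G @ m, "slip" m must be >= 0.
n_data, n_par = 40, 10
G = rng.standard_normal((n_data, n_par))
m_true = np.exp(rng.normal(0.0, 0.2, n_par))          # positive model
d = G @ m_true + 0.01 * rng.standard_normal(n_data)

lam = 0.1                                             # prior strength on x
x = np.zeros(n_par)                                   # m = exp(x) = 1 initially
for _ in range(20_000):                               # plain gradient descent
    m = np.exp(x)
    r = d - G @ m
    grad = -(G.T @ r) * m + lam * x   # gradient of 0.5|r|^2 + 0.5*lam*|x|^2
    x -= 0.002 * grad

m_map = np.exp(x)                                     # most probable model
J = G * m_map                                         # Jacobian of G @ exp(x)
H = J.T @ J + lam * np.eye(n_par)                     # Gauss-Newton Hessian
sd = np.sqrt(np.diag(np.linalg.inv(H)))               # approx. posterior SDs of x
print(np.max(np.abs(m_map - m_true)), sd.max())
```

The inverse Hessian at the optimum plays the role of the analytic posterior covariance in the paper, giving uncertainties without sampling.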
Directory of Open Access Journals (Sweden)
N. N. Nekrasova
2016-01-01
Full Text Available Summary. This article proposes estimating the technological parameters of the mining and metallurgical industry (iron ore reserves), given fuzzy-set-valued data under uncertainty, using the balance-sheet and industrial methods of reserve calculation. Because modelling the processes of ore extraction involves equation parameters containing variables with different kinds of uncertainty, it is preferable to express all information in the single formal language of fuzzy set theory. Thus, a model is proposed for calculating and evaluating iron ore reserves by different methods, under uncertainty of the geological information, on the basis of the theory of fuzzy sets. Undefined values are interpreted as intentionally "fuzzy", since this approach corresponds to the real industrial situation more closely than interpreting such quantities as random. The probabilistic approach identifies uncertainty with randomness, whereas in practice the basic nature of the uncertainty in calculating iron ore reserves is vagueness. Under the proposed approach, each fuzzy parameter has a corresponding membership function, determined using a general algorithm, as the result of algebraic operations on arbitrary membership functions by the inverse numerical method. Because many models can describe the same production process by different methods (for example, the balance model or the industrial model) and under different assumptions, it is proposed to reconcile such models on the basis of a model for aggregating heterogeneous information. For matching this kind of information, generalizing it and adjusting the outcome parameters, it is expedient to use the apparatus of fuzzy set theory, which makes it possible to obtain quantitative characteristics of imprecisely specified parameters and make the
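Fuzzy parameters with membership functions can be manipulated through alpha-cuts: at each membership level one obtains a crisp interval, and interval arithmetic propagates it through the reserve formula. A minimal triangular-number sketch (the ore-body numbers are invented, and reserve = area x thickness is a simplification of the balance and industrial methods):

```python
def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at
    membership level alpha in [0, 1]."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

area = (95.0, 100.0, 108.0)       # ore-body area (toy units), fuzzy
thickness = (4.5, 5.0, 5.3)       # mean thickness, fuzzy

for alpha in (0.0, 0.5, 1.0):
    alo, ahi = alpha_cut(area, alpha)
    tlo, thi = alpha_cut(thickness, alpha)
    # Both intervals are positive, so endpoints multiply monotonically.
    print(alpha, (alo * tlo, ahi * thi))   # fuzzy reserve at this level
```

At alpha = 1 the interval collapses to the most plausible crisp value; at alpha = 0 it spans the full support, which is how imprecision (rather than randomness) is carried through the calculation.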
Influence of parameter estimation uncertainty in Kriging: Part 2 - Test and case study applications
Directory of Open Access Journals (Sweden)
E. Todini
2001-01-01
Full Text Available The theoretical approach introduced in Part 1 is applied to a numerical example and to the case of yearly average precipitation estimation over the Veneto Region in Italy. The proposed methodology was used to assess the effects of parameter estimation uncertainty on Kriging estimates and on their estimated error variance. The Maximum Likelihood (ML estimator proposed in Part 1, was applied to the zero mean deviations from yearly average precipitation over the Veneto Region in Italy, obtained after the elimination of a non-linear drift with elevation. Three different semi-variogram models were used, namely the exponential, the Gaussian and the modified spherical, and the relevant biases as well as the increases in variance have been assessed. A numerical example was also conducted to demonstrate how the procedure leads to unbiased estimates of the random functions. One hundred sets of 82 observations were generated by means of the exponential model on the basis of the parameter values identified for the Veneto Region rainfall problem and taken as characterising the true underlining process. The values of parameter and the consequent cross-validation errors, were estimated from each sample. The cross-validation errors were first computed in the classical way and then corrected with the procedure derived in Part 1. Both sets, original and corrected, were then tested, by means of the Likelihood ratio test, against the null hypothesis of deriving from a zero mean process with unknown covariance. The results of the experiment clearly show the effectiveness of the proposed approach. Keywords: yearly rainfall, maximum likelihood, Kriging, parameter estimation uncertainty
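The numerical experiment described above, generating realizations from an exponential covariance model and computing cross-validation errors, can be sketched roughly as follows. The station layout, the variogram parameters, and the simple-kriging (known-mean) leave-one-out formulation are all illustrative assumptions here, not the configuration actually used for the Veneto Region study:

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_cov(d, sill=1.0, corr_range=30.0):
    # Exponential model expressed as a covariance: C(d) = sill * exp(-d / range)
    return sill * np.exp(-d / corr_range)

# 82 hypothetical station locations in a 100 km x 100 km square
n = 82
xy = rng.uniform(0.0, 100.0, size=(n, 2))
dists = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
C = exp_cov(dists) + 1e-9 * np.eye(n)   # small jitter for numerical stability

# One realization of the zero-mean Gaussian random field
z = np.linalg.cholesky(C) @ rng.standard_normal(n)

# Leave-one-out cross-validation errors under simple (known-mean) kriging
cv_errors = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    w = np.linalg.solve(C[np.ix_(keep, keep)], C[keep, i])
    cv_errors[i] = z[i] - w @ z[keep]

print(f"mean CV error: {cv_errors.mean():+.3f}")
```

Repeating this over many generated samples (100 sets in the study) yields the distribution of cross-validation errors that can then be tested against the zero-mean null hypothesis.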
Dead time effect on the Brewer measurements: correction and estimated uncertainties
Fountoulakis, Ilias; Redondas, Alberto; Bais, Alkiviadis F.; José Rodriguez-Franco, Juan; Fragkos, Konstantinos; Cede, Alexander
2016-04-01
Brewer spectrophotometers are widely used instruments which perform spectral measurements of the direct, the scattered and the global solar UV irradiance. By processing these measurements a variety of secondary products can be derived, such as the total columns of ozone (TOC), sulfur dioxide and nitrogen dioxide, and aerosol optical properties. Estimating and limiting the uncertainties of the final products is of critical importance. High-quality data have many applications and can provide accurate estimations of trends. The dead time is specific to each instrument, and improper correction of the raw data for its effect may lead to significant errors in the final products. The dead time value may change with time and, with the currently used methodology, it cannot always be determined accurately. For specific cases, such as for low ozone slant columns and high intensities of the direct solar irradiance, the error in the retrieved TOC due to a 10 ns change in the dead time from its value in use is found to be up to 5 %. The error in the calculation of UV irradiance can be as high as 12 % near the maximum operational limit of light intensities. While the existing documentation indicates that dead time effects are important when the error in the used value is greater than 2 ns, we found that for single-monochromator Brewers a 2 ns error in the dead time may lead to errors above the limit of 1 % in the calculation of TOC; thus the tolerance limit should be lowered. A new routine for the determination of the dead time from direct solar irradiance measurements has been created and tested, and a validation of the operational algorithm has been performed. Additionally, new methods for the estimation and the validation of the dead time have been developed and are analytically described. Therefore, the present study, in addition to highlighting the importance of the dead time for the processing of Brewer data sets, also provides useful information for their
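To make the dead-time sensitivity concrete, the sketch below applies one common photon-counting dead-time model (the paralyzable form, N_obs = N_true * exp(-N_true * tau), inverted by fixed-point iteration) and perturbs tau by 10 ns. The count rate and nominal dead time are illustrative values, not Brewer calibration constants, and the resulting change in counts is not the same quantity as the TOC errors quoted in the abstract:

```python
import numpy as np

def correct_dead_time(counts_per_s, tau, iterations=20):
    """Invert N_obs = N_true * exp(-N_true * tau) (paralyzable dead-time
    model) by fixed-point iteration: N <- N_obs * exp(N * tau)."""
    n_true = counts_per_s
    for _ in range(iterations):
        n_true = counts_per_s * np.exp(n_true * tau)
    return n_true

n_obs = 1.0e6   # observed count rate, s^-1 (illustrative)
tau = 30e-9     # nominal dead time, s (illustrative order of magnitude)

n_nominal = correct_dead_time(n_obs, tau)
n_shifted = correct_dead_time(n_obs, tau + 10e-9)   # 10 ns error in tau
print(f"relative change in corrected counts: "
      f"{abs(n_shifted - n_nominal) / n_nominal:.3%}")
```

Even a percent-level shift in corrected counts propagates nonlinearly into retrieved column amounts, which is why the abstract argues for tightening the 2 ns tolerance.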
Khider, D.; Ahn, S.; Lisiecki, L. E.; Lawrence, C. E.; Kienast, M.
2017-11-01
Understanding the mechanisms behind any changes in the climate system often requires establishing the timing of events imprinted on the geological record. However, these proxy records are prone to large uncertainties, which may preclude meaningful conclusions about the relative timing of events. In this study, we put forth a framework to estimate the uncertainty in phase relationships inferred from marine sedimentary records. The novelty of our method lies in the accounting of the various sources of uncertainty inherent to paleoclimate reconstruction and timing analysis. Specifically, we use a Monte Carlo process allowing sampling of possible realizations of the time series as functions of uncertainties in time, the climate proxy, and the identification of the termination timing. We then apply this technique to 15 published sea surface temperature records from the equatorial Pacific to evaluate whether we observed any significant changes in the termination timing between the east and the west. We find that the uncertainty on the relative timing estimates is on the order of several thousand years and mainly stems from age model uncertainty (90%). However, even small differences in mean termination timings can be detected with a sufficiently large number of samples. Improvements in the dating of sediment records provide an opportunity to reduce uncertainty in studies of this kind.
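The Monte Carlo framework described above can be sketched as follows. The termination ages, the split between age-model and proxy/picking uncertainty, and the east/west values are hypothetical placeholders, not the study's 15-record dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

def termination_samples(best_age_ka, age_sigma_ka, pick_sigma_ka, n_draws=10000):
    """Draw possible termination ages by combining age-model uncertainty
    (the dominant term in the study) with a smaller proxy/picking term.
    All numerical inputs here are illustrative."""
    age_part = rng.normal(0.0, age_sigma_ka, n_draws)
    pick_part = rng.normal(0.0, pick_sigma_ka, n_draws)
    return best_age_ka + age_part + pick_part

east = termination_samples(17.0, 1.5, 0.3)   # hypothetical east-Pacific core
west = termination_samples(17.5, 1.5, 0.3)   # hypothetical west-Pacific core
lag = west - east
print(f"median lag {np.median(lag):.2f} ka, 95% interval "
      f"[{np.percentile(lag, 2.5):.2f}, {np.percentile(lag, 97.5):.2f}] ka")
```

With individual age uncertainties of a few thousand years, the interval on a single pairwise lag spans several ka, illustrating why many records are needed before small mean timing differences become detectable.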
Evaluation of uncertainty of measurement for cellulosic fiber and ...
African Journals Online (AJOL)
DR OKE
correspondingly. The resulting expanded uncertainty exhibited levels in the margins of 20 %, which cautions for critical ... coefficients and determination of the expanded uncertainty. Consultation ... et al, 2010) who studied the effects of chemical modification on the mechanical properties of high-impact polystyrene.
Till, John E.; Beck, Harold L.; Aanenson, Jill W.; Grogan, Helen A.; Mohler, H. Justin; Mohler, S. Shawn; Voillequé, Paul G.
2014-01-01
Methods were developed to calculate individual estimates of exposure and dose with associated uncertainties for a sub-cohort (1,857) of 115,329 military veterans who participated in at least one of seven series of atmospheric nuclear weapons tests or the TRINITY shot carried out by the United States. The tests were conducted at the Pacific Proving Grounds and the Nevada Test Site. Dose estimates to specific organs will be used in an epidemiological study to investigate leukemia and male breast cancer. Previous doses had been estimated for the purpose of compensation and were generally high-sided to favor the veteran's claim for compensation in accordance with public law. Recent efforts by the U.S. Department of Defense (DOD) to digitize the historical records supporting the veterans’ compensation assessments make it possible to calculate doses and associated uncertainties. Our approach builds upon available film badge dosimetry and other measurement data recorded at the time of the tests and incorporates detailed scenarios of exposure for each veteran based on personal, unit, and other available historical records. Film badge results were available for approximately 25% of the individuals, and these results assisted greatly in reconstructing doses to unbadged persons and in developing distributions of dose among military units. This article presents the methodology developed to estimate doses for selected cancer cases and a 1% random sample of the total cohort of veterans under study. PMID:24758578
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.
2017-04-01
Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and
Cost benchmarking of railway projects in Europe – dealing with uncertainties in cost estimates
DEFF Research Database (Denmark)
Trabo, Inara
Past experiences in the construction of high-speed railway projects demonstrate either positive or negative financial outcomes of the actual project budget. Usually some uncertainty value is included in initial budget calculations. Uncertainty is related to the increase of material prices......, difficulties during construction, financial difficulties of the company or mistakes in initial project budget estimation, etc. Such factors may influence the actual budget values and cause budget overruns. According to the research conducted by Prof. B. Flyvbjerg, related to investigation of budgets in large......%, later on it was investigated that initial calculations and passenger forecasts were deliberately overestimated in order to get financial support from the government and perform this project. Apart from bad experiences there are also many projects with positive financial outcomes, e.g. French, Dutch
Cede, Alexander; Luccini, Eduardo; Nuñez, Liliana; Piacentini, Rubén D; Blumthaler, Mario
2002-10-20
The erythemal radiometers of the Ultraviolet Monitoring Network of the Argentine Servicio Meteorológico Nacional were calibrated in an extensive in situ campaign from October 1998 to April 1999 with Austrian reference instruments. Methods to correct the influence of the location's horizon and long-term detector changes are applied. The different terms that contribute to the measurement uncertainty are analyzed. The expanded uncertainty is estimated to be +/- 10% at 70 degrees solar zenith angle (SZA) and +/-6% for a SZA of <50 degrees. We observed significant changes for some detectors over hours and days, reaching a maximum diurnal drift of +/-5% at a SZA of 70 degrees and a maximum weekly variation of +/-4%.
Varouchakis, Emmanouil; Hristopulos, Dionissios
2015-04-01
Space-time geostatistical approaches can improve the reliability of dynamic groundwater level models in areas with limited spatial and temporal data. Space-time residual Kriging (STRK) is a reliable method for spatiotemporal interpolation that can incorporate auxiliary information. The method usually leads to an underestimation of the prediction uncertainty. The uncertainty of spatiotemporal models is usually estimated by determining the space-time Kriging variance or by means of cross validation analysis. For de-trended data the former is not usually applied when complex spatiotemporal trend functions are assigned. A Bayesian approach based on the bootstrap idea and sequential Gaussian simulation are employed to determine the uncertainty of the spatiotemporal model (trend and covariance) parameters. These stochastic modelling approaches produce multiple realizations, rank the prediction results on the basis of specified criteria and capture the range of the uncertainty. The correlation of the spatiotemporal residuals is modeled using a non-separable space-time variogram based on the Spartan covariance family (Hristopulos and Elogne 2007, Varouchakis and Hristopulos 2013). We apply these simulation methods to investigate the uncertainty of groundwater level variations. The available dataset consists of bi-annual (dry and wet hydrological period) groundwater level measurements in 15 monitoring locations for the time period 1981 to 2010. The space-time trend function is approximated using a physical law that governs the groundwater flow in the aquifer in the presence of pumping. The main objective of this research is to compare the performance of two simulation methods for prediction uncertainty estimation. In addition, we investigate the performance of the Spartan spatiotemporal covariance function for spatiotemporal geostatistical analysis. Hristopulos, D.T. and Elogne, S.N. 2007. Analytic properties and covariance functions for a new class of generalized Gibbs
Hauge, I H R; Olerud, H M
2013-06-01
The aim of this study was to reflect on the estimation of the mean glandular dose (MGD) for women in Norway aged 50-69 y. Estimation of the MGD has been conducted by applying the method of Dance et al. (1990, 2000, 2009). Uncertainties in the compressed breast thickness of approximately ±10 mm add uncertainties in the MGD of approximately ±10 %, and uncertainty in the glandularity of ±0 % will lead to an uncertainty in the MGD of ±4 %. However, the inherent uncertainty in the air kerma, given by the European protocol on dosimetry, will add an uncertainty of 12 %. The total uncertainty in the MGD is estimated to be ∼20 %, taking into consideration uncertainties in compressed breast thickness (±10 %), the air kerma (12 %), a change in HVL of -0.05 mm (-9.0 %), uncertainty in the s-factor of ±2.1 % and changing the glandularity to an age-dependent glandularity distribution (+8.4 %).
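The ~20 % total quoted above is consistent with combining the listed components in quadrature. A minimal check (the independence assumption behind the quadrature sum is ours; the abstract does not state explicitly how the components were combined):

```python
import math

# Component relative uncertainties (%), as listed in the abstract
components = {
    "compressed breast thickness": 10.0,
    "air kerma": 12.0,
    "HVL change of -0.05 mm": 9.0,
    "s-factor": 2.1,
    "glandularity distribution": 8.4,
}

# Root-sum-of-squares, assuming the components are independent
total = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"combined relative uncertainty ~ {total:.1f} %")  # ~ 20.0 %
```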
NIS method for uncertainty estimation of airborne sound insulation measurement in field
Directory of Open Access Journals (Sweden)
El-Basheer Tarek M.
2017-01-01
Full Text Available In buildings, airborne sound insulation is used to characterise the acoustic performance of barriers between rooms. However, assessment of the sound insulation index is sometimes difficult or even questionable, in both field and laboratory measurements, despite the unified measurement procedures specified in the ISO 140 series of standards. There are issues with the reproducibility and repeatability of the measurement results. Some difficulties may be caused by non-diffuse acoustic fields, non-uniform reverberation time, or errors in the reverberation time measurements. Minor problems are additionally posed by flanking transmission. This paper investigates the uncertainties of the above-mentioned measurement components and their impact on the combined uncertainty in 1/3-octave frequency bands. The total measurement uncertainty model comprises several partial uncertainties, which are evaluated by the Type A or Type B method. The determination of the sound reduction index according to ISO 140-4 has also been performed.
CITRICULTURE ECONOMIC AND FINANCIAL EVALUATION UNDER CONDITIONS OF UNCERTAINTY
Directory of Open Access Journals (Sweden)
DANILO SIMÕES
2015-12-01
Full Text Available ABSTRACT Citriculture involves several environmental risks, such as weather changes and pests, as well as considerable financial risk, mainly due to the period of return on the initial investment. This study was motivated by the need to assess the risks of a business activity such as citriculture. Our objective was to build a stochastic simulation model for the economic and financial analysis of an orange producer in the Midwest region of the state of São Paulo, under conditions of uncertainty. The parameters used were the Net Present Value (NPV), the Modified Internal Rate of Return (MIRR), and the Discounted Payback. To evaluate the risk conditions we built a probabilistic model with pseudorandom numbers generated by the Monte Carlo method. The results showed that the activity analyzed carries a 42.8% risk of a negative NPV; however, the yield assessed by the MIRR was 7.7%, higher than the yield from the reapplication of the positive cash flows. The financial investment pays for itself after the fourteenth year of activity.
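The Monte Carlo NPV-risk mechanics can be sketched as below. The initial outlay, revenue distribution, horizon, and discount rate are invented placeholders, so the simulated probability of a negative NPV will not match the 42.8% reported for the actual orange-producer data:

```python
import numpy as np

rng = np.random.default_rng(42)

def npv(rate, cashflows):
    """Net present value of a cash-flow series (year 0 first)."""
    years = np.arange(len(cashflows))
    return np.sum(cashflows / (1.0 + rate) ** years)

n_sims, horizon, discount = 20000, 20, 0.10
initial = -50000.0                       # hypothetical establishment cost
results = np.empty(n_sims)
for i in range(n_sims):
    # Yearly net revenue with weather/pest variability (illustrative)
    revenue = rng.normal(7000.0, 4000.0, horizon)
    results[i] = npv(discount, np.concatenate(([initial], revenue)))

risk = np.mean(results < 0.0)
print(f"simulated P(NPV < 0) ~ {risk:.1%}")
```

The fraction of simulated NPVs below zero is the risk figure the study reports; swapping in the real cost and revenue distributions would reproduce their analysis.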
Ellipsoidal estimates of reachable sets of impulsive control problems under uncertainty
Matviychuk, O. G.
2017-10-01
For impulsive control systems with uncertainty in initial states and in the system parameters the problem of estimating reachable sets is studied. The initial states are taken to be unknown but bounded with given bounds. Also the matrix included in the differential equations of the system dynamics is uncertain and only bounds on admissible values of this matrix coefficients are known. The problem is considered under additional constraint on the system states. It is assumed that the system states should belong to the given ellipsoid in the state space. We present here the state estimation algorithms that use the special structure of the bilinear impulsive control system and take into account additional restrictions on states and controls. The algorithms are based on ellipsoidal techniques for estimating the trajectory tubes of uncertain dynamical systems.
Estimating uncertainty of the WMO mole fraction scale for carbon dioxide in air
Zhao, Cong Long; Tans, Pieter P.
2006-04-01
The current WMO CO2 Mole Fraction Scale consists of a set of 15 CO2-in-air primary standard calibration gases ranging in CO2 mole fraction from 250 to 520 μmol mol-1. Since the WMO CO2 Expert Group transferred responsibility for maintaining the WMO Scale from the Scripps Institution of Oceanography (SIO) to the Climate Monitoring and Diagnostics Laboratory (CMDL) in 1995, the 15 WMO primary standards have been calibrated, first at SIO and then at regular intervals of between 1 and 2 years, by the CMDL manometric system. The uncertainty of the 15 primary standards was estimated to be 0.069 μmol mol-1 (one-sigma) in the absolute sense. Manometric calibration results indicate that there is no evidence of overall drift of the Primaries from 1996 to 2004. In order to lengthen the useful life of the Primary standards, CMDL has always transferred the scale via NDIR analyzers to the secondary standards. The uncertainties arising from the analyzer random error and the propagation error due to the uncertainty of the reference gas mole fraction are discussed. Precision of NDIR transfer calibrations was about 0.014 μmol mol-1 from 1979 to present. Propagation of the uncertainty was calculated theoretically. In the case of interpolation the propagation error was estimated to be between 0.06 and 0.07 μmol mol-1 when the Primaries were used as the reference gases via NDIR transfer calibrations. The CMDL secondary standard calibrations are transferred via NDIR analyzers to the working standards, which are used routinely for measuring atmospheric CO2 mole fraction in the WMO Global Atmosphere Watch monitoring program. The uncertainty of the working standards was estimated to be 0.071 μmol mol-1 on the one-sigma absolute scale. Consistency among the working standards is determined by the random errors of downward transfer calibrations at each level and is about 0.02 μmol mol-1. For comparison with an independent absolute scale, the five gravimetric standards from the National
Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja
2017-03-01
Quantitative photoacoustic tomography seeks to estimate the optical parameters of a target given photoacoustic measurements as a data. Conventionally the problem is split into two steps: 1) the acoustical inverse problem of estimating the acoustic initial pressure distribution from the acoustical time series data; 2) the optical inverse problem of estimating the optical absorption and scattering from the initial pressure distributions. In this work, an approach for estimating the optical absorption and scattering directly from the acoustical time series is investigated with simulations. The work combines a homogeneous acoustical forward model, based on the Green's function solution of the wave equation, and a finite element method based diffusion approximation model of light propagation into a single forward model. This model maps the optical parameters of interest into a time domain signal. The model is used with a Bayesian approach to ill-posed inverse problems to form estimates of the posterior distributions for the parameters of interest. In addition to being able to provide point estimates of the parameters of interest, i.e. reconstruct the absorption and scattering distributions, the approach can be used to derive information on the uncertainty associated with the estimates.
Theodorou, Dimitrios; Meligotsidou, Loukia; Karavoltsos, Sotirios; Burnetas, Apostolos; Dassenakis, Manos; Scoullos, Michael
2011-02-15
The propagation stage of uncertainty evaluation, known as the propagation of distributions, is in most cases approached by the GUM (Guide to the Expression of Uncertainty in Measurement) uncertainty framework which is based on the law of propagation of uncertainty assigned to various input quantities and the characterization of the measurand (output quantity) by a Gaussian or a t-distribution. Recently, a Supplement to the ISO-GUM was prepared by the JCGM (Joint Committee for Guides in Metrology). This Guide gives guidance on propagating probability distributions assigned to various input quantities through a numerical simulation (Monte Carlo Method) and determining a probability distribution for the measurand. In the present work the two approaches were used to estimate the uncertainty of the direct determination of cadmium in water by graphite furnace atomic absorption spectrometry (GFAAS). The expanded uncertainty results (at 95% confidence levels) obtained with the GUM Uncertainty Framework and the Monte Carlo Method at the concentration level of 3.01 μg/L were ±0.20 μg/L and ±0.18 μg/L, respectively. Thus, the GUM Uncertainty Framework slightly overestimates the overall uncertainty by 10%. Even after taking into account additional sources of uncertainty that the GUM Uncertainty Framework considers as negligible, the Monte Carlo gives again the same uncertainty result (±0.18 μg/L). The main source of this difference is the approximation used by the GUM Uncertainty Framework in estimating the standard uncertainty of the calibration curve produced by least squares regression. Although the GUM Uncertainty Framework proves to be adequate in this particular case, generally the Monte Carlo Method has features that avoid the assumptions and the limitations of the GUM Uncertainty Framework. Copyright © 2010 Elsevier B.V. All rights reserved.
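The GUM-versus-Monte-Carlo comparison described above can be illustrated on a toy measurand. The calibration-line conversion, the values, and the uncertainties below are invented for illustration, not the paper's GFAAS cadmium data; the point is that first-order propagation and direct Monte Carlo sampling of the same input distributions should agree closely when the model is nearly linear:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy measurand: c = (A - b0) / b1, an instrument reading converted through
# a calibration line. All values and uncertainties are illustrative.
A, u_A = 0.250, 0.004      # measured signal
b0, u_b0 = 0.010, 0.003    # calibration intercept
b1, u_b1 = 0.080, 0.002    # calibration slope

# GUM framework: first-order (linearised) propagation of uncertainty
c = (A - b0) / b1
u_c_gum = np.sqrt((u_A / b1) ** 2 + (u_b0 / b1) ** 2
                  + ((A - b0) * u_b1 / b1 ** 2) ** 2)

# GUM Supplement 1: Monte Carlo propagation of the same distributions
n = 200000
c_mc = (rng.normal(A, u_A, n) - rng.normal(b0, u_b0, n)) / rng.normal(b1, u_b1, n)

print(f"GUM: {c:.3f} +/- {u_c_gum:.3f}")
print(f"MCM: {c_mc.mean():.3f} +/- {c_mc.std():.3f}")
```

Discrepancies between the two columns grow with the nonlinearity of the measurement model, which is exactly the regime where the paper found the GUM framework slightly overestimating the calibration-curve contribution.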
Uncertainties in neural network model based on carbon dioxide concentration for occupancy estimation
Energy Technology Data Exchange (ETDEWEB)
Alam, Azimil Gani; Rahman, Haolia; Kim, Jung-Kyung; Han, Hwataik [Kookmin University, Seoul (Korea, Republic of)
2017-05-15
Demand control ventilation is employed to save energy by adjusting the airflow rate according to the ventilation load of a building. This paper investigates a method for occupancy estimation using a dynamic neural network model based on the carbon dioxide concentration in an occupied zone. The method can be applied to most commercial and residential buildings where human effluents are to be ventilated. An indoor simulation program, CONTAMW, is used to generate indoor CO2 data corresponding to various occupancy schedules and airflow patterns to train neural network models. Coefficients of variation are obtained depending on the complexities of the physical parameters as well as the system parameters of the neural networks, such as the numbers of hidden neurons and tapped delay lines. We intend to identify the uncertainties caused by the model parameters themselves, by excluding uncertainties in input data inherent in measurement. Our results show that estimation accuracy is highly influenced by the frequency of occupancy variation but not significantly influenced by fluctuation in the airflow rate. Furthermore, we discuss the applicability and validity of the present method, based on passive environmental conditions, for estimating occupancy in a room from the viewpoint of demand control ventilation applications.
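The physical relationship that such a neural network learns is a single-zone CO2 mass balance. A minimal forward simulation of that balance, of the kind used to generate synthetic training data, is sketched below; the zone volume, airflow, and per-person generation rate are generic textbook-order values, not CONTAMW settings from the paper:

```python
import numpy as np

def co2_series(occupancy, q=0.05, v=150.0, c_out=400.0, g=5e-6, dt=60.0):
    """Single-zone CO2 mass balance, Euler-integrated:
        dC/dt = 1e6 * g * n / v - (q / v) * (C - c_out)
    with C in ppm, n the occupant count, g the per-person CO2 generation
    (m^3/s), q the airflow (m^3/s), v the zone volume (m^3).
    All parameter values are illustrative."""
    c = np.empty(len(occupancy))
    c_now = c_out
    for i, n in enumerate(occupancy):
        c_now += dt * (1e6 * g * n / v - (q / v) * (c_now - c_out))
        c[i] = c_now
    return c

# 1-min steps: empty hour, then 4 occupants for 5 h, then empty again
sched = np.concatenate([np.zeros(60), np.full(300, 4), np.zeros(120)])
ppm = co2_series(sched)
print(f"peak concentration: {ppm.max():.0f} ppm")
```

Inverting this mapping, from a measured CO2 trace back to the occupancy schedule, is the estimation task the dynamic neural network performs.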
Valle, Denis
2011-06-01
Biomass is a fundamental measure in the natural sciences, and numerous models have been developed to forecast timber and fishery yields, forest carbon content, and other environmental services that depend on biomass estimates. We derive general results that reveal how dynamic models that simulate growth as an increase in a linear measure of size (e.g., diameter, length, height) result in biased estimates of future mean biomass when uncertainty in growth is misrepresented. Our case study shows how models of tree growth that predict the same mean diameter increment, but with alternative representations of growth uncertainty, result in almost a threefold difference in the projections of future mean tree biomass after a 20-yr simulation. These results have important implications concerning our ability to accurately predict future biomass and all the related environmental services (e.g., forest carbon content, timber and fishery yields). If the objective is to predict future biomass, we strongly recommend that: (1) ecological modelers should choose a growth model based on a variable more linearly related to biomass (e.g., tree basal area instead of tree diameter for forest models); (2) if field measurements preclude the use of variables other than the linear measure of size, both the mean and other statistical moments (e.g., covariances) should be carefully modeled; (3) careful assessment be done on models that aggregate similar individuals (i.e., cohort models) to see if neglecting autocorrelated growth from individuals leads to biased estimates of future mean biomass.
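The bias mechanism described above is Jensen's inequality: biomass is a convex function of diameter, so the mean biomass of a population depends on the spread of diameters, not only on the mean. The sketch below contrasts two growth models with the same mean diameter increment but different uncertainty representations; the allometric coefficients and increment distribution are invented for illustration, so the bias here is much smaller than the threefold difference in the paper's 20-yr case study, but it has the same sign and origin:

```python
import numpy as np

rng = np.random.default_rng(3)

def biomass(d_cm, k=0.2, b=2.5):
    """Illustrative allometric biomass model, B = k * d^b (convex in d)."""
    return k * d_cm ** b

n_trees, years = 50000, 20
d = np.full(n_trees, 20.0)     # initial diameters, cm

# Two growth models with the SAME mean annual increment (0.5 cm/yr):
d_det = d + 0.5 * years                       # uncertainty ignored
d_sto = d.copy()
for _ in range(years):                        # stochastic increments
    d_sto += rng.gamma(shape=0.5, scale=1.0, size=n_trees)  # mean 0.5 cm

print(f"mean biomass, deterministic growth: {biomass(d_det).mean():.1f}")
print(f"mean biomass, stochastic growth:    {biomass(d_sto).mean():.1f}")
```

Both populations end with the same mean diameter, yet the stochastic model yields higher mean biomass, which is why the paper recommends modelling growth in a variable more linearly related to biomass (e.g., basal area) or carefully tracking higher statistical moments.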
DEFF Research Database (Denmark)
Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan
Computer Aided Molecular Design (CAMD) is an important tool to generate, test and evaluate promising chemical products. CAMD can be used in thermodynamic cycles for the design of pure-component or mixture working fluids in order to improve the heat transfer capacity of the system. The safety...... assessment of novel working fluids relies on accurate property data. Flammability data like the lower and upper flammability limits (LFL and UFL) play an important role in quantifying the risk of fire and explosion. For novel working fluid candidates experimental values are not available for the safety...... analysis. In this case property prediction models like group contribution (GC) models can estimate flammability data. The estimation needs to be accurate, reliable and as time-efficient as possible [1]. However, GC property prediction methods frequently lack rigorous uncertainty analysis. Hence...
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
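The discrete-sampling experiment can be mimicked on a synthetic rain series: take snapshots at regular intervals, scale up to an accumulation estimate, and compare against the full-resolution total. The intermittency and intensity parameters below are invented, and a single synthetic series stands in for the multi-year radar data set:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "true" rain-rate series at 1 h resolution over 30 days:
# ~10% wet hours with exponentially distributed intensities (illustrative)
hours = 30 * 24
rain = np.where(rng.random(hours) < 0.1, rng.exponential(2.0, hours), 0.0)
truth = rain.sum()

def sampled_total(series, every_h):
    """Estimate the accumulation from snapshots every `every_h` hours,
    as a satellite overpass would: scale the sampled mean to the period."""
    return series[::every_h].mean() * len(series)

for gap in (1, 3, 6, 12):
    est = sampled_total(rain, gap)
    print(f"sampling every {gap:>2} h -> "
          f"{abs(est - truth) / truth * 100:5.1f} % accumulation error")
```

Repeating this over many series and domain sizes is what allows the sampling error to be summarized as a scaling law in sampling interval, domain, and rainfall statistics.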
Data and model uncertainties in flood-frequency estimation for an urban Swedish catchment
Westerberg, Ida; Persson, Tony
2013-04-01
Floods are extreme events that occur seldom, which means that there are relatively few data on weather and flow conditions during flooding episodes for characterisation of flood frequency. In addition, there are often practical difficulties associated with the measurement of discharge during floods. In this study we used a combination of monitoring and modelling to overcome the lack of reliable discharge data and be able to characterise the flooding problems in the highly urbanised Riseberga Creek catchment in eastern Malmö, Sweden. The study is part of a project, GreenClimeAdapt, in which local stakeholders and researchers work on finding and demonstrating solutions to the flooding problems in the catchment. A high-resolution acoustic Doppler discharge gauge was installed in the creek and a hydrologic model was set up to extend this short record for estimation of flood frequency. Discharge uncertainty was estimated based on a stage-discharge analysis and accounted for in model calibration together with uncertainties in the model parameterisation. The model was first used to study the flow variability during the 16 years with available climate input data. Then it was driven with long-term climate realisations from a statistical weather generator to estimate flood frequency for the present climate and for future climate changes through continuous simulation. The uncertainty in the modelled flood frequency for the present climate was found to be important, and could partly be reduced in the future using longer monitoring records containing more and higher flood episodes. The climate change scenarios are mainly useful for sensitivity analysis of different adaptation measures that can be taken to reduce the flooding problems.
Directory of Open Access Journals (Sweden)
Lash Timothy L
2007-11-01
Full Text Available Abstract Background The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. Methods For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. Results The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with a 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Conclusion Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a
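The draw-and-adjust loop described in the Methods can be sketched as follows; the bias-parameter distribution (a lognormal confounding factor) and its values are hypothetical, chosen only to illustrate the mechanics, not the distributions assigned in the study:

```python
import math
import random
import statistics

random.seed(42)

# Conventional result (hazard ratio and 95% CI from the glyphosate example)
hr_conv = 2.6
ci_low, ci_high = 0.7, 9.4
# Standard error of ln(HR) recovered from the conventional interval
se_ln_hr = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# Hypothetical bias parameter: an unmeasured confounder assumed to inflate
# the HR by a lognormally distributed factor (illustrative values only).
bias_median = 1.6
bias_sigma = 0.4

adjusted = []
for _ in range(50_000):
    bias = math.exp(math.log(bias_median) + random.gauss(0.0, bias_sigma))
    # Remove the simulated bias, then add back conventional random error
    ln_adj = math.log(hr_conv) - math.log(bias) + random.gauss(0.0, se_ln_hr)
    adjusted.append(math.exp(ln_adj))

adjusted.sort()
median_hr = statistics.median(adjusted)
sim_low = adjusted[int(0.025 * len(adjusted))]
sim_high = adjusted[int(0.975 * len(adjusted))]
print(f"median HR {median_hr:.2f}, "
      f"95% simulation interval {sim_low:.2f}-{sim_high:.2f}")
```

With these made-up inputs the simulation interval comes out wider than the conventional confidence interval, mirroring the qualitative finding reported above.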
Validation and Uncertainty Estimates for MODIS Collection 6 "Deep Blue" Aerosol Data
Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Jeong, M.-J.
2013-01-01
The "Deep Blue" aerosol optical depth (AOD) retrieval algorithm was introduced in Collection 5 of the Moderate Resolution Imaging Spectroradiometer (MODIS) product suite, and complemented the existing "Dark Target" land and ocean algorithms by retrieving AOD over bright arid land surfaces, such as deserts. The forthcoming Collection 6 of MODIS products will include a "second generation" Deep Blue algorithm, expanding coverage to all cloud-free and snow-free land surfaces. The Deep Blue dataset will also provide an estimate of the absolute uncertainty on AOD at 550 nm for each retrieval. This study describes the validation of Deep Blue Collection 6 AOD at 550 nm (τ_M) from MODIS Aqua against Aerosol Robotic Network (AERONET) data from 60 sites to quantify these uncertainties. The highest quality (denoted quality assurance flag value 3) data are shown to have an absolute uncertainty of approximately (0.086 + 0.56τ_M)/AMF, where AMF is the geometric air mass factor. For a typical AMF of 2.8, this is approximately 0.03 + 0.20τ_M, comparable in quality to other satellite AOD datasets. Regional variability of retrieval performance and comparisons against Collection 5 results are also discussed.
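The reported uncertainty expression is simple enough to capture in a helper function; this sketch just encodes the formula quoted above (the function name and structure are ours, not part of the MODIS product):

```python
def deep_blue_aod_uncertainty(tau_m, amf):
    """One-sigma absolute AOD uncertainty at 550 nm for QA=3 Deep Blue
    retrievals, per the expression reported in the study."""
    if amf <= 0:
        raise ValueError("air mass factor must be positive")
    return (0.086 + 0.56 * tau_m) / amf

# For a typical geometric air mass factor of 2.8 the expression reduces
# to roughly 0.03 + 0.20 * tau_m:
for tau in (0.1, 0.5, 1.0):
    print(f"tau={tau:.1f}  sigma={deep_blue_aod_uncertainty(tau, 2.8):.3f}")
```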
Estimation of the Fuel Depletion Code Bias and Uncertainty in Burnup-Credit Criticality Analysis
Energy Technology Data Exchange (ETDEWEB)
Kim, Jong Woon; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Lee, Sang Jin; Bae, Chang Yeal [Nuclear Environment Technology Institute, Taejon (Korea, Republic of)
2006-07-01
In the past, criticality safety analyses for commercial light-water-reactor (LWR) spent nuclear fuel (SNF) storage and transportation canisters assumed the spent fuel to be fresh (unirradiated) fuel with uniform isotopic compositions. This fresh-fuel assumption provides a well-defined, bounding approach to the criticality safety analysis that eliminates concerns related to the fuel operating history, and thus considerably simplifies the safety analysis. However, because this assumption ignores the inherent decrease in reactivity as a result of irradiation, it is very conservative. The concept of taking credit for the reduction in reactivity due to fuel burnup is commonly referred to as burnup credit. Implementation of burnup credit requires the computational prediction of the nuclide inventories (compositions) for the dominant fissile and absorbing nuclide species in spent fuel. In addition, the bias and uncertainty in the predicted concentrations of all nuclides used in the analysis must be established by comparisons of calculated and measured radiochemical assay data. In this paper, three methods for considering the bias and uncertainty are reviewed, and the bias and uncertainty estimated with the third method are presented.
Uncertainty estimation with bias-correction for flow series based on rating curve
Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta
2014-03-01
Streamflow discharge constitutes one of the fundamental data required to perform water balance studies and develop hydrological models. A rating curve, designed based on a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series of reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available even though it has a significant impact on hydrological modelling. In this paper, we identify the discrepancy of the hydrographers' rating curves used to derive the historical discharge data series and propose a modification by bias correction which, like the traditional rating curve, takes the form of a power function. In order to obtain the uncertainty estimation, we propose a further two-sided Box-Cox transformation to stabilize the regression residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to gauging stations in the Flinders and Gilbert rivers in north-west Queensland, Australia.
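A minimal sketch of the idea, assuming made-up gaugings, a log-log least-squares fit for the power-law rating curve, and an arbitrarily fixed Box-Cox λ (the study estimates its transformation rather than fixing it):

```python
import math
import random

random.seed(1)

def boxcox(y, lam):
    # Two-sided Box-Cox transform used to stabilise rating-curve residuals
    return (y ** lam - 1.0) / lam if lam != 0 else math.log(y)

def inv_boxcox(z, lam):
    return (lam * z + 1.0) ** (1.0 / lam) if lam != 0 else math.exp(z)

# Hypothetical stage (m) / discharge (m3/s) gaugings
stage = [0.4, 0.6, 0.9, 1.3, 1.8, 2.5, 3.1]
true_a, true_b = 12.0, 1.8
discharge = [true_a * h ** true_b * math.exp(random.gauss(0, 0.08)) for h in stage]

# Fit the power-law rating curve q = a * h**b by log-log least squares
n = len(stage)
x = [math.log(h) for h in stage]
y = [math.log(q) for q in discharge]
xbar, ybar = sum(x) / n, sum(y) / n
b_hat = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
a_hat = math.exp(ybar - b_hat * xbar)

# Residuals in Box-Cox space (lambda fixed here for illustration only)
lam = 0.2
resid = [boxcox(q, lam) - boxcox(a_hat * h ** b_hat, lam)
         for h, q in zip(stage, discharge)]
sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5

# Ensemble of discharge estimates at an unmeasured stage, with uncertainty
h_new = 2.0
ensemble = [inv_boxcox(boxcox(a_hat * h_new ** b_hat, lam) + random.gauss(0, sigma), lam)
            for _ in range(2000)]
print(f"a={a_hat:.2f}, b={b_hat:.2f}, "
      f"median q({h_new} m)={sorted(ensemble)[1000]:.1f} m3/s")
```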
Directory of Open Access Journals (Sweden)
Tobias Pardowitz
2016-10-01
Full Text Available A simple method is presented, designed to assess uncertainties from dynamical downscaling of regional high-impact weather. The approach makes use of the fact that the choice of the simulation domain for the regional model is to a certain degree arbitrary; thus, a small ensemble of equally valid simulations can be produced from the same driving model output by shifting the domain by a few grid cells. When the approach is applied to extra-tropical storm systems, the regional simulations differ with respect to the exact location and severity of extreme wind speeds. Based on an integrated storm severity measure, the individual ensemble members are found to vary by more than 25 % from the ensemble mean in the majority of episodes considered. Estimates of insured losses based on individual regional simulations and integrated over Germany even differ by more than 50 % from the ensemble mean in most cases. Based on a set of intense storm episodes, a quantification of winter storm losses under recent and future climate is made. Using this domain-shift ensemble approach, uncertainty ranges are derived representing the uncertainty inherent to the downscaling method used.
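The headline statistic, member deviation from the domain-shift ensemble mean, reduces to a few lines; the severity values below are invented to illustrate the >25 % variation reported above:

```python
# Hypothetical integrated storm severity for five domain-shifted members
severity = [102.0, 85.0, 131.0, 78.0, 114.0]

ens_mean = sum(severity) / len(severity)
# Relative deviation of each member from the ensemble mean
rel_dev = [abs(s - ens_mean) / ens_mean for s in severity]

print(f"ensemble mean {ens_mean:.0f}, max deviation {max(rel_dev):.0%}")
```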
Estimating Supplies Program: Evaluation Report
2002-12-24
Keep in mind that the number and variety of...will not be indicative of the supplies needed to actually treat those patients. (U) It is also important to remember that ESP is estimation software.
Plant application uncertainty evaluation of LBLOCA analysis using RELAP5/MOD3/KAERI
Energy Technology Data Exchange (ETDEWEB)
Lee, Sang Yong; Chung, Bub Dong; Hwang, Tae Suk; Lee, Guy Hyung; Chang, Byung Hoon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1994-06-01
A practical realistic evaluation methodology (REM) to evaluate ECCS performance that satisfies the requirements of the revised ECCS rule has been developed, and this report describes the application of the new REM to a large-break LOCA. The computer code RELAP5/MOD3/KAERI, improved from RELAP5/MOD3.1, was used as the best-estimate code for the analysis, and Kori units 3 and 4 were selected as the reference plant. Response surfaces for blowdown and reflood PCTs were generated from the results of the sensitivity analyses, and probability distribution functions were established by using a Monte Carlo sampler for each response surface. This study shows that plant application uncertainty can be quantified and demonstrates the applicability of the new realistic evaluation methodology. (Author) 29 refs., 40 figs., 8 tabs.
Cost implications of uncertainty in CO2 storage resource estimates: A review
Anderson, Steven T.
2017-01-01
Carbon capture from stationary sources and geologic storage of carbon dioxide (CO2) is an important option to include in strategies to mitigate greenhouse gas emissions. However, the potential costs of commercial-scale CO2 storage are not well constrained, stemming from the inherent uncertainty in storage resource estimates coupled with a lack of detailed estimates of the infrastructure needed to access those resources. Storage resource estimates are highly dependent on storage efficiency values or storage coefficients, which are calculated based on ranges of uncertain geological and physical reservoir parameters. If dynamic factors (such as variability in storage efficiencies, pressure interference, and acceptable injection rates over time), reservoir pressure limitations, boundaries on migration of CO2, consideration of closed or semi-closed saline reservoir systems, and other possible constraints on the technically accessible CO2 storage resource (TASR) are accounted for, it is likely that only a fraction of the TASR could be available without incurring significant additional costs. Although storage resource estimates typically assume that any issues with pressure buildup due to CO2 injection will be mitigated by reservoir pressure management, estimates of the costs of CO2 storage generally do not include the costs of active pressure management. Production of saline waters (brines) could be essential to increasing the dynamic storage capacity of most reservoirs, but including the costs of this critical method of reservoir pressure management could increase current estimates of the costs of CO2 storage by a factor of two or more. Even without considering the implications for reservoir pressure management, geologic uncertainty can significantly impact CO2 storage capacities and costs, and contribute to uncertainty in carbon capture and storage (CCS) systems. Given the current state of available information and the scarcity of (data from) long-term commercial-scale CO2
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
Golsteijn, Laura; van Zelm, Rosalie; Hendriks, A Jan; Huijbregts, Mark A J
2013-09-01
Since the ecotoxic effects of chemicals on most soil species depend on the dissolved concentration in pore water, the equilibrium partitioning (EP) method is generally used to estimate hazardous concentrations (HC50) in the soil from aquatic toxicity tests. The present study analyzes the statistical uncertainty in terrestrial HC50s derived by the EP method. For 47 organic chemicals, we compared freshwater HC50s derived from standard aquatic ecotoxicity tests with porewater HC50s derived from terrestrial ecotoxicity tests. Statistical uncertainty in the HC50s due to limited species sample size, and in organic carbon-water partitioning coefficients due to predictive error, was treated with probability distributions propagated by Monte Carlo simulations. Particularly for specifically acting chemicals, it is very important to base the HC50 on a representative sample of species, composed of both target and non-target species. For most chemical groups, porewater HC50 values were approximately a factor of 3 higher than freshwater HC50 values. The ratio of the porewater HC50 to the freshwater HC50 was typically 3.0 for narcotic chemicals (2.8 for nonpolar and 3.4 for polar narcotics), 0.8 for reactive chemicals, 2.9 for neurotoxic chemicals (4.3 for AChE agents and 0.1 for the cyclodiene type), and 2.5 for herbicides-fungicides. However, the statistical uncertainty associated with this ratio was large (typically 2.3 orders of magnitude). For 81% of the organic chemicals studied, there was no statistical difference between the hazardous concentration of aquatic and terrestrial species. We conclude that possible systematic deviations between the HC50s of aquatic and terrestrial species appear to be less prominent than the overall statistical uncertainty. Copyright © 2013 Elsevier Ltd. All rights reserved.
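The Monte Carlo treatment of the porewater/freshwater HC50 ratio can be sketched in log10 space; the means and standard errors below are hypothetical, not the study's chemical-specific values:

```python
import math
import random
import statistics

random.seed(11)

# Hypothetical log10 HC50s (mg/L) with standard errors reflecting the
# limited species sample sizes; values are illustrative only.
log_hc50_fresh, se_fresh = 0.5, 0.6
log_hc50_pore, se_pore = 1.0, 0.7

log_ratios = []
for _ in range(20_000):
    fresh = random.gauss(log_hc50_fresh, se_fresh)
    pore = random.gauss(log_hc50_pore, se_pore)
    log_ratios.append(pore - fresh)      # log10(porewater / freshwater)

log_ratios.sort()
median_ratio = 10 ** statistics.median(log_ratios)
lo = 10 ** log_ratios[int(0.025 * len(log_ratios))]
hi = 10 ** log_ratios[int(0.975 * len(log_ratios))]
span_orders = math.log10(hi / lo)
print(f"median ratio {median_ratio:.1f}, "
      f"95% interval spans {span_orders:.1f} orders of magnitude")
```

Even with a median ratio of about 3, the propagated interval spans several orders of magnitude, which is the pattern the abstract reports.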
Inferring the uncertainty of satellite precipitation estimates in data-sparse regions over land
Bytheway, Janice L.; Kummerow, Christian D.
2013-09-01
Knowledge of the global distribution of precipitation is essential to understanding Earth's water and energy budgets. While developed countries often have reliable precipitation observation networks, our understanding of the distribution of precipitation in data-sparse regions relies on sporadic rain gauges and information gathered by spaceborne sensors. Several multisensor data sets attempt to represent the global distribution of precipitation on subdaily time scales by combining multiple satellite and ground-based observations. Due to limited validation sources and the highly variable nature of precipitation, it is difficult to assess the performance of multisensor precipitation products globally. Here, we introduce a methodology to infer the uncertainty of satellite precipitation measurements globally based on similarities between precipitation characteristics in data-sparse and data-rich regions. Five generalized global rainfall regimes are determined based on the probability distribution of 3-hourly accumulated rainfall in 0.25° grid boxes using the Tropical Rainfall Measuring Mission 3B42 product. Uncertainty characteristics for each regime are determined over the United States using the high-quality National Centers for Environmental Prediction Stage IV radar product. The results indicate that the frequency of occurrence of zero and low accumulated rainfall is the key difference between the regimes and that differences in error characteristics are most prevalent at accumulations below ~4 mm/h. At higher accumulations, the uncertainty in 3-hourly accumulation converges to ~80%. Using the self-similarity of the five rainfall regimes along with the error characteristics observed for each regime, the uncertainty in 3-hourly precipitation estimates can be inferred in regions that lack quality ground validation sources.
Pereira, Paulo; Westgard, James O; Encarnação, Pedro; Seghatchian, Jerard
2015-02-01
The European Union regulation for blood establishments does not require the evaluation of measurement uncertainty in virology screening tests, which is required by the ISO 15189 guideline following GUM principles. GUM modular approaches have been discussed by medical laboratory researchers, but no consensus has been achieved regarding practical application. Meanwhile, the application of empirical approaches fulfilling GUM principles has gained support. Blood establishments whose screening tests are accredited under ISO 15189 need to select an appropriate model, even though GUM models are intended solely for quantitative examination procedures. Alternative (to GUM) models focused on probability have been proposed for medical laboratories' diagnostic tests. This article reviews, discusses and proposes models for diagnostic accuracy in blood establishments' screening tests. The output of these models is an alternative to the VIM's measurement uncertainty concept. Example applications are provided for an anti-HCV test, where calculations were performed using a commercial spreadsheet. The results show that these models satisfy ISO 15189 principles and that the estimation of clinical sensitivity, clinical specificity, binary results agreement and area under the ROC curve are alternatives to the measurement uncertainty concept. Copyright © 2014. Published by Elsevier Ltd.
Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.
2017-12-01
Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability
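The two dressing strategies can be contrasted with a toy example; all flows and error standard deviations below are invented, and simple Gaussian errors stand in for the study's statistical post-processing techniques:

```python
import random
import statistics

random.seed(5)

# (i) "lumped": dress a deterministic forecast with the historical
# distribution of total (meteorological + hydrological) errors
det_forecast = 90.0       # m3/s, hypothetical deterministic forecast
total_error_sd = 20.0     # m3/s, hypothetical total-error spread
lumped = [det_forecast + random.gauss(0, total_error_sd) for _ in range(5000)]

# (ii) "source-specific": dress each streamflow ensemble member (which
# already carries the meteorological uncertainty) with hydrological
# uncertainty only
members = [80.0, 95.0, 105.0, 88.0, 112.0]   # routed ensemble members
hydro_error_sd = 8.0                          # m3/s, hydrological error only
dressed = [m + random.gauss(0, hydro_error_sd)
           for m in members for _ in range(1000)]

print(f"lumped spread {statistics.pstdev(lumped):.1f}, "
      f"dressed spread {statistics.pstdev(dressed):.1f}")
```

With these made-up numbers the dressed ensemble is sharper (smaller spread) than the lumped dressing, illustrating the sharpness contrast noted above; reliability would of course need verifying observations to assess.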
Evaluation of Bayesian tensor estimation using tensor coherence
Kim, Dae-Jin; Kim, In-Young; Jeong, Seok-Oh; Park, Hae-Jeong
2009-06-21
Fiber tractography, a unique and non-invasive method to estimate axonal fibers within white matter, constructs putative streamlines from diffusion tensor MRI by interconnecting voxels according to the propagation direction defined by the diffusion tensor. This direction has uncertainties due to the properties of the underlying fiber bundles, neighboring structures and image noise. Therefore, robust estimation of the diffusion direction is essential to reconstruct reliable fiber pathways. For this purpose, we propose a tensor estimation method using a Bayesian framework, which includes an a priori probability distribution based on tensor coherence indices, to utilize both the neighborhood direction information and the inertia moment as regularization terms. The reliability of the proposed tensor estimation was evaluated using Monte Carlo simulations in terms of accuracy and precision, with four synthetic tensor fields at various SNRs and in vivo human data of brain and calf muscle. The proposed Bayesian estimation demonstrated relative robustness to noise and higher reliability compared with simple tensor regression.
Evaluation of the uncertainties associated to the in vivo monitoring of iodine-131 in the thyroid
Energy Technology Data Exchange (ETDEWEB)
Gontijo, Rodrigo Modesto Gadelha; Lucena, Eder Augusto; Dantas, Ana Leticia A.; Dantas, Bernardo Maranhao [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)
2011-07-01
The internal dose from the incorporation of radionuclides by humans can be estimated by in vivo direct measurements in the human body and in vitro analysis of biological indicators. In vivo techniques consist of the identification and quantification of radionuclides present in the whole body and in specific organs and tissues. The results obtained in measurements may present small uncertainties which are within pre-set limits in monitoring programs for occupationally exposed individuals. This study aims to evaluate the sources of uncertainty associated with the results of in vivo monitoring of iodine-131 in the thyroid. The benchmarks adopted in this study are based on the criteria suggested by the General Guide for Estimating Effective Doses from Monitoring Data (Project IDEAS/European Community). The reference values used were those for high-energy photons (>100 keV). Besides the parameters suggested by the IDEAS Guide, the fluctuation of the counts due to phantom repositioning, which represents the reproducibility of the counting geometry, was also evaluated. Measurements were performed at the Whole Body Counter Unit of the IRD using a 3'' x 3'' NaI(Tl) scintillation detector and a neck-thyroid phantom developed at the In Vivo Monitoring Laboratory of the IRD. This phantom contains a standard source of barium-133 added to a piece of filter paper with the dimension and shape of a thyroid gland. Scattering factors were calculated and compared in different counting geometries. The results show that the technique studied presents reproducibility equivalent to the values suggested in the IDEAS Guide and measurement uncertainties compatible with international quality standards for this type of in vivo monitoring. (author)
Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei
2016-04-01
In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data are correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best-practice knowledge can be limiting factors in correct rain gauge network management. In these cases, the accuracy of rain gauges can drop drastically and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors in the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through 1) block kriging on a single rain gauge, 2) ordinary kriging on a network of different rain gauges, and 3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high-quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower-quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all cases by increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher- and lower-quality rain gauges. For the kriging with
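The key device, inflating the semivariogram nugget by the estimated gauge error variance, can be sketched with an exponential semivariogram model; all parameter values below are made up:

```python
import math

def semivariance(h, nugget, partial_sill, range_):
    """Exponential semivariogram model; zero at lag zero by convention."""
    if h == 0.0:
        return 0.0
    return nugget + partial_sill * (1.0 - math.exp(-h / range_))

# Illustrative parameters: the lower-quality network gets its nugget
# inflated by the estimated gauge error variance (numbers are invented).
partial_sill, range_km = 4.0, 15.0     # mm^2, km
nugget_high_quality = 0.2              # mm^2, high-quality (KNMI-like) gauges
gauge_error_var = 1.1                  # mm^2, estimated low-quality gauge error
nugget_low_quality = nugget_high_quality + gauge_error_var

for h in (2.0, 10.0, 30.0):
    g_hi = semivariance(h, nugget_high_quality, partial_sill, range_km)
    g_lo = semivariance(h, nugget_low_quality, partial_sill, range_km)
    print(f"lag {h:4.0f} km: high-quality {g_hi:.2f}, low-quality {g_lo:.2f}")
```

Using separate semivariogram models per network, as in the study, then amounts to choosing a different nugget (and possibly sill/range) for each gauge set in the kriging system.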
Uncertainties in Decadal Model Evaluation due to the Choice of Different Reanalysis Products
Illing, Sebastian; Kadow, Christopher; Kunst, Oliver; Cubasch, Ulrich
2014-05-01
In recent years decadal predictions have become very popular in the climate science community. A major task is the evaluation and validation of a decadal prediction system. Therefore, hindcast experiments are performed and evaluated against observation-based or reanalysis data sets; that is, various metrics and skill scores, like the anomaly correlation or the mean squared error skill score (MSSS), are calculated to estimate the potential prediction skill of the model system. Our results will mostly feature the Baseline 1 hindcast experiments from the MiKlip decadal prediction system. MiKlip (www.fona-miklip.de) is a project for medium-term climate prediction funded by the Federal Ministry of Education and Research in Germany (BMBF) and aims to create a model system that can provide reliable decadal forecasts on climate and weather. There are various reanalysis and observation-based products covering at least the last forty years which can be used for model evaluation, for instance the 20th Century Reanalysis from NOAA-CIRES, the Climate Forecast System Reanalysis from NCEP or the Interim Reanalysis from ECMWF. Each of them is based on different climate models and observations. We will show that the choice of the reanalysis product has a huge impact on the value of various skill metrics. In some cases this may actually lead to a change in the interpretation of the results, e.g. when one tries to compare two model versions and the anomaly correlation difference changes its sign for two different reanalysis products. We will also show first results of our studies investigating the influence and effect of this source of uncertainty for decadal model evaluation. Furthermore, we point out regions which are most affected by this uncertainty and where one has to be cautious when interpreting skill scores. In addition, we introduce some strategies to overcome or at least reduce this source of uncertainty.
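The dependence of skill metrics on the chosen reference can be illustrated with the two metrics named above, scored against two hypothetical "reanalyses"; all series below are invented:

```python
import math

def anomaly_correlation(forecast, reference):
    """Centred anomaly correlation between a hindcast and a reference series."""
    n = len(forecast)
    fbar = sum(forecast) / n
    rbar = sum(reference) / n
    fa = [f - fbar for f in forecast]
    ra = [r - rbar for r in reference]
    num = sum(f * r for f, r in zip(fa, ra))
    den = math.sqrt(sum(f * f for f in fa) * sum(r * r for r in ra))
    return num / den

def msss(forecast, reference):
    """Mean squared error skill score against the reference climatology."""
    n = len(forecast)
    rbar = sum(reference) / n
    mse = sum((f - r) ** 2 for f, r in zip(forecast, reference)) / n
    mse_clim = sum((rbar - r) ** 2 for r in reference) / n
    return 1.0 - mse / mse_clim

# The same hindcast scored against two slightly disagreeing references
# yields noticeably different skill estimates:
hindcast = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6]
reanalysisA = [0.0, 0.25, 0.3, 0.45, 0.5, 0.55]
reanalysisB = [0.2, 0.1, 0.35, 0.4, 0.6, 0.5]
for name, ref in (("A", reanalysisA), ("B", reanalysisB)):
    print(f"reanalysis {name}: ACC={anomaly_correlation(hindcast, ref):.2f}, "
          f"MSSS={msss(hindcast, ref):.2f}")
```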
Alizadeh, Hosein; Mousavi, S. Jamshid
2013-03-01
This study addresses estimation of the net irrigation requirement over a growing season under climate uncertainty. An ecohydrological model, building upon the stochastic differential equation of soil moisture dynamics, is employed as a basis to derive new analytical expressions for estimating the seasonal net irrigation requirement probabilistically. Two distinct irrigation technologies are considered. For micro-irrigation technology, the probability density function of seasonal net irrigation depth (SNID) is derived by assessing the transient behavior of a stochastic process which is the time integral of a dichotomous Markov process. The probability mass function of SNID, which is a discrete random variable for traditional irrigation technology, is also presented using a marked renewal process with quasi-exponentially-distributed time intervals. Comparing the results obtained from the presented models with those resulting from a Monte Carlo approach verified the significance of the probabilistic expressions derived and the assumptions made.
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
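A minimal sketch of the approach: fit candidate probability density functions to depth observations by maximum likelihood and select among them with AIC. The depth data and the candidate set (normal vs lognormal) are ours for illustration, not the study's:

```python
import math

# Hypothetical water depths (m) used by juvenile fish, right-skewed
depths = [0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.6, 0.7, 0.8, 0.9, 1.1, 1.4]

def aic_normal(data):
    # ML fit of a normal PDF (2 parameters), then AIC = 2k - 2*log-likelihood
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    ll = sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
             for x in data)
    return 2 * 2 - 2 * ll

def aic_lognormal(data):
    # ML fit of a lognormal PDF (2 parameters) via the log-transformed data
    logs = [math.log(x) for x in data]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((l - mu) ** 2 for l in logs) / n
    ll = sum(-math.log(x) - 0.5 * math.log(2 * math.pi * var)
             - (math.log(x) - mu) ** 2 / (2 * var)
             for x in data)
    return 2 * 2 - 2 * ll

scores = {"normal": aic_normal(depths), "lognormal": aic_lognormal(depths)}
best = min(scores, key=scores.get)   # lower AIC = preferred candidate
print(scores, "->", best)
```

The selected PDF then serves directly as the HSC curve, with parameter standard errors (or a bootstrap) providing the estimation uncertainty the abstract mentions.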
Duchêne, Sebastian; Lanfear, Robert
2015-09-01
Ancestral state reconstruction (ASR) is a popular method for exploring the evolutionary history of traits that leave little or no trace in the fossil record. For example, it has been used to test hypotheses about the number of evolutionary origins of key life-history traits such as oviparity, or key morphological structures such as wings. Many studies that use ASR have suggested that the number of evolutionary origins of such traits is higher than was previously thought. The scope of such inferences is increasing rapidly, facilitated by the construction of very large phylogenies and life-history databases. In this paper, we use simulations to show that the number of evolutionary origins of a trait tends to be overestimated when the phylogeny is not perfect. In some cases, the estimated number of transitions can be several fold higher than the true value. Furthermore, we show that the bias is not always corrected by standard approaches to account for phylogenetic uncertainty, such as repeating the analysis on a large collection of possible trees. These findings have important implications for studies that seek to estimate the number of origins of a trait, particularly those that use large phylogenies that are associated with considerable uncertainty. We discuss the implications of this bias, and methods to ameliorate it. © 2015 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Gunter Spöck
2015-05-01
Full Text Available Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data is transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
On the representation and estimation of spatial uncertainty. [for mobile robot
Smith, Randall C.; Cheeseman, Peter
1987-01-01
This paper describes a general method for estimating the nominal relationship and expected error (covariance) between coordinate frames representing the relative locations of objects. The frames may be known only indirectly through a series of spatial relationships, each with its associated error, arising from diverse causes, including positioning errors, measurement errors, or tolerances in part dimensions. This estimation method can be used to answer such questions as whether a camera attached to a robot is likely to have a particular reference object in its field of view. The calculated estimates agree well with those from an independent Monte Carlo simulation. The method makes it possible to decide in advance whether an uncertain relationship is known accurately enough for some task and, if not, how much of an improvement in locational knowledge a proposed sensor will provide. The method presented can be generalized to six degrees of freedom and provides a practical means of estimating the relationships (position and orientation) among objects, as well as estimating the uncertainty associated with the relationships.
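The compounding of uncertain spatial relationships described above can be sketched with first-order (Jacobian) covariance propagation for planar (x, y, θ) frames; the poses, covariances, and variable names below are illustrative, not taken from the paper.

```python
import numpy as np

def compound(p1, cov1, p2, cov2):
    """First-order compounding of two uncertain planar poses (x, y, theta).

    Returns the composed pose and its covariance, assuming the two input
    poses are independent (Smith & Cheeseman style propagation).
    """
    x1, y1, t1 = p1
    x2, y2, t2 = p2
    c, s = np.cos(t1), np.sin(t1)
    p3 = np.array([x1 + c * x2 - s * y2,
                   y1 + s * x2 + c * y2,
                   t1 + t2])
    # Jacobians of the composition w.r.t. each input pose
    J1 = np.array([[1.0, 0.0, -s * x2 - c * y2],
                   [0.0, 1.0,  c * x2 - s * y2],
                   [0.0, 0.0,  1.0]])
    J2 = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    cov3 = J1 @ cov1 @ J1.T + J2 @ cov2 @ J2.T
    return p3, cov3

p1 = (1.0, 2.0, np.pi / 6)
p2 = (0.5, -0.3, np.pi / 12)
cov1 = np.diag([0.01, 0.01, 0.001])
cov2 = np.diag([0.02, 0.02, 0.002])
pose, cov = compound(p1, cov1, p2, cov2)
```

Because the heading enters the composition additively, its variance is exactly the sum of the input heading variances (here 0.003), while the positional block mixes both covariances through the rotation; a Monte Carlo simulation over perturbed poses can be used to check the linearized result, as the paper does.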
Evaluating variability and uncertainty in radiological impact assessment using SYMBIOSE.
Simon-Cornu, M; Beaugelin-Seiller, K; Boyer, P; Calmon, P; Garcia-Sanchez, L; Mourlon, C; Nicoulaud, V; Sy, M; Gonze, M A
2015-01-01
SYMBIOSE is a modelling platform that accounts for variability and uncertainty in radiological impact assessments, when simulating the environmental fate of radionuclides and assessing doses to human populations. The default database of SYMBIOSE is partly based on parameter values that are summarized within International Atomic Energy Agency (IAEA) documents. To characterize uncertainty in the transfer parameters, 331 Probability Distribution Functions (PDFs) were defined from the summary statistics provided within the IAEA documents (i.e. sample size, minimum and maximum values, arithmetic and geometric means, standard and geometric standard deviations) and are made available as spreadsheet files. The methods used to derive the PDFs without complete data sets, but merely the summary statistics, are presented. Then, a simple case-study illustrates the use of the database in a second-order Monte Carlo calculation, separating parametric uncertainty and inter-individual variability. Copyright © 2014 Elsevier Ltd. All rights reserved.
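The second-order (nested) Monte Carlo scheme mentioned above can be sketched as two loops: an outer loop sampling uncertain transfer parameters and an inner loop sampling inter-individual variability. All distributions and numbers below are illustrative placeholders, not SYMBIOSE defaults.

```python
import numpy as np

rng = np.random.default_rng(42)

N_OUTER = 200   # parametric-uncertainty draws (transfer parameter)
N_INNER = 500   # inter-individual variability draws (intake rate)

# Outer loop: sample an uncertain soil-to-plant transfer factor from a
# lognormal PDF (illustrative geometric mean and geometric sd).
transfer = rng.lognormal(mean=np.log(0.05), sigma=np.log(2.0), size=N_OUTER)

mean_doses = np.empty(N_OUTER)
for i, tf in enumerate(transfer):
    # Inner loop: variability of individual ingestion rates (kg/day)
    intake = rng.lognormal(mean=np.log(0.3), sigma=0.4, size=N_INNER)
    soil_activity = 1000.0          # Bq/kg, assumed fixed here
    dose_coeff = 1.3e-8             # Sv/Bq, illustrative coefficient
    dose = soil_activity * tf * intake * 365.0 * dose_coeff
    mean_doses[i] = dose.mean()

# The spread of mean_doses reflects parametric uncertainty only; the spread
# within each inner sample reflects inter-individual variability.
lo, hi = np.percentile(mean_doses, [2.5, 97.5])
```

Keeping the two loops separate is what allows the two kinds of spread to be reported independently, rather than mixed into a single distribution.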
Systematic Evaluation of Uncertainty in Material Flow Analysis
DEFF Research Database (Denmark)
Laner, David; Rechberger, Helmut; Astrup, Thomas Fruergaard
2014-01-01
Material flow analysis (MFA) is a tool to investigate material flows and stocks in defined systems as a basis for resource management or environmental pollution control. Because of the diverse nature of sources and the varying quality and availability of data, MFA results are inherently uncertain...... in MFA. Based on this, recommendations for consideration of uncertainty in MFA are provided. A five-step framework for uncertainty handling is outlined, reflecting aspects such as data quality and goal/scope of the MFA. We distinguish between descriptive (quantification of material turnover in a region...... for exploratory MFAs. Irrespective of the level of sophistication, lack of information about MFA data poses a major challenge for meaningful uncertainty analysis. The step-wise framework suggested here provides a systematic way to consider available information and produce results as precise as the data warrant....
A Quantitative Measure For Evaluating Project Uncertainty Under Variation And Risk Effects
Directory of Open Access Journals (Sweden)
A. Chenarani
2017-10-01
Full Text Available The effects of uncertainty on a project and the risk event as the consequence of uncertainty are analyzed. The uncertainty index is proposed as a quantitative measure for evaluating the uncertainty of a project. This is done by employing entropy as the indicator of system disorder and lack of information. By employing this index, the uncertainty of each activity and its increase due to risk effects as well as project uncertainty changes as a function of time can be assessed. The results are implemented and analyzed for a small turbojet engine development project as the case study. The results of this study can be useful for project managers and other stakeholders for selecting the most effective risk management and uncertainty controlling method.
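An entropy-based uncertainty index of the kind described above can be sketched for a single activity as normalized Shannon entropy over its discretized outcomes; the normalization choice and the numbers are our illustration, not necessarily the paper's formulation.

```python
import math

def uncertainty_index(probs):
    """Normalized Shannon entropy of a discrete outcome distribution.

    Returns a value in [0, 1]: 0 for a fully determined activity,
    1 for maximal disorder (uniform distribution over outcomes).
    """
    n = len(probs)
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(n) if n > 1 else 0.0

# Activity duration spread over 4 possible outcomes, before and after a
# risk event widens the distribution (illustrative numbers).
before = [0.7, 0.2, 0.07, 0.03]
after = [0.4, 0.3, 0.2, 0.1]
```

With these numbers the index increases after the risk event, matching the intuition that risk effects increase an activity's disorder and lack of information.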
Näykki, Teemu; Virtanen, Atte; Kaukonen, Lari; Magnusson, Bertil; Väisänen, Tero; Leito, Ivo
2015-10-01
Field sensor measurements are becoming more common for environmental monitoring. Solutions for enhancing reliability, i.e. knowledge of the measurement uncertainty of field measurements, are urgently needed. Real-time estimations of measurement uncertainty for field measurement have not previously been published, and in this paper, a novel approach to the automated turbidity measuring system with an application for "real-time" uncertainty estimation is outlined based on the Nordtest handbook's measurement uncertainty estimation principles. The term real-time is written in quotation marks, since the calculation of the uncertainty is carried out using a set of past measurement results. There are two main requirements for the estimation of real-time measurement uncertainty of online field measurement described in this paper: (1) setting up an automated measuring system that can be (preferably remotely) controlled which measures the samples (water to be investigated as well as synthetic control samples) the way the user has programmed it and stores the results in a database, (2) setting up automated data processing (software) where the measurement uncertainty is calculated from the data produced by the automated measuring system. When control samples with a known value or concentration are measured regularly, any instrumental drift can be detected. An additional benefit is that small drift can be taken into account (in real-time) as a bias value in the measurement uncertainty calculation, and if the drift is high, the measurement results of the control samples can be used for real-time recalibration of the measuring device. The procedure described in this paper is not restricted to turbidity measurements, but it will enable measurement uncertainty estimation for any kind of automated measuring system that performs sequential measurements of routine samples and control samples/reference materials in a similar way as described in this paper.
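The Nordtest-handbook recipe referred to above combines a within-laboratory reproducibility component with a bias component estimated from the control-sample history. A compressed sketch, with made-up turbidity readings and an assumed reference uncertainty:

```python
import statistics as st

# Control-sample history from the automated station (made-up NTU values)
control = [4.9, 5.1, 5.0, 4.8, 5.2, 5.1, 4.9, 5.0]
reference = 5.0          # certified value of the control sample, NTU
u_cref_rel = 1.0         # standard uncertainty of the reference, in %

mean_c = st.mean(control)
# Within-lab reproducibility from control-sample scatter (relative %)
u_rw = 100.0 * st.stdev(control) / mean_c
# Bias component: observed drift/bias plus reference uncertainty (relative %)
bias = 100.0 * (mean_c - reference) / reference
u_bias = (bias ** 2 + u_cref_rel ** 2) ** 0.5
# Combined and expanded (k = 2, approx. 95 %) uncertainty
u_c = (u_rw ** 2 + u_bias ** 2) ** 0.5
U = 2.0 * u_c
```

Recomputing these quantities over a sliding window of past results is what makes the estimate "real-time" in the sense described above: as drift appears in the control-sample mean, the bias term grows and the reported uncertainty widens automatically.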
On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo
Icardi, Matteo
2016-02-08
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [2015, https://bitbucket.org/micardi/porescalemc], which includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1], which includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The
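The telescoping-sum idea behind multilevel Monte Carlo can be sketched on a toy problem rather than a pore-scale solver: here level l halves the Euler time step for a geometric Brownian motion, and coupled fine/coarse paths share the same Brownian increments so that the correction terms have small variance. This is a generic MLMC illustration, not the authors' porescalemc workflow.

```python
import numpy as np

rng = np.random.default_rng(7)
S0, r, sigma, T = 1.0, 0.05, 0.2, 1.0

def euler_pair(n_samples, level):
    """Coupled fine/coarse Euler estimates of S(T) at one MLMC level."""
    nf = 2 ** level                      # number of fine time steps
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(n_samples, nf))
    Sf = np.full(n_samples, S0)
    for k in range(nf):
        Sf = Sf * (1 + r * hf + sigma * dW[:, k])
    if level == 0:
        return Sf, np.zeros(n_samples)
    hc = T / (nf // 2)
    Sc = np.full(n_samples, S0)
    for k in range(nf // 2):             # coarse path reuses the same noise
        Sc = Sc * (1 + r * hc + sigma * (dW[:, 2 * k] + dW[:, 2 * k + 1]))
    return Sf, Sc

L, N0 = 4, 40000
estimate = 0.0
for level in range(L + 1):
    n = max(N0 // 4 ** level, 200)       # fewer samples on costly fine levels
    fine, coarse = euler_pair(n, level)
    estimate += np.mean(fine - coarse)   # telescoping correction terms
```

Most of the samples are spent on the cheap coarse level, while the expensive fine levels only need a few samples because the coupled differences are small; this is the source of the drastic cost reduction claimed above. For this toy SDE the exact mean is S0·exp(rT).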
The Intolerance of Uncertainty Scale for Children: A Psychometric Evaluation
Comer, Jonathan S.; Roy, Amy K.; Furr, Jami M.; Gotimer, Kristin; Beidas, Rinad S.; Dugas, Michel J.; Kendall, Philip C.
2009-01-01
Intolerance of uncertainty (IU) has contributed to our understanding of excessive worry and adult anxiety disorders, but there is a paucity of research on IU in child samples. This gap is due to the absence of a psychometrically sound measure of IU in youth. The present study adapted parallel child- and parent-report forms of the Intolerance of…
Yuan, Fei; Zhao, Chongxu; Jiang, Yong; Ren, Liliang; Shan, Hongcui; Zhang, Limin; Zhu, Yonghua; Chen, Tao; Jiang, Shanhu; Yang, Xiaoli; Shen, Hongren
2017-11-01
Projections of hydrological changes are associated with large uncertainties from different sources, which should be quantified for an effective implementation of water management policies adaptive to future climate change. In this study, a modeling chain framework to project future hydrological changes and the associated uncertainties in the Xijiang River basin, South China, was established. The framework consists of three emission scenarios (ESs), four climate models (CMs), four statistical downscaling (SD) methods, four hydrological modeling (HM) schemes, and four probability distributions (PDs) for extreme flow frequency analyses. Direct variance method was adopted to analyze the manner by which uncertainty sources such as ES, CM, SD, and HM affect the estimates of future evapotranspiration (ET) and streamflow, and to quantify the uncertainties of PDs in future flood and drought risk assessment. Results show that ES is one of the least important uncertainty sources in most situations. CM, in general, is the dominant uncertainty source for the projections of monthly ET and monthly streamflow during most of the annual cycle, daily streamflow below the 99.6% quantile level, and extreme low flow. SD is the most predominant uncertainty source in the projections of extreme high flow, and has a considerable percentage of uncertainty contribution in monthly streamflow projections in July-September. The effects of SD in other cases are negligible. HM is a non-ignorable uncertainty source that has the potential to produce much larger uncertainties for the projections of low flow and ET in warm and wet seasons than for the projections of high flow. PD contributes a larger percentage of uncertainty in extreme flood projections than it does in extreme low flow estimates. Despite the large uncertainties in hydrological projections, this work found that future extreme low flow would undergo a considerable reduction, and a noticeable increase in drought risk in the Xijiang
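A direct-variance style attribution over such a modeling chain can be sketched by building a projection cube indexed over every combination of emission scenario, climate model, downscaling method and hydrological model, and taking each source's share as the variance of its main-effect means relative to the total variance. The cube below is a hypothetical additive construction, not the study's data.

```python
import numpy as np

# Hypothetical projection cube: 3 ESs x 4 CMs x 4 SDs x 4 HMs
# (streamflow change, %). CM is given the largest effects on purpose.
es = np.array([-0.5, 0.0, 0.5]).reshape(3, 1, 1, 1)
cm = np.array([-6.0, -2.0, 2.0, 6.0]).reshape(1, 4, 1, 1)
sd = np.array([-1.0, -0.3, 0.3, 1.0]).reshape(1, 1, 4, 1)
hm = np.array([-2.0, -0.7, 0.7, 2.0]).reshape(1, 1, 1, 4)
proj = 10.0 + es + cm + sd + hm        # additive toy model, no interactions

def share(cube, axis):
    """Variance of the main-effect means along one source axis,
    as a fraction of the total variance."""
    other = tuple(a for a in range(cube.ndim) if a != axis)
    main = cube.mean(axis=other)       # mean projection per source member
    return main.var() / cube.var()

shares = {name: share(proj, ax)
          for ax, name in enumerate(["ES", "CM", "SD", "HM"])}
```

Because the toy cube is purely additive with zero-mean effects, the four shares sum to one; with interactions present (as in real modeling chains) a residual interaction term appears as well.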
Energy Technology Data Exchange (ETDEWEB)
Andres, T.H
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
Directory of Open Access Journals (Sweden)
Razi Ahmed
2013-06-01
Full Text Available Estimates of above ground biomass density in forests are crucial for refining global climate models and understanding climate change. Although data from field studies can be aggregated to estimate carbon stocks on global scales, the sparsity of such field data, temporal heterogeneity and methodological variations introduce large errors. Remote sensing measurements from spaceborne sensors are a realistic alternative for global carbon accounting; however, the uncertainty of such measurements is not well known and remains an active area of research. This article describes an effort to collect field data at the Harvard and Howland Forest sites, set in the temperate forests of the Northeastern United States in an attempt to establish ground truth forest biomass for calibration of remote sensing measurements. We present an assessment of the quality of ground truth biomass estimates derived from three different sets of diameter-based allometric equations over the Harvard and Howland Forests to establish the contribution of errors in ground truth data to the error in biomass estimates from remote sensing measurements.
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis.
Markiewicz, P J; Thielemans, K; Schott, J M; Atkinson, D; Arridge, S R; Hutton, B F; Ourselin, S
2016-07-07
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of ¹⁸F-florbetapir using the Siemens Biograph mMR scanner.
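The bootstrap idea underlying the variance estimates above is to resample the recorded events with replacement and recompute the statistic for each realisation; its spread over realisations estimates the statistic's uncertainty. A toy event list stands in for real PET list-mode data, and the region tags and proportions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "list-mode" stream: each event tagged with a region of interest,
# 0 = reference region, 1 = target region (illustrative proportions).
n_events = 100_000
events = rng.choice([0, 1], size=n_events, p=[0.4, 0.6])

def ratio(ev):
    """Target-to-reference count ratio, a stand-in for a SUVR-like statistic."""
    target = np.count_nonzero(ev == 1)
    return target / (len(ev) - target)

n_boot = 200
boot = np.empty(n_boot)
for b in range(n_boot):
    resampled = events[rng.integers(0, n_events, size=n_events)]
    boot[b] = ratio(resampled)

estimate, stderr = boot.mean(), boot.std(ddof=1)
```

Each bootstrap realisation here is trivially cheap; in the list-mode setting each resampled event stream must still pass through histogramming and reconstruction, which is why GPU acceleration of that chain is the paper's focus.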
Energy Technology Data Exchange (ETDEWEB)
Stewart, Robert N [ORNL; White, Devin A [ORNL; Urban, Marie L [ORNL; Morton, April M [ORNL; Webster, Clayton G [ORNL; Stoyanov, Miroslav K [ORNL; Bright, Eddie A [ORNL; Bhaduri, Budhendra L [ORNL
2013-01-01
The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
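The moment-matching flavour of such an encoding can be sketched by turning two non-statistical answers, a typical proportion and a standard-deviation-like spread, into Beta parameters. This mapping is our simplified illustration of eliciting a Beta prior, not the paper's bivariate Gaussian algorithm.

```python
def beta_from_elicitation(typical, spread):
    """Moment-match a Beta(alpha, beta) prior for a proportion.

    `typical` is the contributor's best guess for the proportion and
    `spread` a standard-deviation-like half-width; both lie in (0, 1).
    """
    if not 0 < typical < 1:
        raise ValueError("typical must lie strictly between 0 and 1")
    nu = typical * (1 - typical) / spread ** 2 - 1   # effective sample size
    if nu <= 0:
        raise ValueError("spread too large for this typical value")
    return typical * nu, (1 - typical) * nu

alpha, beta = beta_from_elicitation(0.3, 0.1)
# Recovered moments: mean = a/(a+b), var = ab / ((a+b)^2 (a+b+1))
mean = alpha / (alpha + beta)
var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
```

Letting the contributor see the implied distribution and adjust their answers, as the paper recommends, guards against the elicited spread being infeasible for the chosen typical value (the `nu <= 0` branch above).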
Mackenzie, Alistair; Eales, Timothy D; Dunn, Hannah L; Yip Braidley, Mary; Dance, David R; Young, Kenneth C
2017-07-01
To demonstrate a method of simulating mammography images of the CDMAM phantom and to investigate the coefficient of variation (CoV) in the threshold gold thickness (t_T) measurements associated with use of the phantom. The noise and sharpness of Hologic Dimensions and GE Essential mammography systems were characterized to provide data for the simulation. The simulation method was validated by comparing the t_T results of real and simulated images of the CDMAM phantom for three different doses and the two systems. The detection matrices produced from each of 64 images using CDCOM software were randomly resampled to create 512 sets of 8, 16 and 32 images to estimate the CoV of t_T. Sets of simulated images for a range of doses were used to estimate the CoVs for a range of diameters and threshold thicknesses. No significant differences were found for t_T or the CoV between real and simulated CDMAM images. It was shown that resampling from 256 images was required for estimating the CoV. The CoV was around 4% using 16 images for most of the phantom but is over double that for details near the edge of the phantom. We have demonstrated a method to simulate images of the CDMAM phantom for different systems at a range of doses. We provide data for calculating uncertainties in t_T. Any future review of the European guidelines should take into consideration the calculated uncertainties for the 0.1 mm detail. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Almosallam, Ibrahim A.; Jarvis, Matt J.; Roberts, Stephen J.
2016-10-01
The next generation of cosmology experiments will be required to use photometric redshifts rather than spectroscopic redshifts. Obtaining accurate and well-characterized photometric redshift distributions is therefore critical for Euclid, the Large Synoptic Survey Telescope and the Square Kilometre Array. However, determining accurate variance predictions alongside single point estimates is crucial, as they can be used to optimize the sample of galaxies for the specific experiment (e.g. weak lensing, baryon acoustic oscillations, supernovae), trading off between completeness and reliability in the galaxy sample. The various sources of uncertainty in measurements of the photometry and redshifts put a lower bound on the accuracy that any model can hope to achieve. The intrinsic uncertainty associated with estimates is often non-uniform and input-dependent, commonly known in statistics as heteroscedastic noise. However, existing approaches are susceptible to outliers and do not take into account variance induced by non-uniform data density, and in most cases require manual tuning of many parameters. In this paper, we present a Bayesian machine learning approach that jointly optimizes the model with respect to both the predictive mean and variance, which we refer to as Gaussian processes for photometric redshifts (GPZ). The predictive variance of the model takes into account both the variance due to data density and photometric noise. Using the Sloan Digital Sky Survey (SDSS) DR12 data, we show that our approach substantially outperforms other machine learning methods for photo-z estimation and their associated variance, such as TPZ and ANNZ2. We provide MATLAB and Python implementations that are available to download at https://github.com/OxfordML/GPz.
Assessing uncertainties in GHG emission estimates from Canada's oil sands developments
Kim, M. G.; Lin, J. C.; Huang, L.; Edwards, T. W.; Worthy, D.; Wang, D. K.; Sweeney, C.; White, J. W.; Andrews, A. E.; Bruhwiler, L.; Oda, T.; Deng, F.
2013-12-01
Reducing uncertainties in projections of surface emissions of CO2 and CH4 relies on continuously improving our scientific understanding of the exchange processes between the atmosphere and land at regional scales. In order to enhance our understanding in emission processes and atmospheric transports, an integrated framework that addresses individual natural and anthropogenic factors in a complementary way proves to be invaluable. This study presents an example of top-down inverse modeling that utilizes high precision measurement data collected at a Canadian greenhouse gas monitoring site. The measurements include multiple tracers encompassing standard greenhouse gas species, stable isotopes of CO2, and combustion-related species. The potential for the proposed analysis framework is demonstrated using Stochastic Time-Inverted Lagrangian Transport (STILT) model runs to yield a unique regional-scale constraint that can be used to relate the observed changes of tracer concentrations to the processes in their upwind source regions. The uncertainties in emission estimates are assessed using different transport fields and background concentrations coupled with the STILT model. Also, methods to further reduce uncertainties in the retrieved emissions by incorporating additional constraints including tracer-to-tracer correlations and satellite measurements are briefly discussed. The inversion approach both reproduces source areas in a spatially explicit way through sophisticated Lagrangian transport modeling and infers emission processes that leave imprints on atmospheric tracers. The results indicate that the changes in greenhouse gas concentration are strongly influenced by regional sources, including significant contributions from fossil fuel emissions, and that the integrated approach can be used for regulatory regimes to verify reported emissions of the greenhouse gas from oil sands developments.
Anbumani, Surega; Arunai Nambi Raj, N; S Prabhakar, Girish; Anchineyan, Pichandi; Bilimagga, Ramesh S; Palled, Siddanna R; Chairmadhurai, Arun
2014-01-01
In Intensity Modulated Radiation Therapy (IMRT), dose distributions tend to be more complex and heterogeneous because of the modulated fluences in each beamlet of every single beam. Dose-volume (DV) parameters derived from the dose-volume histogram (DVH) are physical quantities thought to correlate with the biological response of the tissues. The aim of this study was to quantify the uncertainty of physical dose metrics in predicting clinical outcomes of radiotherapy. Radiobiological estimates such as tumor control probability (TCP) and Normal Tissue Complication Probability (NTCP) were made for a cohort of 40 cancer patients (10 brain; 19 head and neck; 11 cervix) using the DV parameters. Statistical analysis was performed to determine the correlation of physical plan quality indicators with radiobiological estimates. The correlation between conformity index (CI) and TCP was found to be good, and the dosimetric parameters for optic nerves, optic chiasm, brain stem, normal brain and parotids correlated well with the NTCP estimates. A follow-up study (median duration 18 months) was also performed. There were no grade 3 or 4 normal tissue complications observed. Local tumor control was found to be higher in brain (90%) and pelvic cases (95%), whereas a lower rate of 70% was noted for head and neck cancer cases. The equivalent uniform dose (EUD) concept of the radiobiological model used in the software determines TCP and NTCP values, which can predict outcomes precisely using DV data at the voxel level. The uncertainty of using physical dose metrics for plan evaluation is quantified with the statistical analysis. Radiobiological evaluation is also helpful in ranking rival treatment plans.
Zarlenga, Antonio; de Barros, Felipe; Fiori, Aldo
2016-04-01
We present a probabilistic framework for assessing human health risk due to groundwater contamination. Our goal is to quantify how physical hydrogeological and biochemical parameters control the magnitude and uncertainty of human health risk. Our methodology captures the whole risk chain from the aquifer contamination to the tap-water consumption by the human population. The contaminant concentration, the key parameter for the risk estimation, is governed by the interplay between the large-scale advection, caused by heterogeneity, and the degradation processes strictly related to the local-scale dispersion processes. The core of the hazard identification and of the methodology is the reactive transport model: erratic displacement of contaminant in groundwater, due to the spatial variability of hydraulic conductivity (K), is characterized by a first-order Lagrangian stochastic model; different dynamics are considered as possible ways of biodegradation in aerobic and anaerobic conditions. With the goal of quantifying uncertainty, the Beta distribution is assumed for the concentration probability density function (pdf) model, while different levels of approximation are explored for the estimation of the one-point concentration moments. The information pertaining to the flow and transport is connected with a proper dose-response assessment, which generally involves the estimation of physiological parameters of the exposed population. Human health response depends on the exposed individual's metabolism (e.g. variability) and is subject to uncertainty. Therefore, the health parameters are intrinsically stochastic. As a consequence, we provide an integrated global probabilistic human health risk framework which allows the propagation of the uncertainty from multiple sources. The final result, the health risk pdf, is expressed as a function of a few relevant, physically-based parameters such as the size of the injection area, the Péclet number, the K structure metrics and
[Evaluation of measurement uncertainty of welding fume in welding workplace of a shipyard].
Ren, Jie; Wang, Yanrang
2015-12-01
To evaluate the measurement uncertainty of welding fume in the air of the welding workplace of a shipyard, and to provide quality assurance for measurement. According to GBZ/T 192.1-2007 "Determination of dust in the air of workplace-Part 1: Total dust concentration" and JJF 1059-1999 "Evaluation and expression of measurement uncertainty", the uncertainty for determination of welding fume was evaluated and the measurement results were completely described. The concentration of welding fume was 3.3 mg/m³, and the expanded uncertainty was 0.24 mg/m³. The repeatability for determination of dust concentration introduced an uncertainty of 1.9%, the measurement using electronic balance introduced a standard uncertainty of 0.3%, and the measurement of sample quality introduced a standard uncertainty of 3.2%. During the determination of welding fume, the standard uncertainty introduced by the measurement of sample quality is the dominant uncertainty. In the process of sampling and measurement, quality control should be focused on the collection efficiency of dust, air humidity, sample volume, and measuring instruments.
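The budget quoted above combines the relative standard uncertainties in quadrature and expands with a coverage factor; a short sketch reproducing that arithmetic, with the component values taken from the abstract and a coverage factor k = 2 assumed:

```python
# Relative standard uncertainties from the abstract (percent)
u_repeatability = 1.9   # repeatability of dust concentration determination
u_balance = 0.3         # electronic balance measurement
u_sample = 3.2          # measurement of sample quality (dominant term)

# Combined relative standard uncertainty (root sum of squares)
u_combined = (u_repeatability ** 2 + u_balance ** 2 + u_sample ** 2) ** 0.5
concentration = 3.3                                  # mg/m^3
U_expanded = 2 * u_combined / 100 * concentration    # k = 2, ~95 % coverage
```

This gives a combined relative uncertainty of about 3.7% and an expanded uncertainty of about 0.25 mg/m³, consistent (to rounding) with the 0.24 mg/m³ reported, and makes visible that the sample-quality term dominates the quadrature sum.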
Directory of Open Access Journals (Sweden)
M. P. Mittermaier
2008-05-01
Full Text Available A simple measure of the uncertainty associated with using radar-derived rainfall estimates as "truth" has been introduced to the Numerical Weather Prediction (NWP) verification process to assess the effect on forecast skill and errors. Deterministic precipitation forecasts from the mesoscale version of the UK Met Office Unified Model for a two-day high-impact event and for a month were verified at the daily and six-hourly time scale using a spatially-based intensity-scale method and various traditional skill scores such as the Equitable Threat Score (ETS) and log-odds ratio. Radar-rainfall accumulations from the UK Nimrod radar-composite were used.
The results show that the inclusion of uncertainty has some effect, shifting the forecast errors and skill. The study also allowed for the comparison of results from the intensity-scale method and traditional skill scores. It showed that the two methods complement each other, one detailing the scale and rainfall accumulation thresholds where the errors occur, the other showing how skillful the forecast is. It was also found that for the six-hourly forecasts the error distributions remain similar with forecast lead time but skill decreases. This highlights the difference between forecast error and forecast skill, and that they are not necessarily the same.
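The traditional scores mentioned above are computed from a 2x2 forecast/observation contingency table. A sketch with made-up counts (not the study's data):

```python
import math

def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS (Gilbert skill score): hit fraction corrected for chance hits."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n  # expected by chance
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

def log_odds_ratio(hits, misses, false_alarms, correct_negatives):
    """ln(OR); positive when hits and correct negatives dominate the errors."""
    return math.log((hits * correct_negatives) / (misses * false_alarms))

print(equitable_threat_score(50, 20, 30, 100))  # ETS in [-1/3, 1]; 1 is perfect
print(log_odds_ratio(50, 20, 30, 100))
```

Radar "truth" uncertainty can then be folded in by recounting the table with perturbed rainfall thresholds and noting how the scores shift.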
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts model calibration and prediction uncertainty estimation. This study compares three methods of dealing with heteroscedasticity: the explicit linear modeling (LM) method, the nonlinear modeling (NL) method using a hyperbolic tangent function, and the implicit Box-Cox transformation (BC). A combined approach (CA), which combines the advantages of the LM and BC methods, is then proposed. In conjunction with a first-order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP, and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that LM-SEP yields the poorest streamflow predictions, with the widest uncertainty band and unrealistic negative flows. The NL and BC methods deal better with the heteroscedasticity and hence improve predictive performance, yet the negative flows cannot be avoided. CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
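A minimal sketch of the implicit Box-Cox (BC) treatment, assuming a fixed transformation parameter lam (in practice lam is often inferred jointly with the model, and the residual model is then built in the transformed space):

```python
import math

def box_cox(q, lam=0.2):
    """Box-Cox transform of flows q; lam -> 0 recovers the log transform.
    Applied before computing residuals so their variance no longer grows
    with flow magnitude (the heteroscedasticity discussed above)."""
    if lam == 0:
        return [math.log(x) for x in q]
    return [(x**lam - 1.0) / lam for x in q]

def inv_box_cox(z, lam=0.2):
    """Back-transform from transformed space to flow space."""
    if lam == 0:
        return [math.exp(x) for x in z]
    return [(lam * x + 1.0)**(1.0 / lam) for x in z]

flows = [0.5, 5.0, 50.0]          # m^3/s, illustrative
print(inv_box_cox(box_cox(flows)))  # round-trips to the original flows
```

Note that back-transforming an uncertainty band with small lam can still produce near-zero lower bounds; avoiding negative flows entirely is what motivates the combined CA approach.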
Directory of Open Access Journals (Sweden)
Dantas C.C.
2013-01-01
Full Text Available The solid flow in an air-catalyst circulating fluidized bed was simulated with a CFD model to obtain axial and radial distributions. Project parameters were thereby confirmed and the steady-state operating condition was improved. Simulated axial and radial solid hold-up profiles are in good agreement with gamma-transmission measurements. The transmission signal from a 241Am radioactive source was evaluated with a NaI(Tl) detector coupled to a multichannel analyzer. This non-intrusive measuring set-up is installed at the riser of a cold pilot unit to determine parameters of FCC catalyst flow at several concentrations. The mass flow rate, calculated by combining solid hold-up and solid-phase velocity measurements, was compared with the catalyst inlet flow measured at the down-comer. Evaluation of each measured parameter shows an estimated relative combined uncertainty of 6% at a 95% confidence interval. The uncertainty analysis took into account a significant correlation in the riser-scan transmission measurements. An Eulerian CFD model incorporating the kinetic theory of granular flow was adopted to describe the gas-solid two-phase flow in a multizone circulating reactor. Instantaneous and local gas-particle velocity, void fraction, and turbulence parameters were obtained, and results are shown in 2D and 3D graphics.
Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?
Geier, J.; Voss, C.I.; Dverstorp, B.
2002-01-01
A simple evaluation can be used to characterise the capacity of crystalline bedrock to act as a barrier to releases of radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated from fundamental principles and idealised models of pore geometry. Application to an intensively characterised site in Sweden shows that, owing to high spatial variability and uncertainty regarding the properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geologic-barrier performance. Comparison with seven other, less intensively characterised crystalline study sites in Sweden leads to similar results, raising the question of to what extent the geological barrier function can be characterised by state-of-the-art site investigation methods prior to repository construction. A simple evaluation thus provides a robust and practical approach for inclusion in performance assessment.
Cook, Bruce Douglas
NASA satellites Terra and Aqua orbit the Earth every 100 minutes and collect data that are used to compute an 8-day time series of gross photosynthesis and annual plant production for each square kilometer of the Earth's surface. This is a remarkable technological and scientific achievement that permits continuous monitoring of plant production and quantification of CO2 fixed by the terrestrial biosphere. It also allows natural resource scientists and practitioners to identify global trends associated with land cover/use and climate change. Satellite-derived estimates of photosynthesis and plant production from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) generally agree with independent measurements from validation sites across the globe, but local biases and spatial uncertainties exist at the regional scale. This dissertation evaluates three sources of uncertainty associated with MODIS algorithms in the Great Lakes Region and evaluates LiDAR (Light Detection and Ranging) remote sensing as a method for improving model inputs. Chapter 1 examines the robustness of model parameters and errors resulting from canopy disturbances, which were assessed by inversion of flux tower observations during a severe outbreak of forest tent caterpillars. Chapter 2 examines model logic errors in wetland ecosystems, focusing on surface water table fluctuations as a potential constraint on photosynthesis that is not accounted for in the MODIS algorithm. Chapter 3 examines errors associated with pixel size and poor state data, using fine-spatial-resolution LiDAR and multispectral satellite data to derive estimates of plant production across a heterogeneous landscape in northern Wisconsin. Together, these papers indicate that light- and carbon-use efficiency models driven by remote sensing and surface meteorology data are capable of providing accurate estimates of plant production within stands and across landscapes of the Great Lakes Region. It is demonstrated that model
Directory of Open Access Journals (Sweden)
Maria Isabel Neria-Gonzále
2015-04-01
Full Text Available The main goal of this work is to present an alternative design of a nonlinear controller for tracking trajectories in a class of continuous bioreactors. It is assumed that the reaction rate of the controlled variable is unknown; therefore, an uncertainty estimator is proposed to infer this important term, and the observer is coupled with a class of nonlinear feedback. The considered controller contains a continuous sigmoid feedback term in order to provide a smooth closed-loop response of the considered bioreactor. A kinetic model of a sulfate-reducing system is experimentally corroborated and employed as a benchmark for further modeling and simulation of the continuous operation. A linear PI controller, a sliding-mode controller, and the proposed controller are compared, and it is shown that the proposed controller yields the best performance. The closed-loop behavior of the process is analyzed via numerical experiments.
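The paper's actual controller is not reproduced here, but the role of a continuous sigmoid feedback can be illustrated on a toy first-order plant. Everything below (plant, gains, feedforward term) is an illustrative assumption, not the bioreactor model: the tanh replaces the discontinuous sign function of classical sliding mode, giving a bounded, chattering-free control that still drives the error to zero.

```python
import math

def simulate(setpoint=2.0, gain=1.5, eps=0.2, dt=0.01, steps=2000):
    """Toy first-order plant dx/dt = -x + u under smooth sigmoid feedback
    u = setpoint + gain * tanh(e / eps). Smaller eps approaches the
    discontinuous sliding-mode law sign(e); tanh keeps u continuous."""
    x = 0.0
    for _ in range(steps):
        e = setpoint - x                       # tracking error
        u = setpoint + gain * math.tanh(e / eps)
        x += dt * (-x + u)                     # forward-Euler integration
    return x

print(round(simulate(), 3))  # -> 2.0 (converges to the setpoint)
```

At equilibrium the sigmoid term forces e = 0, so the output settles exactly on the setpoint without the high-frequency switching a sign-type law would exhibit.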
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik
2008-01-01
within the context of Monte Carlo (MC) analysis coupled with Bayesian estimation and propagation of uncertainty. Because of its flexibility, ease of implementation and its suitability for parallel implementation on distributed computer systems, the GLUE method has been used in a wide variety...... that require significant computational time to run and produce the desired output. In this paper we improve the computational efficiency of GLUE by sampling the prior parameter space using an adaptive Markov Chain Monte Carlo scheme (the Shuffled Complex Evolution Metropolis (SCEM-UA) algorithm). Moreover, we......In the last few decades hydrologists have made tremendous progress in using dynamic simulation models for the analysis and understanding of hydrologic systems. However, predictions with these models are often deterministic and as such they focus on the most probable forecast, without an explicit...
DEFF Research Database (Denmark)
Wang, Weizhi; Wu, Minghao; Palm, Johannes
2018-01-01
The wave loads and the resulting motions of floating wave energy converters are traditionally computed using linear radiation–diffraction methods. Yet for certain cases such as survival conditions, phase control and wave energy converters operating in the resonance region, more complete...... mathematical models such as computational fluid dynamics are preferred and over the last 5 years, computational fluid dynamics has become more frequently used in the wave energy field. However, rigorous estimation of numerical errors, convergence rates and uncertainties associated with computational fluid...... dynamics simulations have largely been overlooked in the wave energy sector. In this article, we apply formal verification and validation techniques to computational fluid dynamics simulations of a passively controlled point absorber. The phase control causes the motion response to be highly nonlinear even...
Directory of Open Access Journals (Sweden)
Igor Stubelj
2014-03-01
Full Text Available The paper deals with the estimation of the weighted average cost of capital (WACC) for regulated industries in developing financial markets from the perspective of the current financial-economic crisis. In the current financial market situation some evident changes have occurred: risk-free rates in solid, developed financial markets (e.g. the USA, Germany) have fallen, but due to increased market volatility, risk premiums have increased. The latter is especially evident in transition economies, where the amplitude of market volatility is extremely high. In such circumstances, the question is how to calculate WACC properly. WACC is an important measure in financial management decisions and, in our case, business regulation. We argue in the paper that the most accurate method for calculating WACC is the estimation of the long-term WACC, which takes into consideration a long-term stable yield of capital rather than current market conditions. Following this, we propose some solutions that could be used for calculating WACC for regulated industries in developing financial markets in times of market uncertainty. As an example, we present an estimation of the cost of capital for a selected Slovenian company, which operates in the regulated industry of electricity distribution.
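The textbook WACC formula underlying such estimates, with purely illustrative figures (not the Slovenian utility's actual data); the long-term view advocated above enters through cost_of_equity, e.g. a CAPM estimate built from a long-run risk-free rate and equity premium rather than crisis-driven spot values:

```python
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    """Weighted average cost of capital, including the tax shield on debt:
    WACC = (E/V) * Re + (D/V) * Rd * (1 - Tc), with V = E + D."""
    v = equity + debt
    return (equity / v) * cost_of_equity + (debt / v) * cost_of_debt * (1.0 - tax_rate)

# Hypothetical capital structure: 60% equity at 10%, 40% debt at 6%, 20% tax.
print(round(wacc(equity=600.0, debt=400.0,
                 cost_of_equity=0.10, cost_of_debt=0.06,
                 tax_rate=0.20), 4))  # -> 0.0792
```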
Uncertainty Modeling and Evaluation of CMM Task Oriented Measurement Based on SVCMM
Li, Hongli; Chen, Xiaohuai; Cheng, Yinbao; Liu, Houde; Wang, Hanbin; Cheng, Zhenying; Wang, Hongtao
2017-10-01
Due to the variety of measurement tasks and the complexity of the errors of a coordinate measuring machine (CMM), it is very difficult to reasonably evaluate the uncertainty of CMM measurement results, which has limited the application of CMMs. Task-oriented uncertainty evaluation has thus become a difficult problem to be solved. Taking dimension measurement as an example, this paper puts forward a practical method for uncertainty modeling and evaluation of task-oriented CMM measurement (called the SVCMM method). The method makes full use of the CMM acceptance or reinspection report and the Monte Carlo computer simulation method (MCM). An evaluation example is presented, and its results are evaluated by both the traditional method given in the GUM and the proposed method. The SVCMM method is verified to be feasible and practical, and it can help CMM users conveniently complete a measurement uncertainty evaluation using a single measurement cycle.
Energy Technology Data Exchange (ETDEWEB)
Tanaka, Yohei; Momma, Akihiko; Kato, Ken; Negishi, Akira; Takano, Kiyonami; Nozaki, Ken; Kato, Tohru [Fuel Cell System Group, Energy Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), AIST Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan)
2009-03-15
Uncertainty of electrical efficiency measurement was investigated for a 10 kW-class SOFC system using town gas. The uncertainty of the heating value measured by gas chromatography on a molar basis was estimated as {+-}0.12% at the 95% level of confidence. Micro gas chromatography with or without CH{sub 4} quantification may be able to reduce the measurement uncertainty further. Calibration and uncertainty estimation methods are proposed for flow-rate measurement of town gas with thermal mass-flow meters or controllers. With adequate calibration of the flowmeters, the flow rate of town gas or natural gas at 35 standard litres per minute can be measured within a relative uncertainty of {+-}1.0% at the 95% level of confidence. The uncertainty of the power measurement can be as low as {+-}0.14% when a precise wattmeter is used and calibrated properly. It is clarified that the electrical efficiency of non-pressurized 10 kW-class SOFC systems can be measured within {+-}1.0% relative uncertainty at the 95% level of confidence with the developed techniques when the systems are operated relatively stably. (author)
Nakayachi, Kazuya; B Johnson, Branden; Koketsu, Kazuki
2017-08-29
We test here the risk communication proposition that explicit expert acknowledgment of uncertainty in risk estimates can enhance trust and other reactions. We manipulated such a scientific uncertainty message, accompanied by probabilities (20%, 70%, implicit ["will occur"] 100%) and time periods (10 or 30 years) in major (≥magnitude 8) earthquake risk estimates to test potential effects on residents potentially affected by seismic activity on the San Andreas fault in the San Francisco Bay Area (n = 750). The uncertainty acknowledgment increased belief that these specific experts were more honest and open, and led to statistically (but not substantively) significant increases in trust in seismic experts generally only for the 20% probability (vs. certainty) and shorter versus longer time period. The acknowledgment did not change judged risk, preparedness intentions, or mitigation policy support. Probability effects independent of the explicit admission of expert uncertainty were also insignificant except for judged risk, which rose or fell slightly depending upon the measure of judged risk used. Overall, both qualitative expressions of uncertainty and quantitative probabilities had limited effects on public reaction. These results imply that both theoretical arguments for positive effects, and practitioners' potential concerns for negative effects, of uncertainty expression may have been overblown. There may be good reasons to still acknowledge experts' uncertainties, but those merit separate justification and their own empirical tests. © 2017 Society for Risk Analysis.
Lee, Sooyeun; Choi, Hyeyoung; Kim, Eunmi; Choi, Hwakyung; Chung, Heesun; Chung, Kyu Hyuck
2010-05-01
The measurement uncertainty (MU) of methamphetamine (MA) and amphetamine (AP) was estimated in an authentic urine sample with relatively low concentrations of MA and AP using the bottom-up approach. A cause-and-effect diagram was deduced; the amount of MA or AP in the sample, the volume of the sample, method precision, and the sample effect were considered as uncertainty sources. The concentrations of MA and AP in the urine sample, with their expanded uncertainties, were 340.5 +/- 33.2 ng/mL and 113.4 +/- 15.4 ng/mL, respectively; that is, the expanded uncertainties were 9.7% and 13.6% of the respective concentrations. The largest uncertainty originated from the sample effect for MA and from method precision for AP, while the uncertainty from the sample volume was minimal for both. The MU needs to be determined during the method validation process to assess test reliability. Moreover, identifying the largest and/or smallest uncertainty sources can help improve experimental protocols.
Assouline, Dan; Mohajeri, Nahid; Scartezzini, Jean-Louis
2017-04-01
Solar energy is clean, widely available, and arguably the most promising renewable energy resource. Taking full advantage of solar power, however, requires a deep understanding of its patterns and dependencies in space and time. Recent advances in machine learning have brought powerful algorithms to estimate the spatio-temporal variations of solar irradiance (the power per unit area received from the Sun, W/m2) using local weather and terrain information. Such algorithms include deep learning (e.g. artificial neural networks) and kernel methods (e.g. support vector machines). However, most of these methods have some disadvantages: they (i) are complex to tune, (ii) are mainly used as black boxes offering no interpretation of the variables' contributions, and (iii) often do not provide uncertainty estimates with their predictions (Assouline et al., 2016). To provide a reasonable solar mapping with good accuracy, these gaps would ideally need to be filled. We present here simple steps using one ensemble learning algorithm, Random Forests (Breiman, 2001), to (i) estimate monthly solar potential with good accuracy, (ii) provide information on the contribution of each feature to the estimate, and (iii) offer prediction intervals for each point estimate. We have selected Switzerland as an example. Using a Digital Elevation Model (DEM) along with monthly solar irradiance time series and weather data, we build monthly solar maps for Global Horizontal Irradiance (GHI), Diffuse Horizontal Irradiance (DHI), and Extraterrestrial Irradiance (EI). The weather data include monthly values for temperature, precipitation, sunshine duration, and cloud cover. In order to explain the impact of each feature on the solar irradiance of each point estimate, we extend the contribution method (Kuz'min et al., 2011) to a regression setting. Contribution maps for all features can then be computed for each solar map. This provides precious information on the spatial variation of the features impact all
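As a sketch of how an ensemble yields both a point estimate and a prediction interval, here is a pure-Python miniature of bagging with regression stumps. This is a deliberately simplified stand-in for Random Forests (which additionally grow deep trees and subsample features); the interval is simply the spread of per-tree predictions:

```python
import random
import statistics

def fit_stump(xs, ys):
    """Best single-split regression stump on 1-D data (minimises SSE)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best = None
    for k in range(1, len(xs)):
        thr = (xs[order[k - 1]] + xs[order[k]]) / 2.0
        left = [ys[i] for i in order[:k]]
        right = [ys[i] for i in order[k:]]
        ml, mr = statistics.fmean(left), statistics.fmean(right)
        sse = sum((y - ml)**2 for y in left) + sum((y - mr)**2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    return best[1:]  # (threshold, left mean, right mean)

def bagged_predict(xs, ys, x0, n_trees=200, seed=1):
    """Bagging: fit each stump to a bootstrap resample; the empirical
    distribution of per-tree predictions gives a crude 90% interval."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]   # bootstrap sample
        thr, ml, mr = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(ml if x0 <= thr else mr)
    preds.sort()
    lo, hi = preds[int(0.05 * n_trees)], preds[int(0.95 * n_trees)]
    return statistics.fmean(preds), (lo, hi)
```

For step-like data the predictions recover each plateau, and the 5th-95th percentile band of per-tree predictions narrows where the trees agree, exactly the behaviour the abstract exploits for per-pixel prediction intervals.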
Energy Technology Data Exchange (ETDEWEB)
Garcia-Herranz, N.; Cabellos, O. [Madrid Polytechnic Univ., Dept. of Nuclear Engineering (Spain); Cabellos, O.; Sanz, J. [Madrid Polytechnic Univ., 2 Instituto de Fusion Nuclear (Spain); Sanz, J. [Univ. Nacional Educacion a Distancia, Dept. of Power Engineering, Madrid (Spain)
2005-07-01
We present a new code system that combines the Monte Carlo neutron transport code MCNP-4C and the inventory code ACAB as a suitable tool for high-burnup calculations. Our main goal is to show that the system, by means of ACAB's capabilities, enables us to assess the impact of neutron cross-section uncertainties on the inventory and other inventory-related responses in high-burnup applications. The potential impact of nuclear data uncertainties on some response parameters may be large, but very few codes exist that can treat this effect. In fact, some of the most widely reported code systems for high-burnup problems, such as CASMO-4, MCODE, and MONTEBURNS, lack this capability. As a first step, the potential of our system, setting aside the uncertainty capability, was compared with that of those code systems using a well-referenced high-burnup pin-cell benchmark exercise. It is shown that the inclusion of ACAB in the system yields results at least as reliable as those obtained using other inventory codes, such as ORIGEN2. The uncertainty analysis methodology implemented in ACAB, including both the sensitivity-uncertainty method and uncertainty analysis by the Monte Carlo technique, is then applied to this benchmark problem. We estimate the errors due to activation cross-section uncertainties in the prediction of the isotopic content up to the high-burnup spent fuel regime. The most relevant uncertainties are highlighted, and some of the cross sections contributing most to those uncertainties are identified. For instance, the most critical reaction for Am{sup 242m} is Am{sup 241}(n,{gamma}-m): at 100 MWd/kg, the cross-section uncertainty of this reaction induces an error of 6.63% in the Am{sup 242m} concentration. The uncertainties in the inventory of fission products reach up to 30%.
Directory of Open Access Journals (Sweden)
Douglas A. Fynan
2016-06-01
Full Text Available The Gaussian process model (GPM) is a flexible surrogate model that can be used for nonparametric regression in multivariate problems. A unique feature of the GPM is that a prediction variance is automatically provided along with the regression function. In this paper, we estimate the safety margin of a nuclear power plant by performing regression on the output of best-estimate simulations of a large-break loss-of-coolant accident, with sampling of safety system configuration, sequence timing, technical specifications, and thermal-hydraulic parameter uncertainties. The key aspect of our approach is that the GPM regression is performed only on the dominant input variables, the safety injection flow rate and the delay time for AC-powered pumps to start (representing sequence timing uncertainty), providing a predictive model for the peak clad temperature during the reflood phase. Other uncertainties are interpreted as contributors to the measurement noise of the code output and are implicitly treated in the GPM through the noise variance term, providing local uncertainty bounds for the peak clad temperature. We discuss the applicability of the foregoing method to reducing the use of conservative assumptions in best estimate plus uncertainty (BEPU) and Level 1 probabilistic safety assessment (PSA) success criteria definitions while dealing with a large number of uncertainties.
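The automatic prediction variance the abstract relies on follows from the standard Gaussian process regression equations. Writing K for the kernel matrix over training inputs, k_* for the kernel vector between training inputs and a query point x_*, y for the observed outputs, and sigma_n^2 for the noise variance term mentioned above:

```latex
\mu_* = \mathbf{k}_*^{\top} \left( K + \sigma_n^2 I \right)^{-1} \mathbf{y},
\qquad
\sigma_*^2 = k(x_*, x_*) - \mathbf{k}_*^{\top} \left( K + \sigma_n^2 I \right)^{-1} \mathbf{k}_* + \sigma_n^2
```

Lumping the non-dominant uncertainties into sigma_n^2 inflates sigma_*^2 everywhere, which is how the approach turns "measurement noise" from the code output into local uncertainty bounds on the peak clad temperature.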
Dumedah, Gift; Walker, Jeffrey P.
2017-03-01
The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or observation system. The impact of input forcing data on model estimation accuracy is widely acknowledged to be significant, yet its quantification, and the level of uncertainty that is acceptable in the context of a land surface model to obtain a competitive estimation, remain mostly unknown. A better understanding is needed of how models respond to input forcing data and of what changes in these forcing variables can be accommodated without deteriorating the optimal estimation of the model. This study therefore determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south-eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for soil moisture at 5 cm depth and 15% in RMSE at 15 cm depth. This quantification is crucial to illustrate the significance of the spread of the input forcing data. The acceptable uncertainty determined from the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper
Uncertainty and Cognitive Control
Directory of Open Access Journals (Sweden)
Faisal eMushtaq
2011-10-01
Full Text Available A growing body of neuroimaging, behavioural, and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in this adaptation is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. The available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and in evaluating the need for control; and (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.
Directory of Open Access Journals (Sweden)
Yunpeng Song
2015-03-01
Full Text Available Measurement of force on a micro- or nano-Newton scale is important when exploring the mechanical properties of materials in the biophysics and nanomechanical fields. The atomic force microscope (AFM is widely used in microforce measurement. The cantilever probe works as an AFM force sensor, and the spring constant of the cantilever is of great significance to the accuracy of the measurement results. This paper presents a normal spring constant calibration method with the combined use of an electromagnetic balance and a homemade AFM head. When the cantilever presses the balance, its deflection is detected through an optical lever integrated in the AFM head. Meanwhile, the corresponding bending force is recorded by the balance. Then the spring constant can be simply calculated using Hooke’s law. During the calibration, a feedback loop is applied to control the deflection of the cantilever. Errors that may affect the stability of the cantilever could be compensated rapidly. Five types of commercial cantilevers with different shapes, stiffness, and operating modes were chosen to evaluate the performance of our system. Based on the uncertainty analysis, the expanded relative standard uncertainties of the normal spring constant of most measured cantilevers are believed to be better than 2%.
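The spring-constant calculation at the core of the calibration above is just Hooke's law applied point by point. A sketch with hypothetical balance and optical-lever readings (not the paper's data; units chosen so that nN/nm equals N/m):

```python
import statistics

def spring_constant(forces_nN, deflections_nm):
    """Hooke's law k = F/d at each loading point. Returns the mean stiffness
    in N/m (nN/nm = N/m) and the relative Type A standard uncertainty of
    the mean, in %."""
    ks = [f / d for f, d in zip(forces_nN, deflections_nm)]
    k_mean = statistics.fmean(ks)
    # standard deviation of the mean, expressed relative to k_mean:
    u_rel = 100.0 * statistics.stdev(ks) / (len(ks) ** 0.5) / k_mean
    return k_mean, u_rel

# Hypothetical balance forces (nN) and cantilever deflections (nm):
k, u = spring_constant([10.2, 20.1, 29.8, 40.3],
                       [101.0, 199.0, 297.0, 402.0])
print(f"k = {k:.4f} N/m, Type A relative uncertainty = {u:.2f}%")
```

A full budget as in the paper would combine this Type A term with Type B contributions (balance calibration, optical-lever sensitivity) before expanding to the ~2% figure quoted.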
D'Agostino, G.; Mana, G.; Oddone, M.; Prata, M.; Bergamaschi, L.; Giordani, L.
2014-06-01
We investigated the use of neutron activation to estimate the 30Si mole fraction of the ultra-pure silicon material highly enriched in 28Si for the measurement of the Avogadro constant. Specifically, we developed a relative method based on instrumental neutron activation analysis and using a natural-Si sample as a standard. To evaluate the achievable uncertainty, we irradiated a 6 g sample of a natural-Si material and modelled experimentally the signal that would be produced by a sample of the 28Si-enriched material of similar mass and subjected to the same measurement conditions. The extrapolation of the expected uncertainty from the experimental data indicates that a measurement of the 30Si mole fraction of the 28Si-enriched material might reach a 4% relative combined standard uncertainty.
Xiang, Yu; Zhonghua, Su; Jinhua, Leng; Teng, Yun
2017-08-01
A high-temperature tensile experiment on modified random copolymerized polypropylene was carried out according to ASTM D 638-2014. The factors influencing the accuracy of the high-temperature mechanical properties were analysed, and the sources of measurement uncertainty were discussed, including the measurement of sample dimensions, the indication error of the force value of the testing machine, its calibration, data acquisition by the experimental software, temperature control, numerical correction, and material nonuniformity. According to JJF 1059.1-2012, Type A and Type B evaluations were conducted on the above uncertainty components, and all the components were combined. By analysing the uncertainty of the measurement results, this paper provides a reference for evaluating the uncertainty of measurement results of the same type.
Álvarez-Valado, V.; González Jorge, H.; Dorrío, B. V.; Yebra, F. J.; Valencia, J. L.; Rodríguez, J.
2008-11-01
Gauge block calibrations for dimensional metrology standards are made by means of laser interferometry, and the uncertainty is typically calculated following the recommendations of the Guide to the Expression of Uncertainty in Measurement (GUM). That method is not well suited to complicated, non-linear model equations such as those that occur in gauge block interferometry. In this context, a Monte Carlo method is applied to evaluate the uncertainty of the model function. Input distributions are generated taking into account the value of each physical quantity, its type of probability distribution, and its standard uncertainty. These values are particularized for the room conditions and instruments used in the Metrology Laboratory of Galicia (LOMG), Spain. The results show that the output data fit a normal distribution well (so the central limit theorem hypothesis can be assumed) and demonstrate the potential of applying Monte Carlo methods to uncertainty evaluation in gauge block interferometry.
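A minimal sketch of the Monte Carlo propagation (JCGM 101 style): draw each input from its assigned distribution, push the draws through the measurement model, and summarise the output distribution. The gauge-block-style model and all numerical values below are illustrative placeholders, not LOMG's actual model equation or data:

```python
import random
import statistics

def monte_carlo_uncertainty(model, samplers, n=50_000, seed=0):
    """Propagate input distributions through `model`; return the output
    mean, standard uncertainty, and a 95% coverage interval."""
    rng = random.Random(seed)
    out = [model(**{name: draw(rng) for name, draw in samplers.items()})
           for _ in range(n)]
    out.sort()
    return (statistics.fmean(out), statistics.stdev(out),
            (out[int(0.025 * n)], out[int(0.975 * n)]))

# Toy thermal-expansion correction: measured length L = L0 * (1 + alpha * dT)
mean, u, ci95 = monte_carlo_uncertainty(
    model=lambda L0, alpha, dT: L0 * (1.0 + alpha * dT),
    samplers={
        "L0": lambda r: r.gauss(100.0, 5e-5),            # mm, normal
        "alpha": lambda r: r.uniform(10.5e-6, 12.5e-6),  # 1/K, rectangular
        "dT": lambda r: r.gauss(0.0, 0.05),              # K, normal
    },
)
print(f"L = {mean:.5f} mm, u = {u:.6f} mm, 95% CI = {ci95}")
```

Unlike the first-order GUM propagation, nothing here assumes linearity of the model, and the coverage interval is read directly from the empirical output distribution, which is why the approach suits non-linear interferometry equations.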
An Evaluation of Depletion Bias and Bias Uncertainty of the GBC cask with PLUS7 Fuels
Energy Technology Data Exchange (ETDEWEB)
Yun, Hyungju; Park, Kwangheon; Hong, Ser Gi [Kyung Hee Univ., Yongin (Korea, Republic of)
2016-10-15
A nuclear criticality safety evaluation that applies burnup credit (BUC) to a DSC is performed mainly through a two-step process: (1) determination of the isotopic compositions of the used nuclear fuels (UNFs) to be loaded into a DSC by a depletion analysis, and (2) determination of the k{sub eff} value of the DSC by a criticality analysis. In particular, the isotopic compositions from the depletion analysis should be estimated accurately, because the concentrations of the nuclides contained in a UNF have a significant influence on the accuracy of the depletion analysis and its subsequent criticality analysis. However, since no depletion computer code can exactly calculate the nuclide compositions contained in a used nuclear fuel assembly (UNFA), a bias and bias uncertainty, in terms of a reactivity difference Δk{sub eff}, are required for the depletion code in burnup credit criticality safety analyses. In this work, the bias and bias uncertainty in k{sub eff} resulting from biases and bias uncertainties in the calculated nuclide concentrations were determined for the GBC-32 DSC system with 32 PLUS7 16X16 UNFAs. First, new one-group cross-section libraries for the ORIGEN code were generated for the PLUS7 16X16 UNFA using the SCALE 6.1/TRITON code. Second, the initial enrichment values for which the k{sub eff}-REF value of the DSC system would be 0.94 were determined as a function of specific burnup using the SCALE 6.1/STARBUCS code.
New Measurement Method and Uncertainty Estimation for Plate Dimensions and Surface Quality
Directory of Open Access Journals (Sweden)
Salah H. R. Ali
2013-01-01
Full Text Available Dimensional and surface quality control for plate production faces difficult engineering challenges, one of which is that plates in large-scale mass production contain geometrically uneven surfaces. The traditional method for assessing tile plate dimensions and surface quality is based on the standard specifications ISO 10545-2:1995, EOS 3168-2:2007, and TIS 2398-2:2008. A new measurement method for the dimensions and surface quality of large-scale oblong ceramic tile plates has been developed and compared with the traditional method. The new method is based on a CMM straightness measurement strategy instead of the centre-point strategy of the traditional method. Expanded uncertainty budgets for the measurements of each method have been estimated in detail. Accurate estimates of the centre of curvature (CC), centre of edge (CE), warpage (W), and edge crack defect parameters were achieved in accordance with the standards. Moreover, the results not only showed that the new method is more accurate but also that it significantly improves the quality of plate products.
Souverijns, N.; Gossart, A.; Lhermitte, S.; Gorodetskaya, I. V.; Kneifel, S.; Maahn, M.; Bliven, F. L.; van Lipzig, N. P. M.
2017-11-01
Snowfall rate (SR) estimates over Antarctica are sparse and characterised by large uncertainties, yet observations by precipitation radar offer the potential to gain better insight into Antarctic SR. Relations between radar reflectivity (Ze) and snowfall rate (Ze-SR relations) are, however, not available over Antarctica. Here, we analyse observations from the first Micro Rain Radar (MRR) in Antarctica together with an optical disdrometer (Precipitation Imaging Package; PIP), deployed at the Princess Elisabeth station. The relation Ze = A·SR^B was derived using PIP observations, and its uncertainty was quantified using a bootstrapping approach, randomly sampling within the range of uncertainty. This uncertainty was used to assess the uncertainty in snowfall rates derived by the MRR. We find a value of A = 18 [11-43] and B = 1.10 [0.97-1.17]. The uncertainty on snowfall rates of the MRR based on the Ze-SR relation is limited to 40%, owing to the propagation of uncertainty in both Ze and SR, which results in some compensation. The prefactor (A) of the Ze-SR relation is sensitive to the median diameter of the snow particles: larger particles, typically found closer to the coast, lead to a higher prefactor (A = 44), while smaller particles, typical of more inland locations, give lower values (A = 7). The exponent (B) of the Ze-SR relation is insensitive to the median diameter of the snow particles. In contrast with previous studies for various locations, shape uncertainty is not the main source of uncertainty of the Ze-SR relation. Parameter uncertainty is found to be the dominant term, driven mainly by the uncertainty in the mass-size relation of different snow particles. Uncertainties in the snow particle size distribution are negligible in this study as it is directly measured. Future research aiming at reducing the uncertainty of Ze-SR relations should therefore focus on obtaining reliable estimates of the mass-size relations of
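A minimal sketch of the bootstrapping idea for a Ze = A·SR^B fit. The data here are synthetic, generated around the reported values; the scatter model and sample size are assumptions, not the PIP observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic disdrometer-style data consistent with Ze = A * SR**B
# (A = 18, B = 1.10 as in the abstract); the lognormal scatter is illustrative.
SR = rng.uniform(0.05, 2.0, 300)                      # mm / h
Ze = 18.0 * SR**1.10 * rng.lognormal(0.0, 0.3, 300)   # mm^6 / m^3

def fit_power_law(sr, ze):
    # Linear least squares in log-log space: log Ze = log A + B log SR
    B, logA = np.polyfit(np.log(sr), np.log(ze), 1)
    return np.exp(logA), B

A_hat, B_hat = fit_power_law(SR, Ze)

# Bootstrap: resample (SR, Ze) pairs with replacement to quantify
# the uncertainty on the fitted parameters.
boot = []
for _ in range(1000):
    idx = rng.integers(0, 300, 300)
    boot.append(fit_power_law(SR[idx], Ze[idx]))
boot = np.array(boot)

A_lo, A_hi = np.percentile(boot[:, 0], [2.5, 97.5])
B_lo, B_hi = np.percentile(boot[:, 1], [2.5, 97.5])
print(f"A = {A_hat:.1f} [{A_lo:.1f}-{A_hi:.1f}]")
print(f"B = {B_hat:.2f} [{B_lo:.2f}-{B_hi:.2f}]")
```

The percentile intervals play the role of the bracketed ranges quoted in the abstract.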
Evaluation of measurement uncertainty and its numerical calculation by a Monte Carlo method
Wübbeler, Gerd; Krystek, Michael; Elster, Clemens
2008-08-01
The Guide to the Expression of Uncertainty in Measurement (GUM) is the de facto standard for the evaluation of measurement uncertainty in metrology. Recently, evaluation of measurement uncertainty has been proposed on the basis of probability density functions (PDFs) using a Monte Carlo method. The relation between this PDF approach and the standard method described in the GUM is outlined. The Monte Carlo method required for the numerical calculation of the PDF approach is described and illustrated by its application to two examples. The results obtained by the Monte Carlo method for the two examples are compared to the corresponding results when applying the GUM.
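The relation between the two approaches can be illustrated on a toy nonlinear model. The model y = x1·x2/x3 and all numbers are assumptions, not one of the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonlinear measurement model y = x1 * x2 / x3 (illustrative only)
mu = np.array([10.0, 2.0, 4.0])   # best estimates of the inputs
u  = np.array([0.5, 0.2, 0.1])    # standard uncertainties

# GUM first-order propagation: u_y^2 = sum_i (dy/dx_i)^2 u_i^2
y0 = mu[0] * mu[1] / mu[2]
c = np.array([mu[1] / mu[2],                  # dy/dx1
              mu[0] / mu[2],                  # dy/dx2
              -mu[0] * mu[1] / mu[2]**2])     # dy/dx3
u_gum = np.sqrt(np.sum((c * u)**2))

# Monte Carlo approach (GUM Supplement 1 style): sample the input PDFs,
# evaluate the model, and summarize the output sample directly.
x = rng.normal(mu, u, size=(500_000, 3))
y = x[:, 0] * x[:, 1] / x[:, 2]

print(f"GUM: y = {y0:.3f} +/- {u_gum:.3f}")
print(f"MC : y = {y.mean():.3f} +/- {y.std(ddof=1):.3f}")
```

For mildly nonlinear models the two results agree closely; the Monte Carlo output additionally yields the full PDF of y, not just its first two moments.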
Muelaner, J. E.; Wang, Z.; Keogh, P. S.; Brownell, J.; Fisher, D.
2016-11-01
Understanding the uncertainty of dimensional measurements for large products such as aircraft, spacecraft and wind turbines is fundamental to improving the efficiency of these products. Much work has been done to ascertain the uncertainty associated with the main types of instruments used, based on laser tracking and photogrammetry, and the propagation of this uncertainty through networked measurements. Unfortunately this is not sufficient to understand the combined uncertainty of industrial measurements, which include secondary tooling and datum structures used to locate the coordinate frame. This paper presents for the first time a complete evaluation of the uncertainty of large-scale industrial measurement processes. Generic analysis and design rules are proven through uncertainty evaluation and optimization for the measurement of a large aero gas turbine engine, showing how the instrument uncertainty can be considered negligible. Before optimization the dominant source of uncertainty was the tooling design; after optimization it was thermal expansion of the engine, meaning that no further improvement can be made without measurement in a temperature-controlled environment. These results will have a significant impact on the ability of aircraft and wind turbines to improve efficiency and therefore reduce carbon emissions, as well as on the reliability of these products.
Communicating uncertainty in economic evaluations: verifying optimal strategies.
Koffijberg, H; de Wit, G A; Feenstra, T L
2012-01-01
In cost-effectiveness analysis (CEA), it is common to compare a single, new intervention with 1 or more existing interventions representing current practice ignoring other, unrelated interventions. Sectoral CEAs, in contrast, take a perspective in which the costs and effectiveness of all possible interventions within a certain disease area or health care sector are compared to maximize health in a society given resource constraints. Stochastic league tables (SLT) have been developed to represent uncertainty in sectoral CEAs but have 2 shortcomings: 1) the probabilities reflect inclusion of individual interventions and not strategies and 2) data on robustness are lacking. The authors developed an extension of SLT that addresses these shortcomings. Analogous to nonprobabilistic MAXIMIN decision rules, the uncertainty of the performance of strategies in sectoral CEAs may be judged with respect to worst possible outcomes, in terms of health effects obtainable within a given budget. Therefore, the authors assessed robustness of strategies likely to be optimal by performing optimization separately on all samples and on samples yielding worse than expected health benefits. The approach was tested on 2 examples, 1 with independent and 1 with correlated cost and effect data. The method was applicable to the original SLT example and to a new example and provided clear and easily interpretable results. Identification of interventions with robust performance as well as the best performing strategies was straightforward. Furthermore, the robustness of strategies was assessed with a MAXIMIN decision rule. The SLT extension improves the comprehensibility and extends the usefulness of outcomes of SLT for decision makers. Its use is recommended whenever an SLT approach is considered.
Smith, David R.; McGowan, Conor P.; Daily, Jonathan P.; Nichols, James D.; Sweka, John A.; Lyons, James E.
2013-01-01
Application of adaptive management to complex natural resource systems requires careful evaluation to ensure that the process leads to improved decision-making. As part of that evaluation, adaptive policies can be compared with alternative nonadaptive management scenarios. Also, the value of reducing structural (ecological) uncertainty to achieving management objectives can be quantified. A multispecies adaptive management framework was recently adopted by the Atlantic States Marine Fisheries Commission for sustainable harvest of Delaware Bay horseshoe crabs Limulus polyphemus, while maintaining adequate stopover habitat for migrating red knots Calidris canutus rufa, the focal shorebird species. The predictive model set encompassed the structural uncertainty in the relationships between horseshoe crab spawning, red knot weight gain and red knot vital rates. Stochastic dynamic programming was used to generate a state-dependent strategy for harvest decisions given that uncertainty. In this paper, we employed a management strategy evaluation approach to evaluate the performance of this adaptive management framework. Active adaptive management was used by including model weights as state variables in the optimization and reducing structural uncertainty by model weight updating. We found that the value of information for reducing structural uncertainty is expected to be low, because the uncertainty does not appear to impede effective management. Harvest policy responded to abundance levels of both species regardless of uncertainty in the specific relationship that generated those abundances. Thus, the expected horseshoe crab harvest and red knot abundance were similar when the population generating model was uncertain or known, and harvest policy was robust to structural uncertainty as specified. Synthesis and applications. The combination of management strategy evaluation with state-dependent strategies from stochastic dynamic programming was an informative approach to
Souverijns, Niels; Gossart, Alexandra; Lhermitte, Stef; Gorodetskaya, Irina; Kneifel, Stefan; Maahn, Maximilian; Bliven, Francis; van Lipzig, Nicole
2017-04-01
The Antarctic Ice Sheet (AIS) is the largest ice body on earth, having a volume equivalent to 58.3 m of global mean sea level rise. Precipitation is the dominant source term in the surface mass balance of the AIS. However, this quantity is not well constrained in either models or observations. Direct observations over the AIS are also not coherent, as they are sparse in space and time and acquisition techniques differ. As a result, precipitation observations stay mostly limited to continent-wide averages based on satellite radar observations. Snowfall rate (SR) at high temporal resolution can be derived from the ground-based radar effective reflectivity factor (Z) using information about snow particle size and shape. Here we present reflectivity snowfall rate relations (Z = a·SR^b) for the East Antarctic escarpment region using the measurements at the Princess Elisabeth (PE) station and an overview of their uncertainties. A novel technique is developed by combining an optical disdrometer (NASA's Precipitation Imaging Package; PIP) and a vertically pointing 24 GHz FMCW micro rain radar (Metek's MRR) in order to reduce the uncertainty in SR estimates. PIP is used to obtain information about snow particle characteristics and to get an estimate of Z, SR and the Z-SR relation. For PE, located 173 km inland, the relation equals Z = 18·SR^1.1. The prefactor (a) of the relation is sensitive to the median diameter of the particles. Larger particles, found closer to the coast, lead to an increase of the value of the prefactor. More inland locations, where smaller snow particles are found, obtain lower values for the prefactor. The exponent of the Z-SR relation (b) is insensitive to the median diameter of the snow particles. This dependence of the prefactor of the Z-SR relation on the particle size needs to be taken into account when converting radar reflectivities to snowfall rates over Antarctica. The uncertainty on the Z-SR relations is quantified using a bootstrapping approach
Langbein, John O.
2012-01-01
Recent studies have documented that global positioning system (GPS) time series of position estimates have temporal correlations, which have been modeled as a combination of power-law and white noise processes. When estimating quantities such as a constant rate from GPS time series data, the estimated uncertainties on these quantities are more realistic when using a noise model that includes temporal correlations than when simply assuming temporally uncorrelated noise. However, the choice of the specific representation of correlated noise can affect the estimate of uncertainty. For many GPS time series, the background noise can be represented either (1) as a sum of flicker and random-walk noise or (2) as a power-law noise model that represents an average of the flicker and random-walk noise. For instance, if the underlying noise model is a combination of flicker and random-walk noise, then incorrectly choosing the power-law model could underestimate the rate uncertainty by a factor of two. Distinguishing between the two alternative noise models is difficult since the flicker component can dominate the assessment of the noise properties, as it is spread over a significant portion of the measurable frequency band. But, although not necessarily detectable, the random-walk component can be a major constituent of the estimated rate uncertainty. Nonetheless, it is possible to determine the upper bound on the random-walk noise.
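The effect of the noise model on rate uncertainty can be demonstrated by simulation. The noise amplitudes below are arbitrary, and flicker noise is omitted for brevity; only white versus random-walk noise are contrasted:

```python
import numpy as np

rng = np.random.default_rng(7)
n_days, n_sim = 3650, 500            # ten years of daily positions
t = np.arange(n_days) / 365.25       # time in years

def rate_scatter(noise):
    # Fit a linear trend to each simulated series; the empirical scatter
    # of the estimated rates is a realistic rate uncertainty for that
    # noise model.
    rates = [np.polyfit(t, series, 1)[0] for series in noise]
    return np.std(rates)

white = rng.normal(0.0, 1.0, (n_sim, n_days))                       # mm, uncorrelated
rw = np.cumsum(rng.normal(0.0, 0.05, (n_sim, n_days)), axis=1)      # mm, random walk

print(f"white-noise rate scatter : {rate_scatter(white):.4f} mm/yr")
print(f"random-walk rate scatter : {rate_scatter(rw):.4f} mm/yr")
```

The random-walk series yield far larger rate scatter, which is why absorbing an undetected random-walk component into an averaged power-law model can substantially underestimate the rate uncertainty.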
Anderson, Leif; Box, Neil; Carter, Katrina; DiFilippo, Denise; Harrington, Sean; Jackson, David; Lutomski, Michael
2012-01-01
There are two general shortcomings to the current annual sparing assessment: (1) the vehicle functions are assessed against confidence targets, which can be misleading (overly conservative or optimistic), and (2) the current confidence levels are arbitrarily determined and do not account for epistemic uncertainty (lack of knowledge) in the ORU failure rate. Two major categories of uncertainty impact the sparing assessment: (a) aleatory uncertainty, the natural variability in the distribution of actual failures around a Mean Time Between Failure (MTBF), and (b) epistemic uncertainty, the lack of knowledge about the true value of an Orbital Replacement Unit's (ORU) MTBF. We propose an approach that revises the confidence targets and accounts for both categories of uncertainty, which we call Probability and Confidence Trade-space (PACT) evaluation.
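One way the two uncertainty layers can be combined is sketched below. The failure counts, mission length, spares level, and the gamma posterior over the failure rate are all hypothetical, not ISS data or the PACT method itself:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ORU: 4 failures observed over 80,000 unit-hours.
failures, exposure = 4, 80_000.0
mission_hours, spares = 20_000.0, 2

# Epistemic layer: uncertainty about the true failure rate lambda = 1/MTBF,
# expressed as a gamma posterior (Jeffreys-style prior, an assumption).
lam = rng.gamma(failures + 0.5, 1.0 / exposure, 100_000)

# Aleatory layer: given each candidate rate, the number of failures over
# the mission is Poisson-distributed.
n_fail = rng.poisson(lam * mission_hours)

# Probability that the spares on hand cover the failures, marginalized
# over both layers of uncertainty.
p_enough = np.mean(n_fail <= spares)
print(f"P(spares sufficient) = {p_enough:.3f}")
```

Separating the layers this way makes explicit how much of the shortfall risk comes from not knowing the MTBF versus the inherent randomness of failures.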
Meszaros, Lorinc; El Serafy, Ghada
2017-04-01
Phytoplankton blooms in coastal ecosystems such as the Wadden Sea may cause mortality of mussels and other benthic organisms. Furthermore, algal primary production is the base of the food web and therefore greatly influences fisheries and aquaculture. Consequently, accurate phytoplankton concentration prediction offers ecosystem and economic benefits. Numerical ecosystem models are powerful tools for computing water quality variables, including phytoplankton concentration. Nevertheless, their accuracy ultimately depends on the uncertainty stemming from the external forcings, which propagates through, and is compounded by, the non-linear ecological processes incorporated in the ecological model. The Wadden Sea is a shallow, dynamically varying ecosystem with high turbidity, and therefore the uncertainty in the Suspended Particulate Matter (SPM) concentration field greatly influences the prediction of water quality variables. Considering the high level of uncertainty in the modelling process, an uncertainty estimate should be provided together with a single-valued deterministic model output. Through the use of an ensemble prediction system in the Dutch coastal waters, the uncertainty in the modelled chlorophyll-a concentration has been estimated. The input ensemble is generated from perturbed model process parameters and external forcings through Latin hypercube sampling with dependence (LHSD). The simulation is carried out using the Delft3D Generic Ecological Model (GEM) with the advanced algal speciation module BLOOM, which is sufficiently well validated for primary production simulation in the southern North Sea. The output ensemble is post-processed to obtain the uncertainty estimate, and the results are validated against in-situ measurements and Remote Sensing (RS) data. The spatial uncertainty of chlorophyll-a concentration was derived using the produced ensemble spread maps. *This work has received funding from the European Union's Horizon
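A plain Latin hypercube sampler, without the dependence structure of LHSD and with hypothetical parameter names and ranges, can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(5)

def latin_hypercube(n_samples, n_dims, rng):
    # Stratify [0, 1) into n_samples bins per dimension: row i gets one
    # draw inside bin i, then each column is shuffled independently so
    # the strata are paired at random across dimensions.
    u = (rng.random((n_samples, n_dims))
         + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        rng.shuffle(u[:, d])
    return u

# Hypothetical perturbed quantities: an SPM forcing scale factor, an algal
# growth-rate factor, and a mortality rate (names and ranges are assumptions).
n = 32
u = latin_hypercube(n, 3, rng)
spm_scale = 0.5 + u[:, 0]             # uniform on [0.5, 1.5]
growth    = 0.8 + 0.4 * u[:, 1]       # uniform on [0.8, 1.2]
mortality = 0.05 + 0.05 * u[:, 2]     # uniform on [0.05, 0.10]

# Each row parameterizes one member of the input ensemble.
print(np.column_stack([spm_scale, growth, mortality])[:3])
```

Compared with plain random sampling, the stratification guarantees that even a small ensemble spans each parameter's full range.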
J. Florian Wellmann
2013-01-01
The quantification and analysis of uncertainties is important in all cases where maps and models of uncertain properties are the basis for further decisions. Once these uncertainties are identified, the logical next step is to determine how they can be reduced. Information theory provides a framework for the analysis of spatial uncertainties when different subregions are considered as random variables. In the work presented here, joint entropy, conditional entropy, and mutual information are ...
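These information measures are straightforward to compute once the subregions' uncertain properties are discretized; the two-subregion joint distribution below is illustrative:

```python
import numpy as np

def entropy(p):
    # Shannon entropy in bits; zero-probability cells contribute nothing.
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distribution of the (discretized) uncertain property in two
# subregions X and Y, e.g. two lithology classes; values are illustrative.
joint = np.array([[0.35, 0.05],
                  [0.10, 0.50]])

px, py = joint.sum(axis=1), joint.sum(axis=0)   # marginals
H_x, H_y = entropy(px), entropy(py)
H_xy = entropy(joint.ravel())                   # joint entropy
I_xy = H_x + H_y - H_xy                         # mutual information
H_x_given_y = H_xy - H_y                        # conditional entropy

print(f"H(X) = {H_x:.3f}  H(Y) = {H_y:.3f} bits")
print(f"H(X,Y) = {H_xy:.3f}  I(X;Y) = {I_xy:.3f}  H(X|Y) = {H_x_given_y:.3f}")
```

A large mutual information indicates that observing one subregion would substantially reduce the uncertainty about the other, which is exactly the "where to reduce uncertainty next" question the abstract raises.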
Tsai, Frank T.-C.; Elshall, Ahmed S.
2013-09-01
Analysts are often faced with competing propositions for each uncertain model component. How can we judge whether we have selected the correct proposition(s) for an uncertain model component out of numerous possibilities? We introduce the hierarchical Bayesian model averaging (HBMA) method as a multimodel framework for uncertainty analysis. The HBMA allows for segregating, prioritizing, and evaluating different sources of uncertainty and their corresponding competing propositions through a hierarchy of BMA models that forms a BMA tree. We apply the HBMA to conduct uncertainty analysis on the reconstructed hydrostratigraphic architectures of the Baton Rouge aquifer-fault system, Louisiana. Due to uncertainty in model data, structure, and parameters, multiple possible hydrostratigraphic models are produced and calibrated as base models. The study considers four sources of uncertainty: with respect to data uncertainty, two calibration data sets; with respect to model structure, three different variogram models, two geological stationarity assumptions, and two fault conceptualizations. The base models are produced following a combinatorial design to allow for uncertainty segregation. Thus, these four uncertain model components with their corresponding competing model propositions result in 24 base models. The results show that the systematic dissection of the uncertain model components along with their corresponding competing propositions allows for detecting the robust model propositions and the major sources of uncertainty.
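A single-level BMA step, the building block of an HBMA tree, can be sketched with hypothetical model scores. BIC-based weights are one common approximation; the paper's actual weighting scheme may differ:

```python
import numpy as np

# Hypothetical calibrated base models: each has a BIC score (approximating
# its marginal likelihood), a prediction (e.g. hydraulic head, m), and a
# within-model predictive variance.  All numbers are illustrative.
bic  = np.array([210.4, 212.1, 218.0, 211.0])
pred = np.array([12.3, 12.9, 14.1, 12.5])
var  = np.array([0.20, 0.25, 0.30, 0.22])

# BMA weights: w_k proportional to exp(-BIC_k / 2) under uniform model
# priors; subtracting the minimum BIC avoids numerical underflow.
w = np.exp(-(bic - bic.min()) / 2.0)
w /= w.sum()

mean = np.sum(w * pred)
# Total predictive variance = within-model part + between-model part;
# the between-model term is the contribution of structural uncertainty.
total_var = np.sum(w * var) + np.sum(w * (pred - mean) ** 2)
print(f"weights = {np.round(w, 3)}")
print(f"BMA mean = {mean:.2f}, total variance = {total_var:.3f}")
```

In HBMA this averaging is applied recursively: models are first averaged within each proposition of one uncertain component, then across components up the tree, which is what lets the variance be attributed to individual sources.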
Evaluation of Spatial Uncertainties In Modeling of Cadastral Systems
Fathi, Morteza; Teymurian, Farideh
2013-04-01
Cadastre plays an essential role in sustainable development, especially in developing countries like Iran. A well-developed cadastre results in transparency of the estate tax system and of estate data, a reduction of actions before the courts, and effective management of estates, natural resources, and the environment. A multipurpose cadastre, through the gathering of other related data, plays a vital role in civil, economic, and social programs and projects. Iran has been carrying out its cadastre program for many years, but its success depends on correct geometric and descriptive data for estates. Since the various data sources in Iran differ in accuracy and precision, difficulties and uncertainties arise in modeling the geometric part of the cadastre, such as inconsistencies between the data in deeds and the cadastral map; these cause trouble in executing the cadastre and result in the loss of national and natural resources and of citizens' rights. At present there is no uniform and effective technical method for resolving such conflicts. This article describes various aspects of such conflicts in the geometric part of the cadastre and suggests a solution based on GIS modeling tools.
Pairing in neutron matter: New uncertainty estimates and three-body forces
Drischler, C.; Krüger, T.; Hebeler, K.; Schwenk, A.
2017-02-01
We present solutions of the BCS gap equation in the ^1S_0 and ^3P_2-^3F_2 channels in neutron matter based on nuclear interactions derived within chiral effective field theory (EFT). Our studies are based on a representative set of nonlocal nucleon-nucleon (NN) plus three-nucleon (3N) interactions up to next-to-next-to-next-to-leading order (N3LO) as well as local and semilocal chiral NN interactions up to N2LO and N4LO, respectively. In particular, we investigate for the first time the impact of subleading 3N forces at N3LO on pairing gaps and also derive uncertainty estimates by taking into account results for pairing gaps at different orders in the chiral expansion. Finally, we discuss different methods for obtaining self-consistent solutions of the gap equation. Besides the widely used quasilinear method by Khodel et al., we demonstrate that the modified Broyden method is well applicable and exhibits robust convergence behavior. In contrast to Khodel's method it is based on a direct iteration of the gap equation without imposing an auxiliary potential and is straightforward to implement.
Adaptive multiscale MCMC algorithm for uncertainty quantification in seismic parameter estimation
Tan, Xiaosi
2014-08-05
Formulating an inverse problem in a Bayesian framework has several major advantages (Sen and Stoffa, 1996). It allows finding multiple solutions subject to flexible a priori information and performing uncertainty quantification in the inverse problem. In this paper, we consider Bayesian inversion for parameter estimation in seismic wave propagation. Bayes' theorem allows writing the posterior distribution via the likelihood function and the prior distribution, where the latter represents our prior knowledge about physical properties. One of the popular algorithms for sampling this posterior distribution is Markov chain Monte Carlo (MCMC), which involves making proposals and calculating their acceptance probabilities. However, for large-scale problems, MCMC is prohibitively expensive as it requires many forward runs. In this paper, we propose a multilevel MCMC algorithm that employs multilevel forward simulations. Multilevel forward simulations are derived using Generalized Multiscale Finite Element Methods that we have proposed earlier (Efendiev et al., 2013a; Chung et al., 2013). Our overall Bayesian inversion approach provides a substantial speed-up both in the sampling process, via preconditioning using approximate posteriors, and in the computation of the forward problems for different proposals, by using the adaptive nature of multiscale methods. These aspects of the method are discussed in the paper. This paper is motivated by earlier work of M. Sen and his collaborators (Hong and Sen, 2007; Hong, 2008), who proposed the development of efficient MCMC techniques for seismic applications. In the paper, we present some preliminary numerical results.
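A minimal random-walk Metropolis sampler on a toy one-parameter problem illustrates the proposal/acceptance loop. The linear "forward model" is a stand-in for an expensive wave simulation; in the multilevel scheme this is exactly the call that gets replaced by cheap multiscale approximations for screening:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy setup: one observed travel time with Gaussian noise, one unknown
# slowness-like parameter m, and a linear stand-in forward model.
obs, sigma = 2.0, 0.1

def log_post(m):
    pred = m * 4.0                                 # stand-in forward run
    return -0.5 * ((pred - obs) / sigma) ** 2      # flat prior assumed

m, lp = 0.3, log_post(0.3)
chain = []
for _ in range(20_000):
    m_new = m + rng.normal(0.0, 0.05)              # random-walk proposal
    lp_new = log_post(m_new)
    if np.log(rng.random()) < lp_new - lp:         # Metropolis acceptance
        m, lp = m_new, lp_new
    chain.append(m)                                # rejected -> repeat state

samples = np.array(chain[5_000:])                  # discard burn-in
print(f"posterior mean = {samples.mean():.3f} +/- {samples.std():.3f}")
```

Here the exact posterior is N(0.5, 0.025), so the chain can be checked against it; in a realistic seismic inversion each `log_post` call is the dominant cost, which motivates the multilevel preconditioning the abstract describes.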
Abichou, Tarek; Clark, Jeremy; Tan, Sze; Chanton, Jeffery; Hater, Gary; Green, Roger; Goldsmith, Doug; Barlaz, Morton A; Swan, Nathan
2010-04-01
Landfills represent a distributed emission source over an irregular and heterogeneous surface. In the method termed "Other Test Method-10" (OTM-10), the U.S. Environmental Protection Agency (EPA) has proposed quantifying emissions from such sources by the use of vertical radial plume mapping (VRPM) techniques combined with measurement of wind speed to determine the average emission flux per unit area per unit time from nonpoint sources. In this application, the VRPM is used as a tool to estimate the mass of the gas of interest crossing a vertical plane. This estimation is done by fitting the field-measured spatial concentration data to a Gaussian or some other distribution to define a plume crossing the vertical plane. When this technique is applied to landfill surfaces, the VRPM plane may lie within the emitting source area itself. The objective of this study was to investigate uncertainties associated with using OTM-10 for landfills. The spatial variability of emissions in the emitting domain can lead to uncertainties of -34 to 190% in the measured flux value when idealistic scenarios were simulated. The level of uncertainty may be higher when the number and locations of emitting sources are not known (typical field conditions). The level of uncertainty can be reduced by improving the layout of the VRPM plane in the field in accordance with an initial survey of the emission patterns. A change in wind direction during an OTM-10 testing setup can introduce an uncertainty of 20% of the measured flux value. This study also provides estimates of the area contributing to flux (ACF) to be used in conjunction with OTM-10 procedures. The estimate of ACF is a function of the atmospheric stability class and has an uncertainty of 10-30%.
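The flux computation behind VRPM can be sketched as integrating a fitted plume over the vertical plane and multiplying by the normal wind speed. The Gaussian parameters, grid, and wind speed below are illustrative assumptions, not OTM-10 field data:

```python
import numpy as np

# Vertical crosswind plane: y is crosswind distance, z is height.
y = np.linspace(-50.0, 50.0, 201)     # m
z = np.linspace(0.0, 30.0, 121)       # m
Y, Z = np.meshgrid(y, z)

# Assume the field concentrations have already been fit to a 2-D Gaussian
# plume with these (hypothetical) parameters.
C0, y0, z0, sy, sz = 80.0, 5.0, 4.0, 12.0, 6.0
C = C0 * np.exp(-((Y - y0) ** 2) / (2 * sy ** 2)
                - ((Z - z0) ** 2) / (2 * sz ** 2))   # ug/m^3

u_wind = 2.5                          # m/s, component normal to the plane

# Flux through the plane: wind speed times the plane integral of C
# (simple Riemann sum over the grid), converted from ug/s to g/s.
dy, dz = y[1] - y[0], z[1] - z[0]
flux = u_wind * C.sum() * dy * dz * 1e-6
print(f"plume flux through plane ~= {flux:.3f} g/s")
```

Dividing such a flux by an assumed area contributing to flux (ACF) is what turns the plane-crossing mass rate into the per-unit-area emission estimate the abstract discusses.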
Krishnan Kutty, S.; Sekhar, M.; Ruiz, L.; Tomer, S. K.; Bandyopadhyay, S.; Buis, S.; Guerif, M.; Gascuel-odoux, C.
2012-12-01
Groundwater recharge in a semi-arid region is generally low, but can exhibit high spatial variability depending on soil type and plant cover. The potential recharge (the drainage flux just beneath the root zone) is sensitive to water holding capacity and rooting depth (Rushton, 2003). Simple water balance approaches to recharge estimation often fail to consider the effects of plant cover, growth phases, and rooting depth. Hence a crop model based approach might be better suited to assess the sensitivity of recharge for various crop-soil combinations in agricultural catchments. Martinez et al. (2009), using a root zone modelling approach to estimate groundwater recharge, stressed that future studies should focus on quantifying the uncertainty in recharge estimates due to uncertainty in soil water parameters such as soil layers, field capacity, and rooting depth. Uncertainty in the parameters may arise from uncertainties in the variables retrieved from satellite (surface soil moisture and leaf area index). Hence a good estimate of the parameters, as well as their uncertainty, is essential for a reliable estimate of the potential recharge. In this study we focus on assessing the sensitivity of the potential recharge to crop and soil types by using the generic crop model STICS. The effect of uncertainty in the soil parameters on the estimates of recharge and its uncertainty is investigated. The multi-layer soil water parameters and their uncertainty are estimated by inversion of the STICS model using the GLUE approach. Surface soil moisture and LAI, either retrieved from microwave remote sensing data or measured in field plots (Sreelash et al., 2012), were found to provide good estimates of the soil water properties, and therefore both data sets were used in this study to estimate the parameters and the potential recharge for a combination of soil-crop systems. These investigations were made in two field experimental catchments. The first one is in the tropical semi
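A minimal GLUE-style sketch with a toy bucket model: sample parameter sets, keep the "behavioural" ones whose output matches the observation within an informal likelihood cut, and read the uncertainty band off the retained sets. The water-balance model, rainfall series, and behavioural threshold are all assumptions, not the STICS setup:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic daily rainfall for one year (mm); gamma draws give many dry
# or near-dry days, loosely mimicking a semi-arid climate.
rain = rng.gamma(0.2, 8.0, 365)

def model(whc, coef):
    # Toy bucket model: a root-zone store capped at the water-holding
    # capacity (overflow is lost as runoff); a fixed fraction drains
    # below the root zone each day and accumulates as potential recharge.
    store, drain = 0.0, 0.0
    for p in rain:
        store = min(store + p, whc)
        d = coef * store
        store -= d
        drain += d
    return drain

obs = model(120.0, 0.02)                     # synthetic "observed" recharge

# GLUE: sample parameter sets uniformly and keep the behavioural ones.
whc = rng.uniform(50.0, 250.0, 2000)
coef = rng.uniform(0.005, 0.05, 2000)
sim = np.array([model(w, c) for w, c in zip(whc, coef)])
behavioural = np.abs(sim - obs) / obs < 0.10  # informal likelihood cut

lo, hi = np.percentile(sim[behavioural], [5, 95])
print(f"{behavioural.sum()} behavioural sets; recharge 5-95%: {lo:.0f}-{hi:.0f} mm")
```

The spread of (whc, coef) pairs among the behavioural sets illustrates equifinality: many parameter combinations reproduce the observation equally well, which is exactly the parameter uncertainty GLUE is meant to expose.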
Energy Technology Data Exchange (ETDEWEB)
Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Dooraghi, Mike; Reda, Ibrahim; Kutchenreiter, Mark
2017-03-13
Traceable radiometric data sets are essential for validating climate models, satellite-based models for estimating solar resources, and solar radiation forecasts. Current state-of-the-art radiometers have uncertainties in the range of 2% to 5%, and sometimes more [1]. The National Renewable Energy Laboratory (NREL) and other organizations are identifying uncertainties, improving radiometric measurement performance, and developing a consensus methodology for acquiring radiometric data. This study analyzes the impact of differing specifications, such as cosine response, thermal offset, spectral response, and others, on the accuracy of radiometric data for various radiometers. The study also provides insight on how to perform a measurement uncertainty analysis and how to reduce the impact of some sources of uncertainty.
Evaluating the uncertainty in optimal crop management placements for bioenergy crop production
Sudheer, K. P.; Krishnan, N.; Chaubey, I.; Raj, C.
2016-12-01
Watershed scale simulation models are used to evaluate various `what if' questions and to make informed decisions. These mathematical models include many empirical and/or non-empirical parameters to represent various eco-hydrological processes. Parameter uncertainty is a major issue in mathematical model simulations, as the actual parameter values are often not available or not directly measurable. Model parameter uncertainty can affect simulation results and the consequent decisions. The objective of the study was to evaluate the parameter uncertainty of the Soil and Water Assessment Tool (SWAT) and the potential impact of that uncertainty on the decisions suggested for land use planning. An optimization-based land use planning case study was developed to identify an optimal cropping pattern, including bioenergy crops, in the St Joseph River watershed, IN, USA. The objective function for the land use optimization required a minimum feasible biomass production of 3,581 metric tons per day (under thermochemical conversion) for a biomass processing plant, with minimum biomass production cost and maximum environmental benefits. The parameter uncertainty of the SWAT model was assessed using the Shuffled Complex Evolutionary Metropolis algorithm (SCEM). Five representative parameter sets were selected from the prediction uncertainty interval to represent the parameter uncertainty. The SWAT model was linked with the AMALGAM optimizer to derive an optimal cropping pattern for the watershed. Five sets of land use optimizations were conducted considering the five sets of parameter values, and the effects of parameter uncertainty on the optimization results were quantified. The preliminary results showed that the simulation optimization results had some level of uncertainty that needed to be included in making land use decisions for bioenergy crop production.
David M. Bell; Eric J. Ward; A. Christopher Oishi; Ram Oren; Paul G. Flikkema; James S. Clark; David Whitehead
2015-01-01
Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as...
This paper presents a new method based on a statistical approach of estimating the uncertainty in simulating the transport and dispersion of atmospheric pollutants. The application of the method has been demonstrated by using observations and modeling results from a tracer experi...
DEFF Research Database (Denmark)
Cheali, Peam; Gernaey, Krist; Sin, Gürkan
2015-01-01
robust decision-making under uncertainties. One of the results using order-of-magnitude estimates shows that the production of diethyl ether and 1,3-butadiene are the most promising with the lowest economic risks (among the alternatives considered) of 0.24 MM$/a and 4.6 MM$/a, respectively....
Lopez Lopez, P.; Verkade, J.S.; Weerts, A.H.; Solomatine, D.P.
2014-01-01
The present study comprises an intercomparison of different configurations of a statistical post-processor that is used to estimate predictive hydrological uncertainty. It builds on earlier work by Weerts, Winsemius and Verkade (2011; hereafter referred to as WWV2011), who used the quantile
Saikawa, Eri; Trail, Marcus; Zhong, Min; Wu, Qianru; Young, Cindy L.; Janssens-Maenhout, Greet; Klimont, Zbigniew; Wagner, Fabian; Kurokawa, Jun-ichi; Singh Nagpure, Ajay; Ram Gurjar, Bhola
2017-05-01
Greenhouse gas and air pollutant precursor emissions have been increasing rapidly in India. Large uncertainties exist in emissions inventories, and quantification of these uncertainties is essential for a better understanding of the linkages among emissions and air quality, climate, and health. We use Monte Carlo methods to assess the uncertainties of the existing carbon dioxide (CO2), carbon monoxide (CO), sulfur dioxide (SO2), nitrogen oxides (NOx), and particulate matter (PM) emission estimates from four source sectors for India. We also assess differences among the existing emissions estimates within nine subnational regions. We find large uncertainties, higher than the current estimates, for all species other than CO when all the existing emissions estimates are combined. We further assess the impact of these differences in emissions on air quality using a chemical transport model. More efforts are needed to constrain emissions, especially in the Indo-Gangetic Plain, where both the differences among emissions estimates and the differences in simulated concentrations across inventories are large. Our study highlights the importance of constraining SO2, NOx, and NH3 emissions for secondary PM concentrations.
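The Monte Carlo approach described above can be sketched in a few lines. The sector names, activity levels, emission factors, and uncertainty ranges below are purely illustrative placeholders, not values from the study; the sketch only shows how per-sector lognormal uncertainties combine into an uncertainty interval for total emissions.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo draws

# Hypothetical sector inputs: activity (fuel burned, arbitrary units) and
# emission factor, each with a relative 1-sigma uncertainty (all made up).
sectors = {
    "power":       {"activity": 450.0, "act_rel_sd": 0.10, "ef": 7.0,  "ef_rel_sd": 0.30},
    "transport":   {"activity": 120.0, "act_rel_sd": 0.15, "ef": 12.0, "ef_rel_sd": 0.40},
    "residential": {"activity": 200.0, "act_rel_sd": 0.20, "ef": 4.0,  "ef_rel_sd": 0.50},
}

def lognormal_draws(mean, rel_sd, size):
    """Draw lognormal samples with the given arithmetic mean and relative SD."""
    sigma2 = np.log(1.0 + rel_sd**2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

# Total emissions = sum over sectors of (activity * emission factor)
total = np.zeros(N)
for s in sectors.values():
    act = lognormal_draws(s["activity"], s["act_rel_sd"], N)
    ef = lognormal_draws(s["ef"], s["ef_rel_sd"], N)
    total += act * ef

lo, med, hi = np.percentile(total, [2.5, 50, 97.5])
print(f"median = {med:.0f}, 95% interval = [{lo:.0f}, {hi:.0f}] (arbitrary units)")
```

The asymmetry of the resulting interval around the median is typical of multiplicative (lognormal) uncertainty models.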
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more
Uncertainties in Instantaneous Rainfall Rate Estimates: Satellite vs. Ground-Based Observations
Amitai, E.; Huffman, G. J.; Goodrich, D. C.
2012-12-01
High-resolution precipitation intensities are significant in many fields. For example, hydrological applications such as flood forecasting, runoff accommodation, erosion prediction, and urban hydrological studies depend on an accurate representation of the rainfall that does not infiltrate the soil, which is controlled by the rain intensities. Changes in the rain rate pdf over long periods are important for climate studies. Are our estimates accurate enough to detect such changes? While most evaluation studies focus on the accuracy of rainfall accumulation estimates, evaluation of instantaneous rainfall intensity estimates is relatively rare. Can a spaceborne radar help in assessing ground-based radar estimates of precipitation intensities, or is it the other way around? In this presentation we will provide some insight into the relative accuracy of instantaneous precipitation intensity fields from satellite and ground-based observations. We will examine satellite products such as those from the TRMM Precipitation Radar and those from several passive microwave imagers and sounders by comparing them with advanced high-resolution ground-based products taken at overpass time (snapshot comparisons). The ground-based instantaneous rain rate fields are based on in situ measurements (i.e., the USDA/ARS Walnut Gulch dense rain gauge network), remote sensing observations (i.e., the NOAA/NSSL NMQ/Q2 radar-only national mosaic), and multi-sensor products (i.e., high-resolution gauge-adjusted radar national mosaics, which we have developed by applying a gauge correction to the Q2 products).
Scholefield, P. A.; Arnscheidt, J.; Jordan, P.; Beven, K.; Heathwaite, L.
2007-12-01
The uncertainties associated with stream nutrient transport estimates are frequently overlooked, and the sampling strategy is rarely if ever investigated. Indeed, the impact of sampling strategy and estimation method on the bias and precision of stream phosphorus (P) transport calculations is little understood, despite the use of such values in the calibration and testing of models of phosphorus transport. The objectives of this research were to investigate the variability and uncertainty in the estimates of total phosphorus transfers at an intensively monitored agricultural catchment. The Oona Water, which is located in the Irish border region, is part of a long-term monitoring program focusing on water quality. The Oona Water is a rural river catchment with grassland agriculture and scattered dwelling houses and has been monitored for total phosphorus (TP) at 10 min resolution for several years (Jordan et al., 2007). Concurrent sensitive measurements of discharge are also collected. The water quality and discharge data were provided at 1 hour resolution (averaged), which meant that a robust estimate of the annual flow-weighted concentration could be obtained by simple interpolation between points. A two-strata approach (Kronvang and Bruhn, 1996) was used to estimate flow-weighted concentrations using randomly sampled storm events from the 400 identified within the time series and also base flow concentrations. Using a random stratified sampling approach for the selection of events, a series ranging from 10 through to the full 400 were used, each time generating a flow-weighted mean using a load-discharge relationship identified through log-log regression and Monte Carlo simulation. These values were then compared to the observed total phosphorus concentration for the catchment. Analysis of these results shows the impact of sampling strategy, the inherent bias in any estimate of phosphorus concentrations, and the uncertainty associated with such estimates. The
Gosset, Marielle; Casse, Claire; Peugeot, Christophe; Boone, Aaron; Pedinotti, Vanessa
2015-04-01
Global measurement of rainfall offers new opportunities for hydrological monitoring, especially for some of the largest tropical rivers where the rain gauge network is sparse and radar is not available. As a member of the GPM constellation, the new French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget in the tropical atmosphere, contributes to a better monitoring of rainfall in the inter-tropical zone. As part of this mission, research is developed on the use of satellite rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performance and rain-product uncertainties are resolution dependent. This paper analyses the potential of satellite rainfall products combined with hydrological modeling to monitor the Niger river floods in the city of Niamey, Niger. A dramatic increase of these floods has been observed in the last decades. The study focuses on the 125000 km2 area in the vicinity of Niamey, where local runoff is responsible for the most extreme floods recorded in recent years. Several rainfall products are tested as forcing to the SURFEX-TRIP hydrological simulations. Differences in terms of rainfall amount, number of rainy days, spatial extension of the rainfall events, and frequency distribution of the rain rates are found among the products. Their impact on the simulated outflow is analyzed. The simulations based on the real-time estimates produce an excess in the discharge. For flood prediction, the problem can be overcome by a prior adjustment of the products - as done here with probability matching - or by analysing the simulated discharge in terms of percentile or anomaly. All tested products exhibit some
Evaluating science arguments: evidence, uncertainty, and argument strength.
Corner, Adam; Hahn, Ulrike
2009-09-01
Public debates about socioscientific issues are increasingly prevalent, but the public response to messages about, for example, climate change does not always seem to match the seriousness of the problem identified by scientists. Is there anything unique about appeals based on scientific evidence: do people evaluate science and nonscience arguments differently? In an attempt to apply a systematic framework to people's evaluation of science arguments, the authors draw on the Bayesian approach to informal argumentation. The Bayesian approach permits questions about how people evaluate science arguments to be posed and comparisons to be made between the evaluation of science and nonscience arguments. In an experiment involving three separate argument evaluation tasks, the authors investigated whether people's evaluations of science and nonscience arguments differed in any meaningful way. Although some differences were observed in the relative strength of science and nonscience arguments, the evaluation of science arguments was determined by the same factors as nonscience arguments. Our results suggest that science communicators wishing to construct a successful appeal can make use of the Bayesian framework to distinguish strong and weak arguments.
Verhulst, Kristal R.; Karion, Anna; Kim, Jooil; Salameh, Peter K.; Keeling, Ralph F.; Newman, Sally; Miller, John; Sloop, Christopher; Pongetti, Thomas; Rao, Preeti; Wong, Clare; Hopkins, Francesca M.; Yadav, Vineet; Weiss, Ray F.; Duren, Riley M.; Miller, Charles E.
2017-07-01
We report continuous surface observations of carbon dioxide (CO2) and methane (CH4) from the Los Angeles (LA) Megacity Carbon Project during 2015. We devised a calibration strategy, methods for selection of background air masses, calculation of urban enhancements, and a detailed algorithm for estimating uncertainties in urban-scale CO2 and CH4 measurements. These methods are essential for understanding carbon fluxes from the LA megacity and other complex urban environments globally. We estimate background mole fractions entering LA using observations from four extra-urban sites including two marine sites located south of LA in La Jolla (LJO) and offshore on San Clemente Island (SCI), one continental site located in Victorville (VIC), in the high desert northeast of LA, and one continental/mid-troposphere site located on Mount Wilson (MWO) in the San Gabriel Mountains. We find that a local marine background can be established to within ˜ 1 ppm CO2 and ˜ 10 ppb CH4 using these local measurement sites. Overall, atmospheric carbon dioxide and methane levels are highly variable across Los Angeles. Urban and suburban sites show moderate to large CO2 and CH4 enhancements relative to a marine background estimate. The USC (University of Southern California) site near downtown LA exhibits median hourly enhancements of ˜ 20 ppm CO2 and ˜ 150 ppb CH4 during 2015 as well as ˜ 15 ppm CO2 and ˜ 80 ppb CH4 during mid-afternoon hours (12:00-16:00 LT, local time), which is the typical period of focus for flux inversions. The estimated measurement uncertainty is typically better than 0.1 ppm CO2 and 1 ppb CH4 based on the repeated standard gas measurements from the LA sites during the last 2 years, similar to Andrews et al. (2014). The largest component of the measurement uncertainty is due to the single-point calibration method; however, the uncertainty in the background mole fraction is much larger than the measurement uncertainty. The background uncertainty for the marine
Pulles, M.P.J.; Kok, H.; Quass, U.
2006-01-01
This study uses an improved emission inventory model to assess the uncertainties in emissions of dioxins and furans associated both with knowledge of the exact technologies and processes used, and with the uncertainties of activity data and emission factors. The annual total emissions for the
Addressing uncertainties in estimates of recoverable gas for underexplored shale gas basins
Heege, J.H. ter; Zijp, M.H.A.A.; Bruin, G. de; Veen, J.H. ten
2014-01-01
Uncertainties in upfront predictions of hydraulic fracturing and gas production of underexplored shale gas targets are important as often large potential resources are deduced based on limited available data. In this paper, uncertainties are quantified by using normal distributions of different
Comparison of uncertainties in carbon sequestration estimates for a tropical and a temperate forest
Nabuurs, G.J.; Putten, van B.; Knippers, T.S.; Mohren, G.M.J.
2008-01-01
We compare uncertainty through sensitivity and uncertainty analyses of the modelling framework CO2FIX V.2. We apply the analyses to a Central European managed Norway spruce stand and a secondary tropical forest in Central America. Based on literature and experience we use three standard groups to
The ATLAS collaboration
2016-01-01
The jet energy scale (JES) uncertainty is estimated using different methods in different pT ranges. In situ techniques exploiting the pT balance between a jet and a reference object (e.g. Z or gamma) are used at lower pT, but at very high pT (> 2.5 TeV) there are not enough statistics for in situ techniques. The JES uncertainty at high pT is important in several searches for new phenomena, e.g. the dijet resonance and angular searches. In the highest pT range, the JES uncertainty is estimated using the calorimeter response to single hadrons. In this method, jets are treated as a superposition of energy depositions of single particles. An uncertainty is applied to each energy deposition belonging to the particles within the jet and propagated to the final jet energy scale. This poster presents the JES uncertainty found with this method at sqrt(s) = 8 TeV and its developments.
Energy Technology Data Exchange (ETDEWEB)
Ohnishi, S., E-mail: ohnishi@nmri.go.jp [National Maritime Research Institute, 6-38-1, Shinkawa, Mitaka, Tokyo 181-0004 (Japan); Thornton, B. [Institute of Industrial Science, The University of Tokyo, 4-6-1, Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Kamada, S.; Hirao, Y.; Ura, T.; Odano, N. [National Maritime Research Institute, 6-38-1, Shinkawa, Mitaka, Tokyo 181-0004 (Japan)
2016-05-21
Factors to convert the count rate of a NaI(Tl) scintillation detector to the concentration of radioactive cesium in marine sediments are estimated for a towed gamma-ray detector system. The response of the detector against a unit concentration of radioactive cesium is calculated by Monte Carlo radiation transport simulation considering the vertical profile of radioactive material measured in core samples. The conversion factors are acquired by integrating the contribution of each layer and are normalized by the concentration in the surface sediment layer. At the same time, the uncertainty of the conversion factors is formulated and estimated. The combined standard uncertainty of the radioactive cesium concentration by the towed gamma-ray detector is around 25 percent. The values of uncertainty, often referred to as relative root mean square errors in other works, between sediment core sampling measurements and towed detector measurements were 16 percent in the investigation made near the Abukuma River mouth and 5.2 percent in Sendai Bay, respectively. Most of the uncertainty is due to interpolation of the conversion factors between core samples and uncertainty of the detector's burial depth. The results of the towed measurements agree well with laboratory-analysed sediment samples. Also, the concentrations of radioactive cesium at the intersection of each survey line are consistent. The consistency with sampling results and between different lines' transects demonstrates the applicability and reproducibility of the towed gamma-ray detector system.
Ali, E S M; Spencer, B; McEwen, M R; Rogers, D W O
2015-02-21
In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy-i.e. 100 keV (orthovoltage) to 25 MeV-using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ∼0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative 'envelope of uncertainty' of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol 44 R1-22).
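The "optimum energy-independent cross section scaling factor" used in this kind of analysis is a one-parameter weighted least-squares fit. The sketch below illustrates the idea with made-up measured and calculated attenuation values (the numbers are not from the paper); it assumes the optimum k minimizes the chi-square between measurements and scaled calculations.

```python
import numpy as np

# Hypothetical measured vs. calculated attenuation values with ~0.5% (k = 1)
# experimental uncertainty; these numbers are illustrative only.
measured = np.array([0.824, 0.512, 0.331, 0.208])
calculated = np.array([0.830, 0.505, 0.335, 0.206])
sigma = 0.005 * measured

# Weighted least-squares solution for a single energy-independent scale
# factor k minimizing sum(((measured - k*calculated)/sigma)**2).
w = 1.0 / sigma**2
k = np.sum(w * measured * calculated) / np.sum(w * calculated**2)
k_sd = np.sqrt(1.0 / np.sum(w * calculated**2))  # standard WLS variance of k

print(f"scale factor k = {k:.4f} +/- {k_sd:.4f}")
```

A deviation of k from unity larger than its standard error would indicate a systematic cross section discrepancy, which is the quantity the paper converts into a cross section uncertainty estimate.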
Added Value of uncertainty Estimates of SOurce term and Meteorology (AVESOME)
DEFF Research Database (Denmark)
Sørensen, Jens Havskov; Schönfeldt, Fredrik; Sigg, Robert
The uncertainty in atmospheric dispersion model forecasting stemming from both the source term and the meteorological data is examined. Ways to implement the uncertainties of forecasting in DSSs, and the impacts on real-time emergency management, are described. The proposed methodology allows for efficient real-time … e.g. at national meteorological services, the proposed methodology is feasible for real-time use, thereby adding value to decision support. In the recent NKS-B projects MUD, FAUNA and MESO, the implications of meteorological uncertainties for nuclear emergency preparedness and management have been studied …, and means for operational real-time assessment of the uncertainties in a nuclear DSS have been described and demonstrated. In AVESOME, we address the uncertainty of the radionuclide source term, i.e. the amounts of radionuclides released and the temporal evolution of the release. Furthermore, the combined …
Directory of Open Access Journals (Sweden)
B. Ford
2016-03-01
The negative impacts of fine particulate matter (PM2.5) exposure on human health are a primary motivator for air quality research. However, estimates of the air pollution health burden vary considerably and strongly depend on the data sets and methodology. Satellite observations of aerosol optical depth (AOD) have been widely used to overcome limited coverage from surface monitoring and to assess the global population exposure to PM2.5 and the associated premature mortality. Here we quantify the uncertainty in determining the burden of disease using this approach, discuss different methods and data sets, and explain sources of discrepancies among values in the literature. For this purpose we primarily use the MODIS satellite observations in concert with the GEOS-Chem chemical transport model. We contrast results in the United States and China for the years 2004–2011. Using the Burnett et al. (2014) integrated exposure response function, we estimate that in the United States, exposure to PM2.5 accounts for approximately 2 % of total deaths compared to 14 % in China (using satellite-based exposure), which falls within the range of previous estimates. The difference in estimated mortality burden based solely on a global model vs. that derived from satellite is approximately 14 % for the US and 2 % for China on a nationwide basis, although regionally the differences can be much greater. This difference is overshadowed by the uncertainty in the methodology for deriving the PM2.5 burden from satellite observations, which we quantify to be on the order of 20 % due to uncertainties in the AOD-to-surface-PM2.5 relationship, 10 % due to the satellite observational uncertainty, and 30 % or greater uncertainty associated with the application of concentration response functions to estimated exposure.
Evaluating concentration estimation errors in ELISA microarray experiments
Directory of Open Access Journals (Sweden)
Anderson Kevin K
2005-01-01
Background: Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to estimate a protein's concentration in a sample. Deploying ELISA in a microarray format permits simultaneous estimation of the concentrations of numerous proteins in a small sample. These estimates, however, are uncertain due to processing error and biological variability. Evaluating estimation error is critical to interpreting biological significance and improving the ELISA microarray process. Estimation error evaluation must be automated to realize a reliable high-throughput ELISA microarray system. In this paper, we present a statistical method based on propagation of error to evaluate concentration estimation errors in the ELISA microarray process. Although propagation of error is central to this method and the focus of this paper, it is most effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization, and statistical diagnostics when evaluating ELISA microarray concentration estimation errors. Results: We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of concentration estimation errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error. We summarize the results with a simple, three-panel diagnostic visualization featuring a scatterplot of the standard data with logistic standard curve and 95% confidence intervals, an annotated histogram of sample measurements, and a plot of the 95% concentration coefficient of variation, or relative error, as a function of concentration. Conclusions: This statistical method should be of value in the rapid evaluation and quality control of high
Evaluating concentration estimation errors in ELISA microarray experiments.
Daly, Don Simone; White, Amanda M; Varnum, Susan M; Anderson, Kevin K; Zangar, Richard C
2005-01-26
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to estimate a protein's concentration in a sample. Deploying ELISA in a microarray format permits simultaneous estimation of the concentrations of numerous proteins in a small sample. These estimates, however, are uncertain due to processing error and biological variability. Evaluating estimation error is critical to interpreting biological significance and improving the ELISA microarray process. Estimation error evaluation must be automated to realize a reliable high-throughput ELISA microarray system. In this paper, we present a statistical method based on propagation of error to evaluate concentration estimation errors in the ELISA microarray process. Although propagation of error is central to this method and the focus of this paper, it is most effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization, and statistical diagnostics when evaluating ELISA microarray concentration estimation errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of concentration estimation errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error. We summarize the results with a simple, three-panel diagnostic visualization featuring a scatterplot of the standard data with logistic standard curve and 95% confidence intervals, an annotated histogram of sample measurements, and a plot of the 95% concentration coefficient of variation, or relative error, as a function of concentration. This statistical method should be of value in the rapid evaluation and quality control of high-throughput ELISA microarray analyses. Applying propagation of error to
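As a rough illustration of the propagation-of-error step, the sketch below inverts an assumed four-parameter logistic (4PL) standard curve and converts signal uncertainty into a concentration coefficient of variation via a numerical derivative. The curve parameters and signal values are hypothetical, and the full pipeline described in the abstract (screening, normalization, diagnostics) is omitted here.

```python
import numpy as np

# Assumed fitted 4PL standard curve: y = d + (a - d) / (1 + (x / c)**b)
a, b, c, d = 2.0, 1.2, 50.0, 0.05  # hypothetical fitted parameters

def invert_4pl(y):
    """Concentration x corresponding to signal y on the 4PL curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

def conc_cv(y, y_sd):
    """Propagation-of-error estimate of the concentration CV at signal y:
    var(x) ~= (dx/dy)**2 * var(y), with dx/dy taken numerically."""
    x = invert_4pl(y)
    eps = 1e-6
    dxdy = (invert_4pl(y + eps) - invert_4pl(y - eps)) / (2 * eps)
    x_sd = abs(dxdy) * y_sd
    return x, x_sd / x

x, cv = conc_cv(y=1.0, y_sd=0.03)
print(f"estimated concentration = {x:.1f}, relative error (CV) = {cv:.1%}")
```

Plotting `conc_cv` across the signal range reproduces the familiar U-shaped relative-error curve: the CV blows up near the asymptotes a and d, where the standard curve is flat.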
Statistical evaluation of the influence of the uncertainty budget on B-spline curve approximation
Zhao, Xin; Alkhatib, Hamza; Kargoll, Boris; Neumann, Ingo
2017-12-01
In the field of engineering geodesy, terrestrial laser scanning (TLS) has become a popular method for detecting deformations. This paper analyzes the influence of the uncertainty budget on free-form curves modeled by B-splines. Usually, free-form estimation is based on scanning points assumed to have equal accuracies, which is not realistic. Previous findings demonstrate that the residuals still contain random and systematic uncertainties caused by instrumental, object-related and atmospheric influences. In order to guarantee the quality of derived estimates, it is essential to be aware of all uncertainties and their impact on the estimation. In this paper, a more detailed uncertainty budget is considered, in the context of the "Guide to the Expression of Uncertainty in Measurement" (GUM), which leads to a refined, heteroskedastic variance-covariance matrix (VCM) of TLS measurements. Furthermore, the control points of B-spline curves approximating a measured bridge are estimated. Comparisons are made between the B-spline curves estimated using, on the one hand, a homoskedastic VCM and, on the other hand, the refined VCM. To assess the statistical significance of the differences displayed by the estimates for the two stochastic models, a nested model misspecification test and a non-nested model selection test are described and applied. The test decisions indicate that the homoskedastic VCM should be replaced by a heteroskedastic VCM in the direction of the suggested VCM. However, the tests also indicate that the considered VCM is still inadequate in light of the given data set and should therefore be improved.
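A minimal sketch of control-point estimation with a heteroskedastic VCM is given below, assuming a diagonal VCM and simulated scan data; the Cox-de Boor basis construction and all numbers are illustrative, not the paper's bridge data or its full GUM-based uncertainty budget.

```python
import numpy as np

def bspline_basis(u, knots, degree, i):
    """Cox-de Boor recursion for the i-th B-spline basis function at u."""
    if degree == 0:
        return np.where((knots[i] <= u) & (u < knots[i + 1]), 1.0, 0.0)
    left = np.zeros_like(u)
    right = np.zeros_like(u)
    den1 = knots[i + degree] - knots[i]
    if den1 > 0:
        left = (u - knots[i]) / den1 * bspline_basis(u, knots, degree - 1, i)
    den2 = knots[i + degree + 1] - knots[i + 1]
    if den2 > 0:
        right = (knots[i + degree + 1] - u) / den2 * bspline_basis(u, knots, degree - 1, i + 1)
    return left + right

# Cubic curve, clamped knot vector, 6 control points (assumed setup).
degree, n_ctrl = 3, 6
knots = np.concatenate([[0.0] * degree, np.linspace(0, 1, n_ctrl - degree + 1), [1.0] * degree])
u = np.linspace(0, 1, 60, endpoint=False)  # parameter values of the scan points
A = np.column_stack([bspline_basis(u, knots, degree, i) for i in range(n_ctrl)])

# Simulated noisy observations of a curve, with heteroskedastic variances
# (variance grows along the curve, standing in for a refined VCM).
rng = np.random.default_rng(0)
truth = np.sin(2 * np.pi * u)
var = 0.01 + 0.04 * u
y = truth + rng.normal(0, np.sqrt(var))

# Generalized least squares: ctrl = (A' W A)^-1 A' W y with W = VCM^-1.
W = np.diag(1.0 / var)
ctrl = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
fit = A @ ctrl
rmse = np.sqrt(np.mean((fit - truth) ** 2))
print(f"GLS B-spline fit RMSE vs. truth: {rmse:.3f}")
```

Replacing `W` with a scaled identity gives the homoskedastic estimate the paper compares against; the GLS variant down-weights the noisier end of the curve.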
The effect of dose calculation uncertainty on the evaluation of radiotherapy plans.
Keall, P J; Siebers, J V; Jeraj, R; Mohan, R
2000-03-01
Monte Carlo dose calculations will potentially reduce systematic errors that may be present in currently used dose calculation algorithms. However, Monte Carlo calculations inherently contain random errors, or statistical uncertainty, the level of which decreases inversely with the square root of computation time. Our purpose in this study was to determine the level of uncertainty at which a lung treatment plan is clinically acceptable. The evaluation methods to decide acceptability were visual examination of both isodose lines on CT scans and dose volume histograms (DVHs), and reviewing calculated biological indices. To study the effect of systematic and/or random errors on treatment plan evaluation, a simulated "error-free" reference plan was used as a benchmark. The relationship between Monte Carlo statistical uncertainty and dose was found to be approximately proportional to the square root of the dose. Random and systematic errors were applied to a calculated lung plan, creating dose distributions with statistical uncertainties of between 0% and 16% (1 s.d.) at the maximum dose point and also distributions with systematic errors of -16% to 16% at the maximum dose point. Critical structure DVHs and biological indices are less sensitive to calculation uncertainty than those of the target. Systematic errors affect plan evaluation accuracy significantly more than random errors, suggesting that Monte Carlo dose calculation will improve outcomes in radiotherapy. A statistical uncertainty of 2% or less does not significantly affect isodose lines, DVHs, or biological indices.
Hagen, S. C.; Braswell, B. H.; Linder, E.; Frolking, S.; Richardson, A. D.; Hollinger, D. Y.
2006-04-01
We present an uncertainty analysis of gross ecosystem carbon exchange (GEE) estimates derived from 7 years of continuous eddy covariance measurements of forest-atmosphere CO2 fluxes at Howland Forest, Maine, USA. These data, which have high temporal resolution, can be used to validate process modeling analyses, remote sensing assessments, and field surveys. However, separation of tower-based net ecosystem exchange (NEE) into its components (respiration losses and photosynthetic uptake) requires at least one application of a model, which is usually a regression model fitted to nighttime data and extrapolated for all daytime intervals. In addition, the existence of a significant amount of missing data in eddy flux time series requires a model for daytime NEE as well. Statistical approaches for analytically specifying prediction intervals associated with a regression require, among other things, constant variance of the data, normally distributed residuals, and linearizable regression models. Because the NEE data do not conform to these criteria, we used a Monte Carlo approach (bootstrapping) to quantify the statistical uncertainty of GEE estimates and present this uncertainty in the form of 90% prediction limits. We explore two examples of regression models for modeling respiration and daytime NEE: (1) a simple, physiologically based model from the literature and (2) a nonlinear regression model based on an artificial neural network. We find that uncertainty at the half-hourly timescale is generally on the order of the observations themselves (i.e., ˜100%) but is much less at annual timescales (˜10%). On the other hand, this small absolute uncertainty is commensurate with the interannual variability in estimated GEE. The largest uncertainty is associated with choice of model type, which raises basic questions about the relative roles of models and data.
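The bootstrap idea can be sketched as follows, using a simple log-linearized Q10 respiration model in place of the authors' regression models; all data are simulated and the parameter values are assumptions, but the structure (refit on resampled data, take percentiles of the derived quantity) matches the Monte Carlo approach described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated nighttime data: respiration vs. temperature under a Q10-type
# model R = R0 * Q10**(T / 10), with hypothetical R0 = 2.0, Q10 = 2.1 and
# multiplicative observation noise.
T = rng.uniform(-5, 25, 200)
R = 2.0 * 2.1 ** (T / 10) * np.exp(rng.normal(0, 0.15, T.size))

def fit_q10(T, R):
    """Fit the log-linearized Q10 model; returns (R0, Q10)."""
    slope, intercept = np.polyfit(T, np.log(R), 1)
    return np.exp(intercept), np.exp(10 * slope)

# Bootstrap: refit on resampled (T, R) pairs, each time recomputing an
# annual respiration total over a fixed stand-in temperature record.
n_boot = 1000
T_grid = rng.uniform(-5, 25, 365)
annual = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, T.size, T.size)
    R0_b, Q10_b = fit_q10(T[idx], R[idx])
    annual[b] = np.sum(R0_b * Q10_b ** (T_grid / 10))

lo, hi = np.percentile(annual, [5, 95])
print(f"90% bootstrap limits on annual respiration: [{lo:.0f}, {hi:.0f}]")
```

As in the abstract, the relative width of the annual interval is much smaller than that of any single half-hourly prediction, because independent errors largely average out over the year.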
Using hybrid method to evaluate the green performance in uncertainty.
Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping
2011-04-01
Green performance measurement is vital for enterprises in making continuous improvements to maintain sustainable competitive advantages. Evaluation of green performance, however, is a challenging task due to the dependence complexity of the aspects and criteria, together with the linguistic vagueness of some qualitative information and quantitative data. To deal with this issue, this study proposes a novel approach to evaluate the dependence aspects and criteria of a firm's green performance. The rationale of the proposed approach, namely the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA) methods, wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the dependence aspects and criteria into an intelligible structural model used in IPA. For the empirical case study, four dependence aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed.
Directory of Open Access Journals (Sweden)
L. Altarejos-García
2012-07-01
Full Text Available This paper addresses the use of reliability techniques such as Rosenblueth's Point-Estimate Method (PEM) as a practical alternative to more precise Monte Carlo approaches for estimating the mean and variance of the uncertain flood parameters water depth and velocity. These parameters define flood severity, a concept used for decision-making in the context of flood risk assessment. The proposed method is particularly useful when the complexity of the hydraulic models makes Monte Carlo inapplicable in terms of computing time, but a measure of the variability of these parameters is still needed. The capacity of PEM, which is a special case of numerical quadrature based on orthogonal polynomials, to evaluate the first two moments of performance functions such as water depth and velocity is demonstrated for a single river reach using a 1-D HEC-RAS model. It is shown that in some cases, using a simple variable transformation, the statistical distributions of both water depth and velocity approximate the lognormal. As this distribution is fully defined by its mean and variance, PEM can be used to define the full probability distribution function of these flood parameters, thus allowing probability estimates of flood severity. The method is then applied to the same river reach using a 2-D Shallow Water Equations (SWE) model. Flood maps of the mean and standard deviation of water depth and velocity are obtained, and the uncertainty in the extent of flooded areas with different severity levels is assessed. It is recognized, though, that whenever application of the Monte Carlo method is practically feasible, it remains the preferred approach.
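Rosenblueth's PEM evaluates the model at all 2^n combinations of mean ± standard deviation of the n uncertain inputs, with equal weights, and recovers the first two moments of the output. A minimal sketch for uncorrelated inputs (the toy "depth" function and its numbers are hypothetical, not an actual HEC-RAS run):

```python
import itertools
import numpy as np

def rosenblueth_pem(g, mu, sigma):
    """First two moments of Y = g(X) for uncorrelated inputs with means `mu`
    and standard deviations `sigma`, using Rosenblueth's 2^n point estimates
    (each point weighted 1/2^n)."""
    ys = np.array([g(*[m + s * sd for m, s, sd in zip(mu, signs, sigma)])
                   for signs in itertools.product((-1.0, 1.0), repeat=len(mu))])
    mean = ys.mean()
    var = (ys ** 2).mean() - mean ** 2
    return mean, var

# Hypothetical stage-discharge-style response: depth as a nonlinear function
# of discharge q and roughness n (illustration only).
mean_h, var_h = rosenblueth_pem(lambda q, n: (q * n) ** 0.6,
                                mu=[100.0, 0.03], sigma=[20.0, 0.005])
print(mean_h, var_h)
```

Only 2^n model runs are needed instead of thousands of Monte Carlo samples, which is why PEM stays feasible for expensive 2-D hydraulic models.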
Shen, Mingxi; Chen, Jie; Zhuan, Meijia; Chen, Hua; Xu, Chong-Yu; Xiong, Lihua
2018-01-01
Uncertainty estimation of climate change impacts on hydrology has received much attention in the research community. The choice of a global climate model (GCM) is usually considered the largest contributor to the uncertainty of climate change impacts. The temporal variation of GCM uncertainty needs to be investigated for making long-term decisions to deal with climate change. Accordingly, this study investigated the temporal variation (mainly long-term) of the uncertainty related to the choice of a GCM in predicting climate change impacts on hydrology, using multiple GCMs over multiple continuous future periods. Specifically, twenty CMIP5 GCMs under the RCP4.5 and RCP8.5 emission scenarios were adopted to represent the uncertainty envelope, and fifty-one 30-year future periods moving from 2021 to 2100 at a 1-year interval were used to express the temporal variation. Future climatic and hydrological regimes over all future periods were compared to those in the reference period (1971-2000) using a set of metrics, including means and extremes. The periodicity of climatic and hydrological changes and their uncertainty was analyzed using wavelet analysis, while trends were analyzed using the Mann-Kendall trend test and regression analysis. The results showed that both the future climate changes (precipitation and temperature) and the hydrological responses predicted by the twenty GCMs were highly uncertain, and the uncertainty increased significantly over time. For example, the change in mean annual precipitation increased from 1.4% in 2021-2050 to 6.5% in 2071-2100 for RCP4.5 in terms of the multi-model median, but the projected uncertainty reached 21.7% in 2021-2050 and 25.1% in 2071-2100. The uncertainty under the high emission scenario (RCP8.5) was much larger than that under the relatively low emission scenario (RCP4.5). Almost all climatic and hydrological regimes and their uncertainty showed no significant periodicity at the P = .05 significance level.
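The Mann-Kendall trend test used above is simple to implement. A minimal sketch without tie correction (the trending series below is a hypothetical example, not the study's GCM output):

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic and normal-approximation Z
    (no tie correction; minimal sketch of the standard test)."""
    x = np.asarray(x, float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)
    return s, z

# Hypothetical annual series with an upward trend plus noise.
rng = np.random.default_rng(0)
series = np.arange(30) * 0.5 + rng.normal(0.0, 1.0, 30)
s, z = mann_kendall(series)
print(s, z)   # |z| > 1.96 indicates a significant trend at the 5% level
```

Because the test is rank-based, it is robust to the non-Gaussian distributions typical of precipitation and streamflow series.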
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2015-04-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among reference ET is far more important than model parametric uncertainty introduced by crop coefficients. These crop coefficients are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
The grey relational approach for evaluating measurement uncertainty with poor information
Luo, Zai; Wang, Yanqing; Zhou, Weihu; Wang, Zhongyu
2015-12-01
The Guide to the Expression of Uncertainty in Measurement (GUM) is the master document for measurement uncertainty evaluation. However, the GUM may encounter problems and does not work well when the measurement data carry poor information. In most cases, poor information means a small data sample and an unknown probability distribution, and in these cases the evaluation of measurement uncertainty becomes a bottleneck in practical measurement. To solve this problem, a novel method called the grey relational approach (GRA), distinct from statistical theory, is proposed in this paper. The GRA requires neither a large sample size nor probability distribution information about the measurement data. Mathematically, the GRA can be divided into three parts. First, according to grey relational analysis, the grey relational coefficients between the ideal and the practical measurement output series are obtained. Second, the weighted coefficients and the measurement expectation function are acquired based on the grey relational coefficients. Finally, the measurement uncertainty is evaluated based on grey modeling. To validate the performance of this method, simulation experiments were performed, and the evaluation results show that the GRA keeps the average error around 5%. The GRA was also compared with the grey method, the Bessel method, and the Monte Carlo method on a real stress measurement. Both the simulation experiments and the real measurement show that the GRA is appropriate and effective for evaluating measurement uncertainty with poor information.
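The first step, computing grey relational coefficients, follows the standard formula xi(k) = (dmin + rho*dmax) / (delta(k) + rho*dmax), where delta is the absolute deviation from the reference series and rho is the distinguishing coefficient (commonly 0.5). A minimal sketch with made-up series:

```python
import numpy as np

def grey_relational_coefficients(ideal, series, rho=0.5):
    """Grey relational coefficients between an ideal (reference) series and
    each comparison series; rho is the distinguishing coefficient."""
    ideal = np.asarray(ideal, float)
    series = np.atleast_2d(np.asarray(series, float))
    delta = np.abs(series - ideal)          # deviation sequences
    dmin, dmax = delta.min(), delta.max()
    return (dmin + rho * dmax) / (delta + rho * dmax)

# Hypothetical reference output and two measurement series.
xi = grey_relational_coefficients([1.0, 2.0, 3.0],
                                  [[1.1, 1.9, 3.2], [0.5, 2.5, 2.0]])
print(xi)
```

Coefficients lie in (0, 1], with 1 meaning the comparison series matches the reference at that point; the later GRA steps weight and aggregate these coefficients.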
Zunino, Andrea; Mosegaard, Klaus
2017-04-01
Reservoir properties of interest are linked only indirectly to the observable geophysical data recorded at the Earth's surface. In this framework, seismic data represent one of the most reliable tools for studying the structure and properties of the subsurface for natural resources. Nonetheless, seismic analysis is not an end in itself, as physical properties such as porosity are often of more interest for reservoir characterization. Inference of those properties therefore implies also taking into account rock physics models linking porosity and other physical properties to elastic parameters. In the framework of seismic reflection data, we address this challenge for a reservoir target zone employing a probabilistic method characterized by a multi-step nonlinear forward modeling that combines: (1) a rock physics model, (2) the solution of the full Zoeppritz equations, and (3) convolutional seismic forward modeling. The target property of this work is porosity, which is inferred using a Monte Carlo approach where porosity models, i.e., solutions to the inverse problem, are sampled directly from the posterior distribution. From a theoretical point of view, the Monte Carlo strategy is particularly useful in the presence of nonlinear forward models, which is often the case when employing sophisticated rock physics models and the full Zoeppritz equations, and for estimating the related uncertainty. However, the resulting computational challenge is huge. We propose to alleviate this computational burden by assuming some smoothness of the subsurface parameters and consequently parameterizing the model in terms of spline bases. This gives us a certain flexibility in that the number of spline bases, and hence the resolution in each spatial direction, can be controlled. The method is tested on a 3-D synthetic case and on a 2-D real data set.
Uncertainty in global groundwater storage estimates in a Total Groundwater Stress framework
Richey, Alexandra S.; Thomas, Brian F.; Lo, Min‐Hui; Swenson, Sean; Rodell, Matthew
2015-01-01
Abstract Groundwater is a finite resource under continuous external pressures. Current unsustainable groundwater use threatens the resilience of aquifer systems and their ability to provide a long‐term water source. Groundwater storage is considered to be a factor of groundwater resilience, although the extent to which resilience can be maintained has yet to be explored in depth. In this study, we assess the limit of groundwater resilience in the world's largest groundwater systems with remote sensing observations. The Total Groundwater Stress (TGS) ratio, defined as the ratio of total storage to the groundwater depletion rate, is used to explore the timescales to depletion in the world's largest aquifer systems and associated groundwater buffer capacity. We find that the current state of knowledge of large‐scale groundwater storage has uncertainty ranges across orders of magnitude that severely limit the characterization of resilience in the study aquifers. Additionally, we show that groundwater availability, traditionally defined as recharge and redefined in this study as total storage, can alter the systems that are considered to be stressed versus unstressed. We find that remote sensing observations from NASA's Gravity Recovery and Climate Experiment can assist in providing such information at the scale of a whole aquifer. For example, we demonstrate that a groundwater depletion rate in the Northwest Sahara Aquifer System of 2.69 ± 0.8 km3/yr would result in the aquifer being depleted to 90% of its total storage in as few as 50 years given an initial storage estimate of 70 km3. PMID:26900184
Energy Technology Data Exchange (ETDEWEB)
Kim, Joo Yeon; Lee, Seung Hyun; Park, Tai Jin [Korean Association for Radiation Application, Seoul (Korea, Republic of)
2016-06-15
Any real application of Bayesian inference must acknowledge that both the prior distribution and the likelihood function are only more or less convenient approximations to the analyzer's true beliefs. If the inferences from a Bayesian analysis are to be trusted, it is important to determine that they are robust to such variations of prior and likelihood as are also consistent with the analyzer's stated beliefs. Robust Bayesian inference was applied to atmospheric dispersion assessment using a Gaussian plume model. The scope of contamination was specified in terms of uncertainty in the distribution type and parametric variability. The probability distributions of the model parameters were assumed to be contaminated by symmetric unimodal and unimodal distributions. The distribution of the sector-averaged relative concentrations was then calculated by applying the contaminated priors to the model parameters. The sector-averaged concentrations for each stability class were compared using the symmetric unimodal and unimodal priors, respectively, as the contaminating distributions in the ε-contamination class. With ε assumed to be 10%, the medians obtained with the symmetric unimodal priors agreed to within about 10% with those obtained with the plausible priors, whereas the medians obtained with the unimodal priors agreed only to within about 20% at a few downwind distances. Robustness was thus assessed by estimating how far the results of the Bayesian inferences vary under reasonable variations of the plausible priors. From these robust inferences, it is reasonable to apply the symmetric unimodal priors when analyzing the robustness of Bayesian inferences.
Cummins, P. R.; Benavente, R. F.; Dettmer, J.; Williamson, A.
2016-12-01
Rapid estimation of the slip distribution for large earthquakes can be useful in the early phases of emergency response, rapid impact assessment, and tsunami early warning. Model parameter uncertainties can be crucial for meaningful interpretation of such slip models, but they are often ignored. However, estimating uncertainty in linear finite fault inversion is difficult because of the positivity constraints that are almost always applied. We have shown in previous work that positivity can be realized by imposing a prior such that the logs of the subfault scalar moments are smoothly distributed on the fault surface; each scalar moment is then intrinsically non-negative while the posterior PDF can still be approximated as Gaussian. The inversion is nonlinear, but we showed that the most probable solution can be found by iterative methods that are not computationally demanding. In addition, the posterior covariance matrix (which provides the uncertainties) can be estimated from the most probable solution using an analytic expression for the Hessian of the cost function. We previously studied this approach for synthetic W-phase data and showed that a first-order estimate of the uncertainty in the slip model can be obtained. Here we apply the method to seismic W-phase data recorded following the 2015 Mw 8.3 Illapel earthquake. Our results show a slip distribution with maximum slip near the subduction zone trench axis and uncertainties that scale roughly with the slip value. We also consider application of this method to multiple data types: seismic W-phase, geodetic, and tsunami.
Ershadi, Ali
2013-05-01
The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model. The Bayesian approach allows for an explicit quantification of the uncertainties in input variables: a source of error generally ignored in surface heat flux estimation. An application using field measurements from the Soil Moisture Experiment 2002 is presented. The spatial variability of selected input meteorological variables in a multitower site is used to formulate the prior estimates for the sampling uncertainties, and the likelihood function is formulated assuming Gaussian errors in the SEBS model. Land surface temperature, air temperature, and wind speed were estimated by sampling their posterior distribution using a Markov chain Monte Carlo algorithm. Results verify that Bayesian-inferred air temperature and wind speed were generally consistent with those observed at the towers, suggesting that local observations of these variables were spatially representative. Uncertainties in the land surface temperature appear to have the strongest effect on the estimated sensible heat flux, with Bayesian-inferred values differing by up to ±5°C from the observed data. These differences suggest that the footprint of the in situ measured land surface temperature is not representative of the larger-scale variability. As such, these measurements should be used with caution in the calculation of surface heat fluxes and highlight the importance of capturing the spatial variability in the land surface temperature: particularly, for remote sensing retrieval algorithms that use this variable for flux estimation.
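The core of such a Bayesian treatment is sampling the posterior of an uncertain input with MCMC. A minimal random-walk Metropolis sketch, with a toy linear flux model standing in for SEBS and entirely hypothetical numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for SEBS (illustration only): sensible heat flux proportional
# to the surface-air temperature difference.
def flux(ts, ta, c=25.0):
    return c * (ts - ta)

ts_obs, h_obs, sigma_h = 305.0, 125.0, 15.0   # hypothetical observations (K, W/m2)

def log_post(ta):
    prior = -0.5 * ((ta - 300.0) / 2.0) ** 2                  # Gaussian prior on Ta
    like = -0.5 * ((h_obs - flux(ts_obs, ta)) / sigma_h) ** 2  # Gaussian model error
    return prior + like

# Random-walk Metropolis sampling of the posterior for air temperature.
ta, lp, samples = 300.0, log_post(300.0), []
for _ in range(5000):
    prop = ta + rng.normal(0.0, 0.5)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        ta, lp = prop, lp_prop
    samples.append(ta)
post = np.array(samples[1000:])                # discard burn-in
print(post.mean(), post.std())
```

The posterior spread of `post` directly quantifies how much the flux observation constrains the input variable beyond its prior, which is the comparison the study makes against tower observations.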
Li, Aihua; Dhakal, Shital; Glenn, Nancy F.; Spaete, Luke P.; Shinneman, Douglas; Pilliod, David; Arkle, Robert; McIlroy, Susan
2017-01-01
Our study objectives were to model the aboveground biomass in a xeric shrub-steppe landscape with airborne light detection and ranging (Lidar) and explore the uncertainty associated with the models we created. We incorporated vegetation vertical structure information obtained from Lidar with ground-measured biomass data, allowing us to scale shrub biomass from small field sites (1 m subplots and 1 ha plots) to a larger landscape. A series of airborne Lidar-derived vegetation metrics were trained and linked with the field-measured biomass in Random Forests (RF) regression models. A Stepwise Multiple Regression (SMR) model was also explored as a comparison. Our results demonstrated that the important predictors from Lidar-derived metrics had a strong correlation with field-measured biomass in the RF regression models with a pseudo R2 of 0.76 and RMSE of 125 g/m2 for shrub biomass and a pseudo R2 of 0.74 and RMSE of 141 g/m2 for total biomass, and a weak correlation with field-measured herbaceous biomass. The SMR results were similar but slightly better than RF, explaining 77–79% of the variance, with RMSE ranging from 120 to 129 g/m2 for shrub and total biomass, respectively. We further explored the computational efficiency and relative accuracies of using point cloud and raster Lidar metrics at different resolutions (1 m to 1 ha). Metrics derived from the Lidar point cloud processing led to improved biomass estimates at nearly all resolutions in comparison to raster-derived Lidar metrics. Only at 1 m were the results from the point cloud and raster products nearly equivalent. The best Lidar prediction models of biomass at the plot-level (1 ha) were achieved when Lidar metrics were derived from an average of fine resolution (1 m) metrics to minimize boundary effects and to smooth variability. Overall, both RF and SMR methods explained more than 74% of the variance in biomass, with the most important Lidar variables being associated with vegetation structure
Evaluating uncertainty to strengthen epidemiologic data for use in human health risk assessments.
Burns, Carol J; Wright, J Michael; Pierson, Jennifer B; Bateson, Thomas F; Burstyn, Igor; Goldstein, Daniel A; Klaunig, James E; Luben, Thomas J; Mihlan, Gary; Ritter, Leonard; Schnatter, A Robert; Symons, J Morel; Yi, Kun Don
2014-11-01
There is a recognized need to improve the application of epidemiologic data in human health risk assessment especially for understanding and characterizing risks from environmental and occupational exposures. Although there is uncertainty associated with the results of most epidemiologic studies, techniques exist to characterize uncertainty that can be applied to improve weight-of-evidence evaluations and risk characterization efforts. This report derives from a Health and Environmental Sciences Institute (HESI) workshop held in Research Triangle Park, North Carolina, to discuss the utility of using epidemiologic data in risk assessments, including the use of advanced analytic methods to address sources of uncertainty. Epidemiologists, toxicologists, and risk assessors from academia, government, and industry convened to discuss uncertainty, exposure assessment, and application of analytic methods to address these challenges. Several recommendations emerged to help improve the utility of epidemiologic data in risk assessment. For example, improved characterization of uncertainty is needed to allow risk assessors to quantitatively assess potential sources of bias. Data are needed to facilitate this quantitative analysis, and interdisciplinary approaches will help ensure that sufficient information is collected for a thorough uncertainty evaluation. Advanced analytic methods and tools such as directed acyclic graphs (DAGs) and Bayesian statistical techniques can provide important insights and support interpretation of epidemiologic data. The discussions and recommendations from this workshop demonstrate that there are practical steps that the scientific community can adopt to strengthen epidemiologic data for decision making.
A new MC-based method to evaluate the fission fraction uncertainty at reactor neutrino experiment
Ma, X B; Chen, Y X
2016-01-01
Uncertainty in the fission fractions is an important uncertainty source for the antineutrino flux prediction in a reactor antineutrino experiment. A new MC-based method of evaluating the covariance coefficients between isotopes is proposed. It was found that the covariance coefficients vary with reactor burnup and may change from positive to negative because of the fissioning balance effect; for example, the covariance coefficient between $^{235}$U and $^{239}$Pu changes from 0.15 to -0.13. Using the relation between fission fraction and atomic density, consistent estimates of the fission fraction uncertainty and the covariance matrix were obtained. The resulting antineutrino flux uncertainty is 0.55%, which does not vary with reactor burnup, and the new value is about 8.3% smaller.
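The sign flip from the fissioning balance effect can be seen in a toy MC experiment: because the fission fractions must sum to one, independently perturbed fractions become anti-correlated after renormalisation. The mean fractions and perturbation size below are hypothetical, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mean fission fractions for 235U, 238U, 239Pu, 241Pu.
mean = np.array([0.56, 0.08, 0.30, 0.06])

# MC sampling: perturb each fraction independently, then renormalise so the
# fractions sum to 1 (the "fissioning balance" that couples the isotopes).
samples = mean * (1.0 + 0.05 * rng.normal(size=(20000, 4)))
samples /= samples.sum(axis=1, keepdims=True)

corr = np.corrcoef(samples.T)
print(corr[0, 2])   # 235U-239Pu correlation: negative due to the sum constraint
```

With correlated perturbations driven by burnup instead of independent noise, the same machinery yields the burnup-dependent covariance coefficients the method tracks.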
Energy Technology Data Exchange (ETDEWEB)
Pruet, J
2007-06-23
This report describes Kiwi, a program developed at Livermore to enable mature studies of the relation between imperfectly known nuclear physics and uncertainties in simulations of complicated systems. Kiwi includes a library of evaluated nuclear data uncertainties, tools for modifying data according to these uncertainties, and a simple interface for generating processed data used by transport codes. Kiwi also provides access to calculations of k eigenvalues for critical assemblies, allowing the user to check the implications of data modifications against integral experiments for multiplying systems. Kiwi is written in Python. The uncertainty library has the same format and directory structure as the native ENDL used at Livermore. Calculations for critical assemblies rely on deterministic and Monte Carlo codes developed by B Division.
National Aeronautics and Space Administration — This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman...
Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.
2016-12-01
In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is inherent uncertainty in the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties can have a large effect on the resulting estimates of unknown quantities of interest. One approach increasingly used to address this issue is emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often infeasible to carry out the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
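The idea behind emulation can be shown with a deliberately tiny example: run an "expensive" simulator a handful of times, fit a cheap surrogate, and query the surrogate instead. The one-parameter simulator and polynomial surrogate below are hypothetical stand-ins; NAME itself, and full statistical (e.g. Gaussian-process) emulators, are far richer.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in "simulator": an expensive model mapping a parameter to an output.
def simulator(theta):
    return np.sin(theta) + 0.1 * theta ** 2

# A small number of training runs...
theta_train = np.linspace(-2.0, 2.0, 9)
y_train = simulator(theta_train)

# ...fit a cheap polynomial emulator as a stand-in for a statistical emulator.
emulate = np.poly1d(np.polyfit(theta_train, y_train, 4))

# Query the emulator at many parameter values the simulator never saw.
theta_test = rng.uniform(-2.0, 2.0, 100)
err = np.max(np.abs(emulate(theta_test) - simulator(theta_test)))
print(err)   # worst-case emulator error over the tested parameter range
```

Once the surrogate is accurate enough, the thousands of evaluations needed by an MCMC inversion are made against `emulate` rather than the simulator, which is what makes hierarchical Bayesian inversion with an expensive transport model tractable.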
Cox, M.; Shirono, K.
2017-10-01
A criticism levelled at the Guide to the Expression of Uncertainty in Measurement (GUM) is that it is based on a mixture of frequentist and Bayesian thinking. In particular, the GUM’s Type A (statistical) uncertainty evaluations are frequentist, whereas the Type B evaluations, using state-of-knowledge distributions, are Bayesian. In contrast, making the GUM fully Bayesian implies, among other things, that a conventional objective Bayesian approach to Type A uncertainty evaluation for a number n of observations leads to the impractical consequence that n must be at least equal to 4, thus presenting a difficulty for many metrologists. This paper presents a Bayesian analysis of Type A uncertainty evaluation that applies for all n ≥ 2, as in the frequentist analysis in the current GUM. The analysis is based on assuming that the observations are drawn from a normal distribution (as in the conventional objective Bayesian analysis), but uses an informative prior based on lower and upper bounds for the standard deviation of the sampling distribution for the quantity under consideration. The main outcome of the analysis is a closed-form mathematical expression for the factor by which the standard deviation of the mean observation should be multiplied to calculate the required standard uncertainty. Metrological examples are used to illustrate the approach, which is straightforward to apply using a formula or look-up table.
Research on uncertainty evaluation measure and method of voltage sag severity
Liu, X. N.; Wei, J.; Ye, S. Y.; Chen, B.; Long, C.
2018-01-01
Voltage sag is an unavoidable and serious power quality problem in power systems. This paper presents a general summary and review of the concepts, indices, and evaluation methods for voltage sag severity. Considering the complexity and uncertainty of the influencing factors and damage degree, together with the characteristics and requirements of voltage sag severity on the source, network, and load sides, the measurement concepts and the conditions under which they hold, as well as the evaluation indices and methods of voltage sag severity, are analyzed. Current evaluation techniques, such as stochastic theory and fuzzy logic, as well as their fusion, are reviewed in detail. An index system for voltage sag severity is provided for comprehensive study. The main aim of this paper is to propose ideas and methods for severity research based on advanced uncertainty theory and uncertainty measures. This study may serve as a valuable guide for researchers interested in the domain of voltage sag severity.
Directory of Open Access Journals (Sweden)
Haileyesus B. Endeshaw
2017-11-01
Full Text Available Failure prediction of wind turbine gearboxes (WTGs) is especially important since maintenance of these components is not only costly but also causes the longest downtime. One of the most common causes of premature WTG failure is fatigue fracture of gear teeth due to the fluctuating and cyclic torque, resulting from stochastic wind loading, transmitted to the gearbox. Moreover, the fluctuation of the torque, as well as the inherent uncertainties of the material properties, results in uncertain life predictions for WTGs. It is therefore essential to quantify these uncertainties in the life estimation of gears. In this paper, a framework comprising a dynamic model of a one-stage gearbox, a finite element method, and a degradation model for estimating fatigue crack propagation in gears is presented. Torque time-history data from a wind turbine rotor were scaled and used to simulate the stochastic character of the loading, and uncertainties in the material constants of the degradation model were also quantified. It is demonstrated that uncertainty quantification of the load and material constants provides a reasonable estimate of the distribution of the crack length in the gear tooth at any time step.
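The combined effect of uncertain material constants and stochastic loading on crack length can be illustrated with a Paris-law degradation model driven by sampled inputs. All numbers, distributions, and the critical length below are hypothetical choices for illustration, not the paper's gearbox model:

```python
import numpy as np

rng = np.random.default_rng(3)

def grow_crack(a0, cycles, dS, C, m, Y=1.1, a_crit=0.02, step=500):
    """Integrate the Paris law da/dN = C*(dK)^m with dK = Y*dS*sqrt(pi*a);
    the crack is treated as failed once it reaches the critical length."""
    a = a0
    for _ in range(cycles // step):
        dK = Y * dS * np.sqrt(np.pi * a)
        a += step * C * dK ** m
        if a >= a_crit:
            return a_crit
    return a

# Uncertain material constants and stochastic load amplitude
# (hypothetical values; units: m, MPa, MPa*sqrt(m)).
n = 2000
C = np.exp(rng.normal(np.log(5e-12), 0.3, n))   # lognormal Paris constant
m = rng.normal(3.0, 0.1, n)                     # Paris exponent
dS = rng.normal(150.0, 15.0, n)                 # stress range per cycle

a_final = np.array([grow_crack(1e-3, 200_000, s, c, mm)
                    for s, c, mm in zip(dS, C, m)])
print(np.percentile(a_final, [5, 50, 95]))      # spread of final crack length
```

The percentiles of `a_final` give the distribution of crack length at a fixed cycle count, which is the kind of quantified output the framework produces at each time step.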
Blom, Mozes P K; Bragg, Jason G; Potter, Sally; Moritz, Craig
2017-05-01
Accurate gene tree inference is an important aspect of species tree estimation in a summary-coalescent framework. Yet, in empirical studies, inferred gene trees differ in accuracy due to stochastic variation in phylogenetic signal between targeted loci. Empiricists should, therefore, examine the consistency of species tree inference, while accounting for the observed heterogeneity in gene tree resolution of phylogenomic data sets. Here, we assess the impact of gene tree estimation error on summary-coalescent species tree inference by screening ~2000 exonic loci based on gene tree resolution prior to phylogenetic inference. We focus on a phylogenetically challenging radiation of Australian lizards (genus Cryptoblepharus, Scincidae) and explore effects on topology and support. We identify a well-supported topology based on all loci and find that a relatively small number of high-resolution gene trees can be sufficient to converge on the same topology. Adding gene trees with decreasing resolution produced a generally consistent topology, and increased confidence for specific bipartitions that were poorly supported when using a small number of informative loci. This corroborates coalescent-based simulation studies that have highlighted the need for a large number of loci to confidently resolve challenging relationships and refutes the notion that low-resolution gene trees introduce phylogenetic noise. Further, our study also highlights the value of quantifying changes in nodal support across locus subsets of increasing size (but decreasing gene tree resolution). Such detailed analyses can reveal anomalous fluctuations in support at some nodes, suggesting the possibility of model violation. By characterizing the heterogeneity in phylogenetic signal among loci, we can account for uncertainty in gene tree estimation and assess its effect on the consistency of the species tree estimate. We suggest that the evaluation of gene tree resolution should be incorporated
Evaluation of uncertainties in regional climate change simulations
DEFF Research Database (Denmark)
Pan, Z.; Christensen, J. H.; Arritt, R. W.
2001-01-01
We have run two regional climate models (RCMs) forced by three sets of initial and boundary conditions to form a 2x3 suite of 10-year climate simulations for the continental United States at approximately 50 km horizontal resolution. The three sets of driving boundary conditions are a reanalysis ... different geographic locations. However, both models missed heavy cool-season precipitation in the lower Mississippi River basin, a seemingly common model defect. Various simulation biases (differences) produced by the RCMs are evaluated based on the 2x3 experiment set in addition to comparisons ... correlation for climate change suggests that even though future precipitation is projected to increase, its overall continental-scale spatial pattern is expected to remain relatively constant. The low RCM performance correlation shows a modeling challenge to reproduce observed spatial precipitation patterns.
Directory of Open Access Journals (Sweden)
Jalid Abdelilah
2016-01-01
Full Text Available In the engineering industry, control of manufactured parts is usually done on a coordinate measuring machine (CMM): a sensor mounted at the end of the machine probes a set of points on the surface to be inspected. Data processing is performed subsequently using software, and the result of this measurement process either does or does not validate the conformity of the part. Measurement uncertainty is a crucial parameter for making the right decisions, and not taking this parameter into account can therefore sometimes lead to aberrant decisions. Determining the measurement uncertainty on a CMM is a complex task owing to the variety of influencing factors. Through this study, we check whether the uncertainty propagation model developed according to the Guide to the Expression of Uncertainty in Measurement (GUM) approach is valid, presenting a comparison of the GUM and Monte Carlo methods. The comparison estimates the flatness deviation of a surface belonging to an industrial part and the uncertainty associated with the measurement result.
Directory of Open Access Journals (Sweden)
Jae Phil Park
2016-12-01
Full Text Available It is extremely difficult to predict the initiation time of cracking due to a large time spread in most cracking experiments. Thus, probabilistic models, such as the Weibull distribution, are usually employed to model the initiation time of cracking. Therefore, the parameters of the Weibull distribution are estimated from data collected from a cracking test. However, although the development of a reliable cracking model under ideal experimental conditions (e.g., a large number of specimens and narrow censoring intervals) could be achieved in principle, it is not straightforward to quantitatively assess the effects of the ideal experimental conditions on model estimation uncertainty. The present study investigated the effects of key experimental conditions, including the time-dependent effect of the censoring interval length, on the estimation uncertainties of the Weibull parameters through Monte Carlo simulations. The simulation results provided quantified estimation uncertainties of Weibull parameters in various cracking test conditions. Hence, it is expected that the results of this study can offer some insight for experimenters developing a probabilistic crack initiation model by performing experiments.
Park, Jae Phil; Park, Chanseok; Cho, Jongweon; Bahn, Chi Bum
2016-12-23
It is extremely difficult to predict the initiation time of cracking due to a large time spread in most cracking experiments. Thus, probabilistic models, such as the Weibull distribution, are usually employed to model the initiation time of cracking. Therefore, the parameters of the Weibull distribution are estimated from data collected from a cracking test. However, although the development of a reliable cracking model under ideal experimental conditions (e.g., a large number of specimens and narrow censoring intervals) could be achieved in principle, it is not straightforward to quantitatively assess the effects of the ideal experimental conditions on model estimation uncertainty. The present study investigated the effects of key experimental conditions, including the time-dependent effect of the censoring interval length, on the estimation uncertainties of the Weibull parameters through Monte Carlo simulations. The simulation results provided quantified estimation uncertainties of Weibull parameters in various cracking test conditions. Hence, it is expected that the results of this study can offer some insight for experimenters developing a probabilistic crack initiation model by performing experiments.
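The Monte Carlo study described above can be illustrated in miniature. The sketch below (Python; the method-of-moments fit, parameter values, and sample sizes are assumptions for the sketch, not taken from the paper) repeatedly simulates a cracking test of a given size and refits the Weibull parameters, so the spread of the refitted shape parameter quantifies the estimation uncertainty:

```python
import random
import math

def fit_weibull_mom(samples):
    """Crude Weibull fit via method of moments (adequate for a sketch).
    Uses the approximation shape k ~ CV**-1.086. Returns (k, scale lam)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    cv = math.sqrt(var) / mean          # coefficient of variation
    k = cv ** -1.086                    # approximate inversion of CV(k)
    lam = mean / math.gamma(1.0 + 1.0 / k)
    return k, lam

def mc_estimation_uncertainty(true_k, true_lam, n_specimens,
                              n_trials=2000, seed=1):
    """Repeatedly simulate a cracking test and refit, to quantify the spread
    of the estimated Weibull shape parameter (the estimation uncertainty)."""
    rng = random.Random(seed)
    k_hats = []
    for _ in range(n_trials):
        # Inverse-CDF sampling of Weibull crack-initiation times
        times = [true_lam * (-math.log(1.0 - rng.random())) ** (1.0 / true_k)
                 for _ in range(n_specimens)]
        k_hat, _ = fit_weibull_mom(times)
        k_hats.append(k_hat)
    mean_k = sum(k_hats) / len(k_hats)
    sd_k = math.sqrt(sum((k - mean_k) ** 2 for k in k_hats)
                     / (len(k_hats) - 1))
    return mean_k, sd_k

# Fewer specimens -> larger spread in the shape-parameter estimate
_, sd_small = mc_estimation_uncertainty(2.0, 100.0, n_specimens=10)
_, sd_large = mc_estimation_uncertainty(2.0, 100.0, n_specimens=100)
```

The same loop, run over a grid of specimen counts and censoring schemes, yields the kind of quantified uncertainty tables the study reports.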
Werner, Micha; Westerhoff, Rogier; Moore, Catherine
2017-04-01
constructed using the same base data and forced with the VCSN precipitation dataset. Results of the comparison of the rainfall products show that there are significant differences in precipitation volume between the forcing products, on the order of 20% at most points. Even more significant differences can be seen, however, in the distribution of precipitation. For the VCSN data, wet days (defined as >0.1 mm precipitation) occur on some 20-30% of days (depending on location). This is reasonably reflected in the TRMM and CHIRPS data, while for the re-analysis based products some 60% to 80% of days are wet, albeit at lower intensities. These differences are amplified in the recharge estimates. At most points, volumetric differences are on the order of 40-60%, though differences may range into several orders of magnitude. The frequency distributions of recharge also differ significantly, with recharge over 0.1 mm occurring on 4-6% of days for the VCSN, CHIRPS, and TRMM datasets, but up to the order of 12% of days for the re-analysis data. Comparison against the lysimeter data shows estimates to be reasonable, in particular for the reference datasets. Surprisingly, some estimates of the lower resolution re-analysis datasets are reasonable, though this does seem to be due to lower recharge being compensated by recharge occurring more frequently. These results underline the importance of correctly representing rainfall volumes, as well as their distribution, particularly when evaluating possible changes in, for example, precipitation intensity and volume. This holds for precipitation data derived from satellite-based and re-analysis products, but also for interpolated data from gauges, where the distribution of intensities is strongly influenced by the interpolation process.
Brown, Steven G; Eberly, Shelly; Paatero, Pentti; Norris, Gary A
2015-06-15
The new version of EPA's positive matrix factorization (EPA PMF) software, 5.0, includes three error estimation (EE) methods for analyzing factor analytic solutions: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement (BS-DISP). These methods capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. To demonstrate the utility of the EE methods, results are presented for three data sets: (1) speciated PM2.5 data from a chemical speciation network (CSN) site in Sacramento, California (2003-2009); (2) trace metal, ammonia, and other species in water quality samples taken at an inline storage system (ISS) in Milwaukee, Wisconsin (2006); and (3) an organic aerosol data set from high-resolution aerosol mass spectrometer (HR-AMS) measurements in Las Vegas, Nevada (January 2008). We present an interpretation of EE diagnostics for these data sets, results from sensitivity tests of EE diagnostics using additional and fewer factors, and recommendations for reporting PMF results. BS-DISP and BS are found useful in understanding the uncertainty of factor profiles; they also suggest whether the data are over-fitted by specifying too many factors. DISP diagnostics were consistently robust, supporting its use for understanding rotational uncertainty and as a first step in assessing a solution's viability. The uncertainty of each factor's identifying species is shown to be a useful gauge for evaluating multiple solutions, e.g., with a different number of factors. Published by Elsevier B.V.
Rauniyar, S. P.; Protat, A.; Kanamori, H.
2017-05-01
This study investigates the regional and seasonal rainfall rate retrieval uncertainties within nine state-of-the-art satellite-based rainfall products over the Maritime Continent (MC) region. The results show consistently larger differences in mean daily rainfall among products over land, especially over mountains and along coasts, compared to over ocean, by about 20% for low to medium rain rates and 5% for heavy rain rates. However, rainfall differences among the products do not exhibit any seasonal dependency over either surface type (land or ocean) of the MC region. The differences between products largely depend on the rain rate itself, with a factor-of-2 difference for light rain and 30% for intermediate and high rain rates over ocean. The rain-rate products dominated by microwave measurements showed less spread among themselves over ocean compared to the products dominated by infrared measurements. Conversely, over land, the rain gauge-adjusted post-real-time products dominated by microwave measurements produced the largest spreads, due to the usage of different gauge analyses for the bias corrections. Intercomparisons of rainfall characteristics of these products revealed large discrepancies in detecting the frequency and intensity of rainfall. These satellite products are finally evaluated at subdaily, daily, monthly, intraseasonal, and seasonal temporal scales against high-quality gridded rainfall observations in the Sarawak (Malaysia) region for the 4-year period 2000-2003. No single satellite-based rainfall product clearly outperforms the other products at all temporal scales. General guidelines are provided for selecting a product that could be best suited for a particular application and/or temporal resolution.
Psychometric Evaluation of a New Instrument to Measure Uncertainty in Children with Cancer
Stewart, Janet L.; Lynn, Mary R.; Mishel, Merle H.
2010-01-01
Background Although uncertainty has been characterized as a major stressor for children with cancer, it has not been studied systematically. Objectives To describe the development and initial psychometric evaluation of a measure of uncertainty in school-aged children and adolescents with cancer. Methods Interview data from the first author’s qualitative study of uncertainty in children undergoing cancer treatment were used to generate 22 items for the Uncertainty Scale for Kids (USK), which were evaluated for content validity by expert panels of children with cancer and experienced clinicians (Stewart, Lynn, & Mishel, 2005). Reliability and validity were evaluated in a sample of 72 children aged 8 to 17 years undergoing cancer treatment. Results The USK items underwent minor revision following input from content validity experts and all 22 were retained for testing. The USK demonstrated strong reliability (Cronbach’s alpha = .94, test-retest r = .64, p = .005) and preliminary evidence for validity was supported by significant associations between USK scores and cancer knowledge, complexity of treatment, and anxiety and depression. Exploratory factor analysis yielded 2 factors, not knowing how serious the illness is and not knowing what will happen when, which explained 50.4% of the variance. Discussion The USK, developed from the perspective of children, performed well in the initial application, demonstrating strong reliability and preliminary evidence for construct and discriminant validity. It holds considerable promise for moving the research forward on uncertainty in childhood cancer. PMID:20216014
Evaluating Expert Estimators Based on Elicited Competences
Directory of Open Access Journals (Sweden)
Hrvoje Karna
2015-07-01
Full Text Available Utilization of the expert effort estimation approach shows promising results when applied to the software development process. It is based on judgment and decision making and, owing to its comparative advantages, is used extensively, especially in situations where classic models cannot be applied. This becomes even more accentuated in today's highly dynamic project environment. Confronted with these facts, companies are placing ever greater focus on their employees, specifically on their competences. Competences are defined as the knowledge, skills, and abilities required to perform job assignments. During the effort estimation process, different underlying expert competences influence the outcome, i.e., the judgments experts express. A special problem here is the elicitation, from an input collection, of those competences that are responsible for accurate estimates. Based on these findings, different measures can be taken to enhance the estimation process. The approach used in the study presented in this paper was targeted at eliciting the expert estimator competences responsible for producing accurate estimates. Based on individual competence scores resulting from the modeling, experts were ranked using a weighted scoring method and their performance was evaluated. Results confirm that experts with higher scores in the competences identified by the applied models in general exhibit higher accuracy during the estimation process. For the purpose of modeling, data mining methods were used, specifically the multilayer perceptron neural network and the classification and regression decision tree algorithms. Among others, the applied methods are suitable for the purpose of elicitation, as in a sense they mimic the way human brains operate. Data used in the study were collected from real projects in a company specialized in the development of IT solutions in the telecom domain. The proposed model, the applied methodology for elicitation of expert competences, and the obtained results give evidence that in
Briggs, Andrew H; Weinstein, Milton C; Fenwick, Elisabeth A L; Karnon, Jonathan; Sculpher, Mark J; Paltiel, A David
2012-01-01
A model's purpose is to inform medical decisions and health care resource allocation. Modelers employ quantitative methods to structure the clinical, epidemiological, and economic evidence base and gain qualitative insight to assist decision makers in making better decisions. From a policy perspective, the value of a model-based analysis lies not simply in its ability to generate a precise point estimate for a specific outcome but also in the systematic examination and responsible reporting of uncertainty surrounding this outcome and the ultimate decision being addressed. Different concepts relating to uncertainty in decision modeling are explored. Stochastic (first-order) uncertainty is distinguished from both parameter (second-order) uncertainty and from heterogeneity, with structural uncertainty relating to the model itself forming another level of uncertainty to consider. The article argues that the estimation of point estimates and uncertainty in parameters is part of a single process and explores the link between parameter uncertainty through to decision uncertainty and the relationship to value-of-information analysis. The article also makes extensive recommendations around the reporting of uncertainty, both in terms of deterministic sensitivity analysis techniques and probabilistic methods. Expected value of perfect information is argued to be the most appropriate presentational technique, alongside cost-effectiveness acceptability curves, for representing decision uncertainty from probabilistic analysis.
Characterization of uncertainty in Bayesian estimation using sequential Monte Carlo methods
Aoki, E.H.
2013-01-01
In estimation problems, accuracy of the estimates of the quantities of interest cannot be taken for granted. This means that estimation errors are expected, and a good estimation algorithm should be able not only to compute estimates that are optimal in some sense, but also to provide meaningful
Directory of Open Access Journals (Sweden)
Stephen M Petrie
Full Text Available For in vivo studies of influenza dynamics where within-host measurements are fit with a mathematical model, infectivity assays (e.g., 50% tissue culture infectious dose, TCID50) are often used to estimate the infectious virion concentration over time. Less frequently, measurements of the total (infectious and non-infectious) viral particle concentration (obtained using real-time reverse transcription-polymerase chain reaction, rRT-PCR) have been used as an alternative to infectivity assays. We investigated the degree to which measuring both infectious (via TCID50) and total (via rRT-PCR) viral load allows within-host model parameters to be estimated with greater consistency and reduced uncertainty, compared with fitting to TCID50 data alone. We applied our models to viral load data from an experimental ferret infection study. Best-fit parameter estimates for the "dual-measurement" model are similar to those from the TCID50-only model, with greater consistency in best-fit estimates across different experiments, as well as reduced uncertainty in some parameter estimates. Our results also highlight how variation in TCID50 assay sensitivity and calibration may hinder model interpretation, as some parameter estimates systematically vary with known uncontrolled variations in the assay. Our techniques may aid in drawing stronger quantitative inferences from in vivo studies of influenza virus dynamics.
Tang, Guoping; Mayes, Melanie A.; Parker, Jack C.; Jardine, Philip M.
2010-09-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
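The weighted nonlinear least squares machinery that CXTFIT/Excel wraps can be illustrated on a toy one-parameter model. The sketch below (Python rather than VBA; the decay model, data, and weights are invented for illustration, not taken from the study) fits a decay rate by Gauss-Newton iteration and derives a parameter standard error from the weighted Jacobian, mirroring the uncertainty quantification the abstract describes:

```python
import math

def fit_decay_wls(times, conc, c0, weights):
    """Weighted least squares fit of the rate k in C(t) = c0 * exp(-k*t)
    using Gauss-Newton iteration (a sketch, not the CXTFIT solver)."""
    k = 0.1  # initial guess
    for _ in range(50):
        resid = [c - c0 * math.exp(-k * t) for t, c in zip(times, conc)]
        jac = [-t * c0 * math.exp(-k * t) for t in times]   # d(model)/dk
        jtwj = sum(w * j * j for w, j in zip(weights, jac))
        jtwr = sum(w * j * r for w, j, r in zip(weights, jac, resid))
        k += jtwr / jtwj                                     # Gauss-Newton step
    # Standard error of k: s^2 * (J^T W J)^-1, s^2 from weighted residuals
    resid = [c - c0 * math.exp(-k * t) for t, c in zip(times, conc)]
    jac = [-t * c0 * math.exp(-k * t) for t in times]
    jtwj = sum(w * j * j for w, j in zip(weights, jac))
    s2 = sum(w * r * r for w, r in zip(weights, resid)) / (len(times) - 1)
    return k, math.sqrt(s2 / jtwj)

# Synthetic observations generated from k = 0.5 (rounded to 2 decimals)
times = [0.5, 1.0, 2.0, 3.0, 4.0]
conc = [7.79, 6.07, 3.68, 2.23, 1.35]
k_hat, se = fit_decay_wls(times, conc, c0=10.0, weights=[1.0] * 5)
```

The same pattern generalizes to multiple parameters (the scalar `jtwj` becomes a matrix to invert) and to prior information added as penalty terms, as the abstract notes.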
Obaton, A.-F.; Lebenberg, J.; Fischer, N.; Guimier, S.; Dubard, J.
2007-04-01
The measurement uncertainty of the spectral irradiance of a UV lamp is computed by using the law of propagation of uncertainty (LPU) as described in the 'Guide to the Expression of Uncertainty in Measurement' (GUM), considering only a first-order Taylor series approximation. Since the spectral irradiance model is non-linear and since an asymmetric probability density function (PDF) is assigned to some input quantities, another process was required to validate the LPU method. The propagation of distributions using Monte Carlo (MC) simulations, as depicted in the supplement of the GUM (GUM-S1), was found to be a relevant alternative solution. The validation of the LPU method by the MC method is discussed with regard to PDF choices, and the benefit of the MC method over the LPU method is illustrated.
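The contrast between first-order LPU propagation and GUM-S1 propagation of distributions is easy to reproduce on a deliberately nonlinear toy model, y = x**2 with x normally distributed (all values are illustrative, not from the lamp measurement):

```python
import random
import math

def lpu_uncertainty(x0, u_x):
    """First-order GUM/LPU propagation for y = x**2: u(y) = |dy/dx| * u(x)."""
    return abs(2.0 * x0) * u_x

def mc_uncertainty(x0, u_x, n=200_000, seed=42):
    """GUM-S1 propagation of distributions: sample the input PDF, evaluate
    the model, and take the mean and standard deviation of the output."""
    rng = random.Random(seed)
    ys = [rng.gauss(x0, u_x) ** 2 for _ in range(n)]
    mean = sum(ys) / n
    sd = math.sqrt(sum((y - mean) ** 2 for y in ys) / (n - 1))
    return mean, sd

u_lpu = lpu_uncertainty(1.0, 0.5)
mean_mc, u_mc = mc_uncertainty(1.0, 0.5)
```

For this model the analytic output mean is 1.25 (not 1.0) and the output standard deviation exceeds the LPU value, showing exactly the nonlinearity effect the abstract says motivated the MC validation.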
Radespiel, Rolf; Hemsch, Michael J.
2007-01-01
The complexity of modern military systems, as well as the cost and difficulty associated with experimentally verifying system and subsystem design, makes the use of high-fidelity, physics-based simulation a future alternative for design and development. The predictive ability of simulations such as computational fluid dynamics (CFD) and computational structural mechanics (CSM) has matured significantly. However, for numerical simulations to be used with confidence in design and development, quantitative measures of uncertainty must be available. The AVT 147 Symposium has been established to compile state-of-the-art methods of assessing computational uncertainty, to identify future research and development needs associated with these methods, and to present examples of how these needs are being addressed and how the methods are being applied. Papers were solicited that address uncertainty estimation associated with high-fidelity, physics-based simulations. The solicitation included papers that identify sources of error and uncertainty in numerical simulation from either the industry perspective or from the disciplinary or cross-disciplinary research perspective. Examples of the industry perspective were to include how computational uncertainty methods are used to reduce system risk in various stages of design or development.
Tauxe, J.; Black, P.; Carilli, J.; Catlett, K.; Crowe, B.; Hooten, M.; Rawlinson, S.; Schuh, A.; Stockton, T.; Yucel, V.
2002-12-01
The disposal of low-level radioactive waste (LLW) in the United States (U.S.) is a highly regulated undertaking. The U.S. Department of Energy (DOE), itself a large generator of such wastes, requires a substantial amount of analysis and assessment before permitting disposal of LLW at its facilities. One of the requirements that must be met in assessing the performance of a disposal site and technology is that a Performance Assessment (PA) demonstrate "reasonable expectation" that certain performance objectives, such as dose to a hypothetical future receptor, not be exceeded. The phrase "reasonable expectation" implies recognition of uncertainty in the assessment process. In order for this uncertainty to be quantified and communicated to decision makers, the PA computer model must accept probabilistic (uncertain) input (parameter values) and produce results which reflect that uncertainty as it is propagated through the model calculations. The GoldSim modeling software was selected for the task due to its unique facility with both probabilistic analysis and radioactive contaminant transport. Probabilistic model parameters range from water content and other physical properties of alluvium, to the activity of radionuclides disposed of, to the amount of time a future resident might be expected to spend tending a garden. Although these parameters govern processes which are defined in isolation as rather simple differential equations, the complex interaction of coupled processes makes for a highly nonlinear system with often unanticipated results. The decision maker has the difficult job of evaluating the uncertainty of modeling results in the context of granting permission for LLW disposal. This job also involves the evaluation of alternatives, such as the selection of disposal technologies. Various scenarios can be evaluated in the model, so that the effects of, for example, using a thicker soil cap over the waste cell can be assessed. This ability to evaluate mitigation
Directory of Open Access Journals (Sweden)
Jaewook Lee
2015-06-01
Full Text Available This paper presents an efficient method for estimating capacity-fade uncertainty in lithium-ion batteries (LIBs) in order to integrate them into the battery-management system (BMS) of electric vehicles, which requires simple and inexpensive computation for successful application. The study uses the pseudo-two-dimensional (P2D) electrochemical model, which simulates the battery state by solving a system of coupled nonlinear partial differential equations (PDEs). The model parameters that are responsible for electrode degradation are identified and estimated, based on battery data obtained from the charge cycles. The Bayesian approach, with parameters estimated by probability distributions, is employed to account for uncertainties arising in the model and battery data. The Markov Chain Monte Carlo (MCMC) technique is used to draw samples from the distributions. The complex computations that solve a PDE system for each sample are avoided by employing a polynomial-based metamodel. As a result, the computational cost is reduced from 5.5 h to a few seconds, enabling the integration of the method into the vehicle BMS. Using this approach, the conservative bound of capacity fade can be determined for the vehicle in service, which represents the safety margin reflecting the uncertainty.
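A hypothetical miniature of that approach: replace the expensive P2D solve with a cheap surrogate and sample the posterior of a single degradation parameter with random-walk Metropolis (Python; the surrogate model, data, and noise level are all invented for the sketch, not the paper's):

```python
import random
import math

def surrogate(theta, t):
    """Cheap stand-in for the expensive P2D solve: capacity fades linearly
    with cycle count at rate theta (purely illustrative)."""
    return 1.0 - theta * t

def log_posterior(theta, data, sigma=0.01):
    """Gaussian log-likelihood under the surrogate, flat prior on (0, 1)."""
    if not 0.0 < theta < 1.0:
        return -math.inf
    return -sum((y - surrogate(theta, t)) ** 2 for t, y in data) / (2 * sigma**2)

def metropolis(data, n_samples=5000, step=0.02, seed=3):
    """Random-walk Metropolis sampling of the degradation-rate posterior."""
    rng = random.Random(seed)
    theta = 0.5
    lp = log_posterior(theta, data)
    samples = []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

data = [(t, 1.0 - 0.1 * t) for t in (0.5, 1.0, 1.5, 2.0)]  # true rate 0.1
samples = metropolis(data)
post_mean = sum(samples[1000:]) / len(samples[1000:])       # discard burn-in
```

Because the surrogate is a one-line function, each posterior sample costs microseconds; this is the computational trade the abstract describes when it swaps the PDE system for a polynomial metamodel.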
Directory of Open Access Journals (Sweden)
Harry Budiman
2010-06-01
Full Text Available The evaluation of measurement uncertainty in the determination of Fe content in a powdered tonic food drink using graphite furnace atomic absorption spectrometry was carried out. The specification of the measurand, sources of uncertainty, standard uncertainties, combined uncertainty, and expanded uncertainty of this measurement were evaluated and accounted for. The measurement result showed that the Fe content in the powdered tonic food drink sample was 569.32 µg/5g, with an expanded measurement uncertainty of ± 178.20 µg/5g (coverage factor k = 2, at a 95% confidence level). The calibration curve gave the major contribution to the uncertainty of the final result. Keywords: uncertainty, powdered tonic food drink, iron (Fe), graphite furnace AAS
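The combination and expansion steps mentioned above follow the standard GUM pattern: combine independent standard uncertainties in quadrature, then multiply by the coverage factor. A minimal sketch (Python; the component values in the budget are illustrative, not the paper's):

```python
import math

def combined_uncertainty(components):
    """Root-sum-of-squares combination of independent standard uncertainties
    (law of propagation for uncorrelated inputs with unit sensitivity)."""
    return math.sqrt(sum(u ** 2 for u in components))

def expanded_uncertainty(u_c, k=2.0):
    """Expanded uncertainty U = k * u_c; k = 2 gives roughly 95% coverage
    for an approximately normal output quantity."""
    return k * u_c

# Hypothetical uncertainty budget (illustrative values only):
# calibration curve, repeatability, sample mass, standard purity
u_c = combined_uncertainty([80.0, 30.0, 10.0, 5.0])
U = expanded_uncertainty(u_c, k=2.0)
```

Note how the largest component (here the stand-in for the calibration curve) dominates the quadrature sum, which is why the abstract singles it out as the major contributor.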
Directory of Open Access Journals (Sweden)
Roger Hillson
Full Text Available This study demonstrates the use of bootstrap methods to estimate the total population of urban and periurban areas using satellite imagery and limited survey data. We conducted complete household surveys in 20 neighborhoods in the city of Bo, Sierra Leone, which collectively were home to 25,954 persons living in 1,979 residential structures. For five of those twenty sections, we quantified the rooftop areas of structures extracted from satellite images. We used bootstrap statistical methods to estimate the total population of the pooled sections, including the associated uncertainty intervals, as a function of sample size. Estimates based either on rooftop area per person or on the mean number of occupants per residence both converged on the true population size. We demonstrate with this simulation that demographic surveys of a relatively small proportion of residences can provide a foundation for accurately estimating the total population in conjunction with aerial photographs.
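The bootstrap estimator can be sketched directly: resample the surveyed per-structure densities with replacement, scale each resampled mean by the total rooftop area, and read a percentile interval off the resulting distribution (Python; the densities and total area below are hypothetical stand-ins, not the Bo survey data):

```python
import random

def bootstrap_population_ci(persons_per_roof_m2, total_roof_area,
                            n_boot=5000, alpha=0.05, seed=7):
    """Bootstrap the mean persons-per-m2-of-rooftop from surveyed structures,
    then scale by total rooftop area to get a population estimate with a
    percentile confidence interval."""
    rng = random.Random(seed)
    n = len(persons_per_roof_m2)
    estimates = []
    for _ in range(n_boot):
        # Resample the surveyed structures with replacement
        resample = [persons_per_roof_m2[rng.randrange(n)] for _ in range(n)]
        density = sum(resample) / n
        estimates.append(density * total_roof_area)
    estimates.sort()
    lo = estimates[int(alpha / 2 * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    point = sum(estimates) / n_boot
    return point, (lo, hi)

# Hypothetical survey: occupants per m2 of rooftop for 12 structures
densities = [0.10, 0.12, 0.08, 0.15, 0.11, 0.09,
             0.13, 0.10, 0.12, 0.14, 0.09, 0.11]
pop, (lo, hi) = bootstrap_population_ci(densities, total_roof_area=250_000)
```

Running the same routine with progressively larger survey subsets shows the interval narrowing with sample size, which is the convergence behavior the study reports.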
Energy Technology Data Exchange (ETDEWEB)
Appling, J.W.; Pye, L.H. [Woodward-Clyde Consultants, Denver, CO (United States)
1994-12-31
Screening level risk assessments for both human health and ecological receptors conducted using chemical concentrations found at hot spots often overestimate exposure and risk when the exposure area is larger than the hot spot. Making reasonable remedial decisions is difficult when risk is overestimated. Alternate methods of estimating exposure concentrations, such as averaging over the area of the site, may over- or underestimate exposure. The authors have developed a Monte Carlo application to simulate probable large herbivore exposures to stack emissions deposited on forage under different time scenarios. The method generates a distribution of probable exposure concentrations assuming the herbivore may wander off the unfenced site and combines this with literature-based distributions of forage intake and exposure area estimates. Application of the model to realistic data sets shows that under some circumstances, when hot spot concentrations exceed trigger levels, it can be shown that actual exposures are not likely to exceed trigger levels, and that if exceedances occur, they are unlikely to have significant impacts on the exposed population. If risk is excessive, remedial alternatives can be evaluated to see if they achieve acceptable risk levels. The method potentially has wide application in human and ecological risk assessments when hot spots are smaller than exposure areas for either individuals or populations.
DEFF Research Database (Denmark)
Hiller, Jochen; Genta, Gianfranco; Barbato, Giulio
2014-01-01
measurement processes, e.g., with tactile systems, also due to factors related to systematic errors, mainly caused by specific CT image characteristics. In this paper we propose a simulation-based framework for measurement uncertainty evaluation in dimensional CT using the bootstrap method. In a case study...
WORKSHOP TO EVALUATE UNCERTAINTIES IN THE ASSESSMENT OF THE IMPACTS OF GLOBAL CHANGE ON AIR QUALITY
A workshop will be conducted to evaluate approaches for characterizing, quantifying and communicating uncertainty when assessing global change effects on US air quality. The discussion will focus on the US EPA ORD Global Research Program Air Quality assessment -- a complex, model-b...