WorldWideScience

Sample records for ii uncertainty estimation

  1. Uncertainty estimation with a small number of measurements, part II: a redefinition of uncertainty and an estimator method

    Science.gov (United States)

    Huang, Hening

    2018-01-01

This paper is the second (Part II) in a series of two papers (Part I and Part II). Part I quantitatively discussed the fundamental limitations of the t-interval method for uncertainty estimation with a small number of measurements. This paper (Part II) reveals that the t-interval is an ‘exact’ answer to the wrong question; it is actually misused in uncertainty estimation. This paper proposes a redefinition of uncertainty, based on the classical theory of errors and the theory of point estimation, and a modification of the conventional approach to estimating measurement uncertainty. It also presents an asymptotic procedure for estimating the z-interval. The proposed modification is to replace the t-based uncertainty with an uncertainty estimator (mean- or median-unbiased). The uncertainty estimator method is an approximate answer to the right question in uncertainty estimation. The modified approach provides realistic estimates of uncertainty whether the population standard deviation is known or unknown, and whether the sample size is small or large. As an application example of the modified approach, this paper presents a resolution to the Du-Yang paradox (i.e. Paradox 2), one of the three paradoxes caused by the misuse of the t-interval in uncertainty estimation.
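As a minimal sketch of the estimator idea, the snippet below contrasts the sample standard deviation with a mean-unbiased estimator of the population standard deviation for normal samples (the classical c4 bias correction). The data values are hypothetical, and this illustrates the general notion of an unbiased uncertainty estimator rather than Huang's specific method.

```python
import math
import statistics

def c4(n):
    # Bias-correction factor for normal samples: E[s] = c4(n) * sigma
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

def mean_unbiased_sigma(sample):
    # Mean-unbiased estimator of the population standard deviation
    return statistics.stdev(sample) / c4(len(sample))

sample = [9.8, 10.1, 10.3, 9.9]   # hypothetical repeated measurements
s = statistics.stdev(sample)       # conventional sample standard deviation
sigma_hat = mean_unbiased_sigma(sample)
# For n = 4, c4 is about 0.9213, so the corrected estimate exceeds s by ~8.5%
```

For small n the correction is substantial (c4 is about 0.798 at n = 2), which is one reason small-sample uncertainty estimates based directly on s are biased low on average.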

  2. The uncertainties in estimating measurement uncertainties

    International Nuclear Information System (INIS)

    Clark, J.P.; Shull, A.H.

    1994-01-01

All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper discusses the concepts of measurement, measurement errors (accuracy or bias, and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, i.e. estimates of some true value plus an uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample whose matrix differs significantly from that of the calibration standards). Consequently, there are many sources of error in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement itself. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements are reviewed in this report and recommendations are made for improving measurement uncertainties.

  3. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    Science.gov (United States)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
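The abstract does not give NCEI's formulas, but a toy version of a cell-level uncertainty surface can combine a source-measurement error with an interpolation term that grows with distance to the nearest sounding. The function name, coefficients and values below are illustrative assumptions, not the NOAA methodology.

```python
import numpy as np

def uncertainty_surface(cell_xy, source_xy, source_err, slope_term=0.1):
    # Illustrative per-cell vertical uncertainty (m): the source measurement
    # error combined in quadrature with a term proportional to the distance
    # from the cell to the nearest source sounding.
    d = np.min(np.linalg.norm(cell_xy[:, None, :] - source_xy[None, :, :], axis=2),
               axis=1)
    return np.sqrt(source_err**2 + (slope_term * d)**2)

cells = np.array([[0.0, 0.0], [5.0, 0.0]])        # DEM cell centres
soundings = np.array([[0.0, 0.0], [10.0, 0.0]])   # source measurement locations
u = uncertainty_surface(cells, soundings, source_err=0.3)
# A cell coincident with a sounding keeps only the measurement error;
# the cell midway between soundings carries extra interpolation uncertainty
```

A surface like this can then be propagated into inundation modeling, e.g. by sampling elevations within each cell's uncertainty band.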

  4. Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis

    Science.gov (United States)

    Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles

    2018-03-01

We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained i) from the Gaussian density stemming from the asymptotic normality theorem of the maximum likelihood and ii) with a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We compare these two frameworks on the same database covering a large region of 100,000 km² in southern France with contrasting rainfall regimes, in order to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, accumulated over durations of 3 h to 120 h. We show that i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, ii) the posterior and the bootstrap densities are able to adjust uncertainty estimation to the data better than the Gaussian density, and iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. We therefore recommend the Bayesian framework for computing uncertainty.
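As a hedged illustration of the bootstrap route, the sketch below fits a Gumbel distribution by the method of moments (a simplification of the paper's simple-scaling GEV models) and derives a bootstrap interval for a 100-year return level from synthetic annual maxima; none of the numbers relate to the French database.

```python
import numpy as np

EULER = 0.5772156649  # Euler-Mascheroni constant

def gumbel_fit(x):
    # Method-of-moments Gumbel fit (simple stand-in for the paper's models)
    beta = np.std(x, ddof=1) * np.sqrt(6) / np.pi
    mu = np.mean(x) - EULER * beta
    return mu, beta

def return_level(x, T):
    # T-year return level: quantile of non-exceedance probability 1 - 1/T
    mu, beta = gumbel_fit(x)
    return mu - beta * np.log(-np.log(1 - 1 / T))

rng = np.random.default_rng(0)
annual_maxima = rng.gumbel(loc=40.0, scale=10.0, size=35)  # synthetic maxima
boot = [return_level(rng.choice(annual_maxima, size=annual_maxima.size), 100)
        for _ in range(2000)]                               # resample with replacement
lo, hi = np.percentile(boot, [2.5, 97.5])                   # percentile 95% interval
```

The paper's warning about bootstrap intervals for large return periods applies exactly to quantities like `hi` here, far in the tail of the fitted distribution.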

  5. Sensitivity and uncertainty analysis for the annual phosphorus loss estimator model.

    Science.gov (United States)

    Bolster, Carl H; Vadas, Peter A

    2013-07-01

Models are often used to predict phosphorus (P) loss from agricultural fields. Although it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study we assessed the effect of model input error on predictions of annual P loss by the Annual P Loss Estimator (APLE) model. Our objectives were (i) to conduct a sensitivity analysis for all APLE input variables to determine which variables the model is most sensitive to, (ii) to determine whether the relatively easy-to-implement first-order approximation (FOA) method provides accurate estimates of model prediction uncertainties by comparing results with the more accurate Monte Carlo simulation (MCS) method, and (iii) to evaluate the performance of the APLE model against measured P loss data when uncertainties in model predictions and measured data are included. Our results showed that for low to moderate uncertainties in APLE input variables, the FOA method yields reasonable estimates of model prediction uncertainties, although for cases where manure solid content is between 14 and 17%, the FOA method may not be as accurate as the MCS method due to a discontinuity in the manure P loss component of APLE at a manure solid content of 15%. The estimated uncertainties in APLE predictions based on assumed errors in the input variables ranged from ±2 to 64% of the predicted value. Results from this study highlight the importance of including reasonable estimates of model uncertainty when using models to predict P loss. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
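The FOA-versus-MCS comparison can be sketched on a toy nonlinear model (the function and values below are illustrative, not APLE's P-loss equations): the first-order approximation propagates input variance through the local derivative, and it degrades as the input uncertainty grows.

```python
import numpy as np

def f(x):
    return x**2  # toy nonlinear model standing in for a P-loss component

def foa_sd(x0, sd_x, h=1e-6):
    # First-order approximation: sd_y is approximately |df/dx| * sd_x
    dfdx = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central-difference derivative
    return abs(dfdx) * sd_x

def mc_sd(x0, sd_x, n=200_000, seed=1):
    # Monte Carlo: sample the input, push samples through the model
    rng = np.random.default_rng(seed)
    return np.std(f(rng.normal(x0, sd_x, n)))

small = foa_sd(10.0, 0.1), mc_sd(10.0, 0.1)  # small input error: FOA tracks MC
large = foa_sd(10.0, 5.0), mc_sd(10.0, 5.0)  # large input error: FOA underestimates
```

For `f(x) = x**2` the exact output standard deviation has a second-order term the FOA ignores, which is why MC exceeds FOA in the large-uncertainty case; a discontinuity like APLE's at 15% solids breaks the FOA derivative entirely.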

  6. The Uncertainty estimation of Alanine/ESR dosimetry

    International Nuclear Information System (INIS)

    Kim, Bo Rum; An, Jin Hee; Choi, Hoon; Kim, Young Ki

    2008-01-01

Nuclear power plants contain machinery, tools and cables that operate in a very severe environment. Measuring the dose they actually receive is needed to extend the service life of this equipment. We therefore estimated the gamma-ray dose at Wolsong Nuclear Power Division 1 over three years using an ESR (Electron Spin Resonance) dose estimation technique based on regression analysis. To secure the reliability of the results, we estimated their uncertainty; the uncertainty estimate makes it possible to judge the reliability of the measurement results. The uncertainty estimation followed the internationally unified guide GUM (Guide to the Expression of Uncertainty in Measurement), published by the International Organization for Standardization (ISO) in 1993. In this study the uncertainties of two ESR instruments, e-scan and EMX, were evaluated and compared. These results will improve the reliability of the measurements

  7. Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens

    more reliable predictions with a new and improved set of model parameters for GC (group contribution) based and CI (atom connectivity index) based models and to quantify the uncertainties in the estimated property values from a process design point-of-view. This includes: (i) parameter estimation using....... The comparison of model prediction uncertainties with reported range of measurement uncertainties is presented for the properties with related available data. The application of the developed methodology to quantify the effect of these uncertainties on the design of different unit operations (distillation column......, the developed methodology can be used to quantify the sensitivity of process design to uncertainties in property estimates; obtain rationally the risk/safety factors in process design; and identify additional experimentation needs in order to reduce most critical uncertainties....

  8. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    Science.gov (United States)

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
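Approach (ii) can be sketched numerically: the laboratory's standard uncertainty is derived from its differences against the participants' mean quantitations over several proficiency rounds. The values, and the simple bias-plus-spread combination used here, are illustrative assumptions rather than a prescribed forensic procedure.

```python
import math

# Hypothetical blood-alcohol proficiency results:
# (this lab's result, participants' mean), in g/100 mL
rounds = [(0.081, 0.080), (0.152, 0.150), (0.119, 0.121),
          (0.098, 0.100), (0.201, 0.198)]

diffs = [lab - consensus for lab, consensus in rounds]
n = len(diffs)
bias = sum(diffs) / n                                  # systematic offset
sd = math.sqrt(sum((d - bias)**2 for d in diffs) / (n - 1))  # spread of offsets
# One simple combination: RMS of bias and spread as the standard uncertainty
u = math.sqrt(bias**2 + sd**2)
U95 = 2 * u  # expanded uncertainty with coverage factor k = 2
```

With more rounds, `sd` stabilizes and the estimate becomes the empirical, concentration-wide uncertainty the abstract describes.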

  9. Development of electrical efficiency measurement techniques for 10 kW-class SOFC system: Part II. Uncertainty estimation

    International Nuclear Information System (INIS)

    Tanaka, Yohei; Momma, Akihiko; Kato, Ken; Negishi, Akira; Takano, Kiyonami; Nozaki, Ken; Kato, Tohru

    2009-01-01

Uncertainty of electrical efficiency measurement was investigated for a 10 kW-class SOFC system using town gas. Uncertainty of the heating value measured by the gas chromatography method on a mole basis was estimated as ±0.12% at the 95% level of confidence. Micro-gas chromatography with/without CH4 quantification may be able to reduce the uncertainty of measurement. Calibration and uncertainty estimation methods are proposed for flow-rate measurement of town gas with thermal mass-flow meters or controllers. With adequate calibration of the flowmeters, the flow rate of town gas or natural gas at 35 standard litres per minute can be measured within a relative uncertainty of ±1.0% at the 95% level of confidence. Uncertainty of power measurement can be as low as ±0.14% when a precise wattmeter is used and calibrated properly. It is clarified that the electrical efficiency of non-pressurized 10 kW-class SOFC systems can be measured within ±1.0% relative uncertainty at the 95% level of confidence with the developed techniques when the SOFC systems are operated relatively stably.
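Since electrical efficiency is computed from power, fuel flow rate and heating value, the component relative uncertainties quoted in the abstract combine roughly in quadrature. The sketch below shows that arithmetic only; it is not the paper's full uncertainty budget.

```python
import math

# Relative uncertainties (%) at the 95% level of confidence, from the abstract,
# combined in simple quadrature as if the components were independent
u_heating_value = 0.12
u_flow_rate = 1.0
u_power = 0.14

u_eff = math.sqrt(u_heating_value**2 + u_flow_rate**2 + u_power**2)
# The flow-rate term dominates, so the combined value stays close to 1%
```

This makes visible why the abstract's overall ±1.0% figure is driven almost entirely by the flow-rate measurement.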

  10. Risk, unexpected uncertainty, and estimation uncertainty: Bayesian learning in unstable settings.

    Directory of Open Access Journals (Sweden)

    Elise Payzan-LeNestour

Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian learner perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.
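A minimal Beta-Bernoulli updater illustrates how estimation uncertainty (ambiguity) shrinks with observations for a single arm; handling jumps (unexpected uncertainty) would additionally require a change-detection or forgetting mechanism not shown here. The reward sequence is made up.

```python
def update(alpha, beta, reward):
    # Conjugate Beta-Bernoulli update for one arm's payoff probability
    return alpha + reward, beta + (1 - reward)

def posterior_mean_var(alpha, beta):
    # Mean and variance of a Beta(alpha, beta) posterior
    n = alpha + beta
    return alpha / n, alpha * beta / (n**2 * (n + 1))

a, b = 1.0, 1.0                       # uniform Beta(1, 1) prior
for r in [1, 0, 1, 1, 0, 1, 1, 1]:    # made-up 0/1 reward sequence
    a, b = update(a, b, r)

mean, var = posterior_mean_var(a, b)
_, prior_var = posterior_mean_var(1.0, 1.0)
# Estimation uncertainty (posterior variance) has shrunk below the prior's,
# while risk (outcome variance mean*(1-mean)) remains irreducible
```

The distinction the paper draws falls out directly: `var` goes to zero with more data, but the Bernoulli outcome variance does not.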

  11. Uncertainty estimation of ultrasonic thickness measurement

    International Nuclear Information System (INIS)

    Yassir Yassen, Abdul Razak Daud; Mohammad Pauzi Ismail; Abdul Aziz Jemain

    2009-01-01

The most important factor to consider when selecting an ultrasonic thickness measurement technique is its reliability. Only when the uncertainty of a measurement result is known can it be judged whether the result is adequate for its intended purpose. The objective of this study is to model the ultrasonic thickness measurement function, to identify the most contributing input uncertainty components, and to estimate the uncertainty of the ultrasonic thickness measurement results. We assumed that five error sources contribute significantly to the final error: calibration velocity, transit time, zero offset, measurement repeatability and resolution. By applying the law of propagation of uncertainty to the model function, a combined uncertainty of the ultrasonic thickness measurement was obtained. In this study the model function of ultrasonic thickness measurement was derived. Using this model, the estimate of the uncertainty of the final output result was found to be reliable. It was also found that the most contributing input uncertainty components are calibration velocity, transit time linearity and zero offset. (author)
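Under stated assumptions (a thickness model d = v(t - t0)/2, which is a common form for pulse-echo gauging, and hypothetical component values), the law of propagation of uncertainty named in the abstract can be applied as follows:

```python
import math

# Assumed model: d = v * (t - t0) / 2, with the abstract's main components
v, u_v = 5920.0, 15.0      # m/s: calibration velocity and its uncertainty
t, u_t = 3.4e-6, 5e-9      # s: measured transit time and its uncertainty
t0, u_t0 = 1.0e-8, 4e-9    # s: zero offset and its uncertainty

d = v * (t - t0) / 2       # thickness, ~10 mm here

# Law of propagation of uncertainty: combine (sensitivity * u)^2 in quadrature
u_d = math.sqrt(((t - t0) / 2 * u_v)**2 +   # velocity term
                (v / 2 * u_t)**2 +          # transit-time term
                (v / 2 * u_t0)**2)          # zero-offset term
```

With these made-up inputs the velocity term dominates, in line with the abstract's finding that calibration velocity is among the largest contributors.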

  12. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    Science.gov (United States)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable-resolution approaches are RANS models with two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Then, using benchmark flows along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).

  13. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    Science.gov (United States)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with

  14. Estimates of Uncertainty around the RBA's Forecasts

    OpenAIRE

    Peter Tulip; Stephanie Wallace

    2012-01-01

    We use past forecast errors to construct confidence intervals and other estimates of uncertainty around the Reserve Bank of Australia's forecasts of key macroeconomic variables. Our estimates suggest that uncertainty about forecasts is high. We find that the RBA's forecasts have substantial explanatory power for the inflation rate but not for GDP growth.
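The construction of intervals from past forecast errors can be sketched as follows, with made-up error values; the RBA's actual error statistics are in the paper.

```python
import math

# Hypothetical past one-year-ahead inflation forecast errors (percentage points)
errors = [0.4, -0.6, 0.9, -0.3, 1.2, -0.8, 0.2, -1.1, 0.5, -0.4]

rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

forecast = 2.5  # illustrative point forecast, per cent
# Approximate 70% interval, assuming roughly normal, unbiased errors
# (1.04 is the standard normal quantile for 85%)
lo, hi = forecast - 1.04 * rmse, forecast + 1.04 * rmse
```

The width of the interval depends only on the historical error record, not on the model that produced the forecast, which is what makes the approach simple and transparent.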

  15. Uncertainty estimation of uranium determination in urine by fluorometry

    International Nuclear Information System (INIS)

    Shakhashiro, A.; Al-Khateeb, S.

    2003-11-01

In this study an applicable mathematical model is proposed for estimating the uncertainty of uranium determination by fluorometry in urine samples. The study is based on the EURACHEM guide for uncertainty estimation. The model was tested on a sample containing 0.02 μg/ml uranium, for which the calculated uncertainty was 0.007 μg/ml. The sources of uncertainty were presented in a fishbone (cause-and-effect) diagram, and the weight of each uncertainty parameter was shown in a histogram. Finally, it was found that the uncertainty estimated by the proposed model was 3 to 4 times larger than the usually reported standard deviation. (author)

  16. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, a machine function. Validation is defined as determining the degree to which a model and its code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  17. Addressing Uncertainties in Cost Estimates for Decommissioning Nuclear Facilities

    International Nuclear Information System (INIS)

Benjamin, Serge; Descures, Sylvain; Du Pasquier, Louis; Francois, Patrice; Buonarotti, Stefano; Mariotti, Giovanni; Tarakonov, Jurij; Daniska, Vladimir; Bergh, Niklas; Carroll, Simon; Åström, Annika; Cato, Anna; De La Gardie, Fredrik; Haenggi, Hannes; Rodriguez, Jose; Laird, Alastair; Ridpath, Andy; La Guardia, Thomas; O'Sullivan, Patrick; Weber, Inge

    2017-01-01

    The cost estimation process of decommissioning nuclear facilities has continued to evolve in recent years, with a general trend towards demonstrating greater levels of detail in the estimate and more explicit consideration of uncertainties, the latter of which may have an impact on decommissioning project costs. The 2012 report on the International Structure for Decommissioning Costing (ISDC) of Nuclear Installations, a joint recommendation by the Nuclear Energy Agency (NEA), the International Atomic Energy Agency (IAEA) and the European Commission, proposes a standardised structure of cost items for decommissioning projects that can be used either directly for the production of cost estimates or for mapping of cost items for benchmarking purposes. The ISDC, however, provides only limited guidance on the treatment of uncertainty when preparing cost estimates. Addressing Uncertainties in Cost Estimates for Decommissioning Nuclear Facilities, prepared jointly by the NEA and IAEA, is intended to complement the ISDC, assisting cost estimators and reviewers in systematically addressing uncertainties in decommissioning cost estimates. Based on experiences gained in participating countries and projects, the report describes how uncertainty and risks can be analysed and incorporated in decommissioning cost estimates, while presenting the outcomes in a transparent manner

  18. Uncertainties in Organ Burdens Estimated from PAS

    International Nuclear Information System (INIS)

    La Bone, T.R.

    2004-01-01

To calculate committed effective dose equivalent, one needs to know the quantity of the radionuclide in all significantly irradiated organs (the organ burden) as a function of time following the intake. There are two major sources of uncertainty in an organ burden estimated from personal air sampling (PAS) data: (1) the uncertainty in going from the exposure measured with the PAS to the quantity of aerosol inhaled by the individual, and (2) the uncertainty in going from the intake to the organ burdens at any given time, taking into consideration the biological variability of the biokinetic models from person to person (inter-person variability) and in one person over time (intra-person variability). We have been using biokinetic modeling methods developed by researchers at the University of Florida to explore the impact of inter-person variability on the uncertainty of organ burdens estimated from PAS data. These initial studies suggest that the uncertainties are so large that PAS might be considered a qualitative (rather than quantitative) technique. These results indicate that more studies should be performed to properly characterize the reliability and usefulness of using PAS monitoring data to estimate organ burdens, organ dose, and ultimately CEDE

  19. Uncertainty estimation and risk prediction in air quality

    International Nuclear Information System (INIS)

    Garaud, Damien

    2011-01-01

This work concerns uncertainty estimation and risk prediction in air quality. Firstly, we build a multi-model ensemble of air quality simulations that can take into account all uncertainty sources related to air quality modeling. Ensembles of photochemical simulations at continental and regional scales are automatically generated. Then, these ensembles are calibrated with a combinatorial optimization method that selects a sub-ensemble which is representative of uncertainty or shows good resolution and reliability for probabilistic forecasting. This work shows that it is possible to estimate and forecast uncertainty fields related to ozone and nitrogen dioxide concentrations or to improve the reliability of threshold exceedance predictions. The approach is compared with Monte Carlo simulations, calibrated or not. The Monte Carlo approach appears to be less representative of the uncertainties than the multi-model approach. Finally, we quantify the observational error, the representativeness error and the modeling errors. The work is applied to the impact of thermal power plants, in order to quantify the uncertainty on the impact estimates. (author)
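One common way to score the reliability of ensemble threshold-exceedance predictions is the Brier score; the sketch below uses synthetic ozone values and is not the thesis's calibration method.

```python
# Threshold-exceedance probabilities from a multi-model ensemble, scored
# with the Brier score (synthetic numbers; each row is one station/time)
ensemble = [
    [165, 172, 181, 178, 169],   # member ozone forecasts, ug/m3
    [120, 131, 126, 140, 118],
    [190, 184, 201, 176, 188],
]
observed_exceed = [1, 0, 1]      # did observed ozone exceed the threshold?
threshold = 180                  # ug/m3, e.g. an information threshold

# Forecast probability = fraction of members exceeding the threshold
probs = [sum(m > threshold for m in row) / len(row) for row in ensemble]
# Brier score: mean squared difference between probability and outcome
brier = sum((p - o)**2 for p, o in zip(probs, observed_exceed)) / len(probs)
# 0 is perfect; 0.25 matches an uninformative constant 0.5 forecast
```

Calibrating a sub-ensemble, as in the thesis, amounts to selecting members so that scores like this (and reliability diagrams) improve on held-out observations.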

  20. Estimation of the uncertainties considered in NPP PSA level 2

    International Nuclear Information System (INIS)

    Kalchev, B.; Hristova, R.

    2005-01-01

The main approaches to uncertainty analysis are presented. The sources of uncertainty that should be considered in a level 2 PSA for a WWER reactor are defined, namely: uncertainties propagated from the level 1 PSA; uncertainties in input parameters; uncertainties related to the modelling of physical phenomena during the accident progression; and uncertainties related to the estimation of source terms. The methods for estimating these uncertainties are also discussed in this paper

  1. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    International Nuclear Information System (INIS)

    Kirchner, G.; Peterson, R.

    1996-11-01

    Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test user's influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the

  2. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Kirchner, G. [Univ. of Bremen (Germany)]; Peterson, R. [AECL, Chalk River, ON (Canada)] [and others]

    1996-11-01

    Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test the user's influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people, all using the same model and the same scenario description, with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the

  3. Uncertainty relations for approximation and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)

    2016-05-27

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.

  4. Uncertainty relations for approximation and estimation

    International Nuclear Information System (INIS)

    Lee, Jaeha; Tsutsui, Izumi

    2016-01-01

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.

  5. Assessing concentration uncertainty estimates from passive microwave sea ice products

    Science.gov (United States)

    Meier, W.; Brucker, L.; Miller, J. A.

    2017-12-01

    Sea ice concentration is an essential climate variable and passive microwave derived estimates of concentration are one of the longest satellite-derived climate records. However, until recently uncertainty estimates were not provided. Numerous validation studies provided insight into general error characteristics, but the studies have found that concentration error varied greatly depending on sea ice conditions. Thus, an uncertainty estimate from each observation is desired, particularly for initialization, assimilation, and validation of models. Here we investigate three sea ice products that include an uncertainty for each concentration estimate: the NASA Team 2 algorithm product, the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI-SAF) product, and the NOAA/NSIDC Climate Data Record (CDR) product. Each product estimates uncertainty with a completely different approach. The NASA Team 2 product derives uncertainty internally from the algorithm method itself. The OSI-SAF uses atmospheric reanalysis fields and a radiative transfer model. The CDR uses spatial variability from two algorithms. Each approach has merits and limitations. Here we evaluate the uncertainty estimates by comparing the passive microwave concentration products with fields derived from the NOAA VIIRS sensor. The results show that the relationship between the product uncertainty estimates and the concentration error (relative to VIIRS) is complex. This may be due to the sea ice conditions, the uncertainty methods, as well as the spatial and temporal variability of the passive microwave and VIIRS products.

  6. Adult head CT scans: the uncertainties of effective dose estimates

    International Nuclear Information System (INIS)

    Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E.

    2008-01-01

    Full Text: CT scanning is a high dose imaging modality. Effective dose estimates from CT scans can provide important information to patients and medical professionals. For example, medical practitioners can use the dose to estimate the risk to the patient, and judge whether this risk is outweighed by the benefits of the CT examination, while radiographers can gauge the effect of different scanning protocols on the patient effective dose, and take this into consideration when establishing routine scan settings. Dose estimates also form an important part of epidemiological studies examining the health effects of medical radiation exposures on the wider population. Medical physicists have been devoting significant effort towards estimating patient radiation doses from diagnostic CT scans for some years. The question arises: How accurate are these effective dose estimates? The need for a greater understanding and improvement of the uncertainties in CT dose estimates is now gaining recognition as an important issue (BEIR VII 2006). This study is an attempt to analyse and quantify the uncertainty components relating to effective dose estimates from adult head CT examinations that are calculated with four commonly used methods. The dose estimation methods analysed are the Nagel method, the ImpaCT method, the Wellhoefer method and the Dose-Length Product (DLP) method. The analysis of the uncertainties was performed in accordance with the International Standards Organisation's Guide to the Expression of Uncertainty in Measurement as discussed in Gregory et al (Australas. Phys. Eng. Sci. Med., 28: 131-139, 2005). The uncertainty components vary, depending on the method used to derive the effective dose estimate. Uncertainty components in this study include the statistical and other errors from Monte Carlo simulations, uncertainties in the CT settings and positions of patients in the CT gantry, calibration errors from pencil ionization chambers, the variations in the organ

  7. Estimates of bias and uncertainty in recorded external dose

    International Nuclear Information System (INIS)

    Fix, J.J.; Gilbert, E.S.; Baumgartner, W.V.

    1994-10-01

    A study is underway to develop an approach to quantify bias and uncertainty in recorded dose estimates for workers at the Hanford Site based on personnel dosimeter results. This paper focuses on selected experimental studies conducted to better define response characteristics of Hanford dosimeters. The study is more extensive than the experimental studies presented in this paper and includes detailed consideration and evaluation of other sources of bias and uncertainty. Hanford worker dose estimates are used in epidemiologic studies of nuclear workers. A major objective of these studies is to provide a direct assessment of the carcinogenic risk of exposure to ionizing radiation at low doses and dose rates. Considerations of bias and uncertainty in the recorded dose estimates are important in the conduct of this work. The method developed for use with Hanford workers can be considered an elaboration of the approach used to quantify bias and uncertainty in estimated doses for personnel exposed to radiation as a result of atmospheric testing of nuclear weapons between 1945 and 1962. This approach was first developed by a National Research Council (NRC) committee examining uncertainty in recorded film badge doses during atmospheric tests (NRC 1989). It involved quantifying both bias and uncertainty from three sources (i.e., laboratory, radiological, and environmental) and then combining them to obtain an overall assessment. Sources of uncertainty have been evaluated for each of three specific Hanford dosimetry systems (i.e., the Hanford two-element film dosimeter, 1944-1956; the Hanford multi-element film dosimeter, 1957-1971; and the Hanford multi-element TLD, 1972-1993) used to estimate personnel dose throughout the history of Hanford operations. Laboratory, radiological, and environmental sources of bias and uncertainty have been estimated based on historical documentation and, for angular response, on selected laboratory measurements

  8. Neglect Of Parameter Estimation Uncertainty Can Significantly Overestimate Structural Reliability

    Directory of Open Access Journals (Sweden)

    Rózsás Árpád

    2015-12-01

    Full Text Available Parameter estimation uncertainty is often neglected in reliability studies, i.e. point estimates of distribution parameters are used for representative fractiles, and in probabilistic models. A numerical example examines the effect of this uncertainty on structural reliability using Bayesian statistics. The study reveals that the neglect of parameter estimation uncertainty might lead to an order of magnitude underestimation of failure probability.
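
    The effect described in this record can be illustrated with a minimal, self-contained sketch. All numbers and the Gaussian load model below are illustrative assumptions, not taken from the study: plugging a point estimate of the mean into a failure-probability calculation ignores the sampling uncertainty of that estimate, while the predictive distribution inflates the variance by 1/n and yields a larger (less optimistic) failure probability.

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical setting: load effect X ~ N(mu, 1), failure when X > c,
# with mu estimated from n observations.
n, xbar, c = 5, 10.0, 13.0  # sample size, sample mean, capacity

# Point estimate: plug the sample mean in as if it were the true mean.
p_plugin = normal_tail(c - xbar)

# Account for estimation uncertainty in the mean (known sigma = 1):
# the predictive distribution of a new observation is N(xbar, 1 + 1/n).
p_pred = normal_tail((c - xbar) / math.sqrt(1.0 + 1.0 / n))

print(p_plugin, p_pred)  # the plug-in probability is the smaller one
```

    In this toy case the plug-in calculation understates the failure probability by roughly a factor of two, in line with the abstract's warning that the neglect of parameter estimation uncertainty underestimates failure probability.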

  9. Estimating predictive hydrological uncertainty by dressing deterministic and ensemble forecasts; a comparison, with application to Meuse and Rhine

    Science.gov (United States)

    Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.

    2017-12-01

    Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. 
Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability
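
    The "source-specific" dressing step described in this record can be sketched as follows. The ensemble values and the additive Gaussian hydrological-error model are hypothetical stand-ins for the paper's post-processed distributions.

```python
import random
import statistics

random.seed(42)

# Hypothetical streamflow ensemble (m3/s): weather ensemble members
# already routed through a hydrological model.
ensemble = [120.0, 135.0, 150.0, 128.0, 141.0]

def dress(members, sigma=10.0, draws=200):
    """Add draws from an assumed Gaussian hydrological-error model to each member."""
    return [m + random.gauss(0.0, sigma) for m in members for _ in range(draws)]

total = dress(ensemble)

# The dressed sample reflects meteorological spread (between members) plus
# hydrological uncertainty (within-member noise); summarize it by quantiles.
qs = statistics.quantiles(total, n=20)
q05, q95 = qs[0], qs[-1]
print(round(q05, 1), round(q95, 1))
```

    The dressed sample is necessarily more dispersed than the raw ensemble, which is why dressed ensembles trade some sharpness for reliability.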

  10. Impact of dose-distribution uncertainties on rectal ntcp modeling I: Uncertainty estimates

    International Nuclear Information System (INIS)

    Fenwick, John D.; Nahum, Alan E.

    2001-01-01

    A trial of nonescalated conformal versus conventional radiotherapy treatment of prostate cancer has been carried out at the Royal Marsden NHS Trust (RMH) and Institute of Cancer Research (ICR), demonstrating a significant reduction in the rate of rectal bleeding reported for patients treated using the conformal technique. The relationship between planned rectal dose-distributions and incidences of bleeding has been analyzed, showing that the rate of bleeding falls significantly as the extent of the rectal wall receiving a planned dose-level of more than 57 Gy is reduced. Dose-distributions delivered to the rectal wall over the course of radiotherapy treatment inevitably differ from planned distributions, due to sources of uncertainty such as patient setup error, rectal wall movement and variation in the absolute rectal wall surface area. In this paper estimates of the differences between planned and treated rectal dose-distribution parameters are obtained for the RMH/ICR nonescalated conformal technique, working from a distribution of setup errors observed during the RMH/ICR trial, movement data supplied by Lebesque and colleagues derived from repeat CT scans, and estimates of rectal circumference variations extracted from the literature. Setup errors and wall movement are found to cause only limited systematic differences between mean treated and planned rectal dose-distribution parameter values, but introduce considerable uncertainties into the treated values of some dose-distribution parameters: setup errors lead to 22% and 9% relative uncertainties in the highly dosed fraction of the rectal wall and the wall average dose, respectively, with wall movement leading to 21% and 9% relative uncertainties. Estimates obtained from the literature of the uncertainty in the absolute surface area of the distensible rectal wall are of the order of 13%-18%. 
In a subsequent paper the impact of these uncertainties on analyses of the relationship between incidences of bleeding

  11. Uncertainty Evaluation of Best Estimate Calculation Results

    International Nuclear Information System (INIS)

    Glaeser, H.

    2006-01-01

    Efforts are underway in Germany to perform analysis using best estimate computer codes and to include uncertainty evaluation in licensing. The German Reactor Safety Commission (RSK) issued a recommendation to perform uncertainty analysis in loss of coolant accident safety analyses (LOCA), recently. A more general requirement is included in a draft revision of the German Nuclear Regulation which is an activity of the German Ministry of Environment and Reactor Safety (BMU). According to the recommendation of the German RSK to perform safety analyses for LOCA in licensing the following deterministic requirements have still to be applied: Most unfavourable single failure, Unavailability due to preventive maintenance, Break location, Break size and break type, Double ended break, 100 percent through 200 percent, Large, medium and small break, Loss of off-site power, Core power (at accident initiation the most unfavourable conditions and values have to be assumed which may occur under normal operation taking into account the set-points of integral power and power density control. Measurement and calibration errors can be considered statistically), Time of fuel cycle. Analysis using best estimate codes with evaluation of uncertainties is the only way to quantify conservatisms with regard to code models and uncertainties of plant, fuel parameters and decay heat. This is especially the case for approaching licensing limits, e.g. due to power up-rates, higher burn-up and higher enrichment. Broader use of best estimate analysis is therefore envisaged in the future. Since some deterministic unfavourable assumptions regarding availability of NPP systems are still used, some conservatism in best-estimate analyses remains. Methods of uncertainty analyses have been developed and applied by the vendor Framatome ANP as well as by GRS in Germany. The GRS development was sponsored by the German Ministry of Economy and Labour (BMWA). (author)

  12. Estimating uncertainty of data limited stock assessments

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros; Eikeset, Anne Maria; Thygesen, Uffe Høgsbro

    2017-01-01

    -limited. Particular emphasis is put on providing uncertainty estimates of the data-limited assessment. We assess four cod stocks in the North-East Atlantic and compare our estimates of stock status (F/Fmsy) with the official assessments. The estimated stock status of all four cod stocks followed the established stock...

  13. Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods

    Directory of Open Access Journals (Sweden)

    aboalhasan fathabadi

    2017-02-01

    Full Text Available Introduction: In order to implement watershed practices to decrease the effects of soil erosion, the sediment output of the watershed needs to be estimated. The sediment rating curve is the most common tool for estimating sediment. Owing to sampling errors and short data records, there is some uncertainty in sediment estimates based on rating curves. In this research, bootstrap and Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads from sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km2. In this study, uncertainty in suspended sediment rating curves was estimated at four stations, including Motorkhane, Miyane Tonel Shomare 7, Stor and Glinak, constructed on the Ayghdamosh, Ghrangho, GhezelOzan and Shahrod rivers, respectively. Data were randomly divided into a training data set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of determination (R2). In the GLUE methodology, different parameter sets were sampled randomly from the prior probability distribution. For each station, suspended sediment concentration values were estimated many times (100000 to 400000 times) using the sampled parameter sets and the selected suspended sediment rating curve equation. With respect to the likelihood function and a subjective threshold, parameter sets were divided into behavioral and non-behavioral sets. Finally, using the behavioral parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. 
    In the bootstrap methodology, observed suspended sediment and discharge vectors were resampled with replacement B (set to
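
    The GLUE procedure summarized in this record can be sketched on synthetic data. The rating-curve form C = a·Q^b, the uniform priors, the inverse-sum-of-squares likelihood, and the "best 5%" behavioral threshold below are all assumptions made for illustration, not values from the paper.

```python
import math
import random

random.seed(0)

# Synthetic "observed" data from a known rating curve C = a * Q^b plus noise.
a_true, b_true = 0.5, 1.5
Q = [2.0, 5.0, 10.0, 20.0, 40.0, 80.0]
C_obs = [a_true * q ** b_true * math.exp(random.gauss(0.0, 0.1)) for q in Q]

def likelihood(a, b):
    """Informal GLUE likelihood: inverse sum of squared log-residuals."""
    sse = sum((math.log(c) - math.log(a * q ** b)) ** 2 for q, c in zip(Q, C_obs))
    return 1.0 / sse

# 1) Sample parameter sets from uniform priors.
samples = [(random.uniform(0.1, 1.0), random.uniform(1.0, 2.0)) for _ in range(5000)]
scored = [(likelihood(a, b), a, b) for a, b in samples]

# 2) Keep the behavioral sets: here, the best 5% by likelihood (a subjective choice).
scored.sort(reverse=True)
behavioral = scored[: len(scored) // 20]

# 3) Likelihood-weighted 95% bounds on predicted concentration at Q = 50.
preds = sorted((a * 50.0 ** b, w) for w, a, b in behavioral)
wsum = sum(w for _, w in preds)

def weighted_quantile(p):
    acc = 0.0
    for value, w in preds:
        acc += w / wsum
        if acc >= p:
            return value
    return preds[-1][0]

lo, hi = weighted_quantile(0.025), weighted_quantile(0.975)
print(round(lo, 1), round(hi, 1))
```

    The interval [lo, hi] expresses parameter uncertainty only, exactly as in the record: it says nothing about structural error in the chosen rating-curve form.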

  14. Uncertainty Estimation using Bootstrapped Kriging Predictions for Precipitation Isoscapes

    Science.gov (United States)

    Ma, C.; Bowen, G. J.; Vander Zanden, H.; Wunder, M.

    2017-12-01

    Isoscapes are spatial models representing the distribution of stable isotope values across landscapes. Isoscapes of hydrogen and oxygen in precipitation are now widely used in a diversity of fields, including geology, biology, hydrology, and atmospheric science. To generate isoscapes, geostatistical methods are typically applied to extend predictions from limited data measurements. Kriging is a popular method in isoscape modeling, but quantifying the uncertainty associated with the resulting isoscapes is challenging. Applications that use precipitation isoscapes to determine sample origin require estimation of uncertainty. Here we present a simple bootstrap method (SBM) to estimate the mean and uncertainty of the krigged isoscape and compare these results with a generalized bootstrap method (GBM) applied in previous studies. We used hydrogen isotopic data from IsoMAP to explore these two approaches for estimating uncertainty. We conducted 10 simulations for each bootstrap method and found that SBM results in more kriging predictions (9/10) compared to GBM (4/10). Prediction from SBM was closer to the original prediction generated without bootstrapping and had less variance than GBM. SBM was tested on different datasets from IsoMAP with different numbers of observation sites. We determined that predictions from the datasets with fewer than 40 observation sites using SBM were more variable than the original prediction. The approaches we used for estimating uncertainty will be compiled in an R package that is under development. We expect that these robust estimates of precipitation isoscape uncertainty can be applied in diagnosing the origin of samples ranging from various type of waters to migratory animals, food products, and humans.
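
    A simple bootstrap of spatial predictions along the lines of this record can be sketched as follows. Inverse-distance weighting is used here as a lightweight stand-in for kriging so the example stays self-contained, and the site coordinates and isotope values are invented for illustration.

```python
import random
import statistics

random.seed(7)

# Hypothetical observation sites: (x, y, delta-2H) precipitation isotope values.
sites = [(0.0, 0.0, -60.0), (1.0, 0.5, -75.0), (2.0, 1.0, -90.0),
         (0.5, 2.0, -70.0), (1.5, 1.8, -85.0), (2.5, 0.2, -95.0)]

def idw(data, x, y, power=2.0):
    """Inverse-distance weighting: a simple stand-in for a kriging predictor."""
    num = den = 0.0
    for sx, sy, v in data:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

# Simple bootstrap: resample sites with replacement, re-predict at a target
# location, and summarize the spread of predictions as the uncertainty.
target = (1.2, 1.2)
boot = [idw(random.choices(sites, k=len(sites)), *target) for _ in range(1000)]
mean, sd = statistics.mean(boot), statistics.stdev(boot)
print(round(mean, 1), round(sd, 1))
```

    With a real kriging model the resampling loop is unchanged; only the predictor refitted inside it differs.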

  15. Uncertainty Measures of Regional Flood Frequency Estimators

    DEFF Research Database (Denmark)

    Rosbjerg, Dan; Madsen, Henrik

    1995-01-01

    Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...

  16. Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment

    Science.gov (United States)

    Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.

    2017-01-01

    "Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly." (source uncertain) Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This paper will explain common measures of centrality and dispersion and, with examples, will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.
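
    The common measures of centrality and dispersion mentioned here can be computed directly with Python's statistics module; the sample values below are invented for illustration.

```python
import statistics

# Illustrative sample of a computed risk metric from repeated analyses.
x = [4.1, 4.4, 3.9, 5.2, 4.0, 4.6, 4.3, 7.8]  # note one outlier

# Measures of centrality
mean = statistics.mean(x)      # sensitive to the outlier
median = statistics.median(x)  # robust to the outlier

# Measures of dispersion
sd = statistics.stdev(x)                  # sample standard deviation
q1, _, q3 = statistics.quantiles(x, n=4)  # quartiles (exclusive method)
iqr = q3 - q1                             # interquartile range, robust

print(round(mean, 2), round(median, 2), round(sd, 2), round(iqr, 2))
```

    The single outlier pulls the mean well above the median and inflates the standard deviation relative to the IQR, which is why robust measures are often reported alongside the classical ones.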

  17. Uncertainty Estimate in Resources Assessment: A Geostatistical Contribution

    International Nuclear Information System (INIS)

    Souza, Luis Eduardo de; Costa, Joao Felipe C. L.; Koppe, Jair C.

    2004-01-01

    For many decades the mining industry regarded resources/reserves estimation and classification as a mere calculation requiring basic mathematical and geological knowledge. Most methods were based on geometrical procedures and spatial data distribution. Therefore, uncertainty associated with tonnages and grades was either ignored or mishandled, although various mining codes require a measure of confidence in the values reported. Traditional methods fail in reporting the level of confidence in the quantities and grades. Conversely, kriging is known to provide the best estimate and its associated variance. Among kriging methods, Ordinary Kriging (OK) probably is the most widely used one for mineral resource/reserve estimation, mainly because of its robustness and its facility in uncertainty assessment by using the kriging variance. It also is known that OK variance is unable to recognize local data variability, an important issue when heterogeneous mineral deposits with higher and poorer grade zones are being evaluated. Alternatively, stochastic simulations are used to build local or global uncertainty about a geological attribute, respecting its statistical moments. This study investigates methods capable of incorporating uncertainty into the estimates of resources and reserves via OK and sequential Gaussian and sequential indicator simulation. The results showed that for the type of mineralization studied all methods classified the tonnages similarly. The methods are illustrated using an exploration drill hole data set from a large Brazilian coal deposit

  18. Uncertainty estimates for theoretical atomic and molecular data

    International Nuclear Information System (INIS)

    Chung, H-K; Braams, B J; Bartschat, K; Császár, A G; Drake, G W F; Kirchner, T; Kokoouline, V; Tennyson, J

    2016-01-01

    Sources of uncertainty are reviewed for calculated atomic and molecular data that are important for plasma modeling: atomic and molecular structures and cross sections for electron-atom, electron-molecule, and heavy particle collisions. We concentrate on model uncertainties due to approximations to the fundamental many-body quantum mechanical equations and we aim to provide guidelines to estimate uncertainties as a routine part of computations of data for structure and scattering. (topical review)

  19. Expanded uncertainty estimation methodology in determining the sandy soils filtration coefficient

    Science.gov (United States)

    Rusanova, A. D.; Malaja, L. D.; Ivanov, R. N.; Gruzin, A. V.; Shalaj, V. V.

    2018-04-01

    A methodology for estimating the combined standard uncertainty in determining the filtration coefficient of sandy soils has been developed. Laboratory studies were carried out, resulting in the determination of the filtration coefficient and an estimate of its combined uncertainty.
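
    A GUM-style combination of uncertainty components, as used in studies like this one, can be sketched as follows. The component names, sensitivity coefficients, and standard uncertainties are hypothetical, and the coverage factor k = 2 corresponds to roughly 95% coverage.

```python
import math

# Hypothetical standard-uncertainty budget for a filtration coefficient
# k_f (m/day): each entry is (sensitivity coefficient * standard uncertainty).
components = {
    "water head reading": 0.8 * 0.05,
    "sample cross-section": 1.2 * 0.03,
    "timing": 0.5 * 0.02,
    "temperature correction": 1.0 * 0.04,
}

# GUM: the combined standard uncertainty is the root sum of squares of the
# sensitivity-weighted component uncertainties (assuming uncorrelated inputs).
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (~95 % coverage).
U = 2.0 * u_c
print(round(u_c, 3), round(U, 3))
```

    The root-sum-of-squares step assumes the input quantities are uncorrelated; correlated inputs would add covariance terms to the sum.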

  20. Uncertainty estimation in nuclear power plant probabilistic safety assessment

    International Nuclear Information System (INIS)

    Guarro, S.B.; Cummings, G.E.

    1989-01-01

    Probabilistic Risk Assessment (PRA) was introduced in the nuclear industry and the nuclear regulatory process in 1975 with the publication of the Reactor Safety Study by the U.S. Nuclear Regulatory Commission. Almost fifteen years later, the state-of-the-art in this field has been expanded and sharpened in many areas, and about thirty-five plant-specific PRAs (Probabilistic Risk Assessments) have been performed by the nuclear utility companies or by the U.S. Nuclear Regulatory Commission. Among the areas where the most evident progress has been made in PRA and PSA (Probabilistic Safety Assessment, as these studies are more commonly referred to in the international community outside the U.S.) is the development of a consistent framework for the identification of sources of uncertainty and the estimation of their magnitude as it impacts various risk measures. Techniques to propagate uncertainty in reliability data through the risk models and display its effect on the top level risk estimates were developed in the early PRAs. The Seismic Safety Margin Research Program (SSMRP) study was the first major risk study to develop an approach to deal explicitly with uncertainty in risk estimates introduced not only by uncertainty in component reliability data, but by the incomplete state of knowledge of the assessor(s) with regard to basic phenomena that may trigger and drive a severe accident. More recently NUREG-1150, another major study of reactor risk sponsored by the NRC, has expanded risk uncertainty estimation and analysis into the realm of model uncertainty related to the relatively poorly known post-core-melt phenomena which determine the behavior of the molten core and of the reactor containment structures

  1. Proof of concept and dose estimation with binary responses under model uncertainty.

    Science.gov (United States)

    Klingenberg, B

    2009-01-30

    This article suggests a unified framework for testing Proof of Concept (PoC) and estimating a target dose for the benefit of a more comprehensive, robust and powerful analysis in phase II or similar clinical trials. From a pre-specified set of candidate models, we choose the ones that best describe the observed dose-response. To decide which models, if any, significantly pick up a dose effect, we construct the permutation distribution of the minimum P-value over the candidate set. This allows us to find critical values and multiplicity adjusted P-values that control the familywise error rate of declaring any spurious effect in the candidate set as significant. Model averaging is then used to estimate a target dose. Popular single or multiple contrast tests for PoC, such as the Cochran-Armitage, Dunnett or Williams tests, are only optimal for specific dose-response shapes and do not provide target dose estimates with confidence limits. A thorough evaluation and comparison of our approach to these tests reveal that its power is as good or better in detecting a dose-response under various shapes with many more additional benefits: It incorporates model uncertainty in PoC decisions and target dose estimation, yields confidence intervals for target dose estimates and extends to more complicated data structures. We illustrate our method with the analysis of a Phase II clinical trial. Copyright (c) 2008 John Wiley & Sons, Ltd.
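
    The permutation step described in this record, maximizing a test statistic over a candidate set of dose-response shapes to control the familywise error rate, can be sketched as follows. The group sizes, response counts, and contrast coefficients are invented for illustration, and standardized linear contrasts stand in for the article's model-based tests.

```python
import math
import random
import statistics

random.seed(3)

# Hypothetical binary responses per dose group (0 = no response, 1 = response).
groups = [[0] * 18 + [1] * 2,   # placebo
          [0] * 16 + [1] * 4,   # low dose
          [0] * 13 + [1] * 7,   # mid dose
          [0] * 11 + [1] * 9]   # high dose

# Candidate dose-response shapes expressed as contrast coefficients.
contrasts = {"linear": [-3, -1, 1, 3],
             "emax": [-3, 1, 1, 1],
             "sigmoid": [-1, -1, 1, 1]}

def max_stat(gs):
    """Maximum standardized contrast statistic over the candidate set."""
    means = [statistics.mean(g) for g in gs]
    return max(sum(c * m for c, m in zip(cs, means)) / math.sqrt(sum(c * c for c in cs))
               for cs in contrasts.values())

observed = max_stat(groups)

# Permutation distribution of the maximum statistic under 'no dose effect':
# referring the observed maximum to this distribution controls the
# familywise error rate over the whole candidate set.
pooled = [y for g in groups for y in g]
sizes = [len(g) for g in groups]
n_perm, count = 2000, 0
for _ in range(n_perm):
    random.shuffle(pooled)
    perm, i = [], 0
    for s in sizes:
        perm.append(pooled[i:i + s])
        i += s
    if max_stat(perm) >= observed:
        count += 1

p_adjusted = (count + 1) / (n_perm + 1)  # multiplicity-adjusted P-value
print(round(p_adjusted, 3))
```

    A small adjusted P-value declares Proof of Concept; the subsequent model-averaging step for target dose estimation is not sketched here.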

  2. The estimation of uncertainty of radioactivity measurement on gamma counters in radiopharmacy

    International Nuclear Information System (INIS)

    Jovanovic, M.S.; Orlic, M.; Vranjes, S.; Stamenkovic, Lj. (e-mail address of corresponding author: nikijov@vin.bg.ac.yu)

    2005-01-01

    In this paper the estimation of the uncertainty of radioactivity measurements on a gamma counter in the Laboratory for Radioisotopes is presented. The uncertainty components that are important for these measurements are identified and taken into account in estimating the measurement uncertainty. (author)

  3. Estimating uncertainty in multivariate responses to selection.

    Science.gov (United States)

    Stinchcombe, John R; Simonsen, Anna K; Blows, Mark W

    2014-04-01

    Predicting the responses to natural selection is one of the key goals of evolutionary biology. Two of the challenges in fulfilling this goal have been the realization that many estimates of natural selection might be highly biased by environmentally induced covariances between traits and fitness, and that many estimated responses to selection do not incorporate or report uncertainty in the estimates. Here we describe the application of a framework that blends the merits of the Robertson-Price Identity approach and the multivariate breeder's equation to address these challenges. The approach allows genetic covariance matrices, selection differentials, selection gradients, and responses to selection to be estimated without environmentally induced bias, direct and indirect selection and responses to selection to be distinguished, and if implemented in a Bayesian-MCMC framework, statistically robust estimates of uncertainty on all of these parameters to be made. We illustrate our approach with a worked example of previously published data. More generally, we suggest that applying both the Robertson-Price Identity and the multivariate breeder's equation will facilitate hypothesis testing about natural selection, genetic constraints, and evolutionary responses. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  4. Aboveground Forest Biomass Estimation with Landsat and LiDAR Data and Uncertainty Analysis of the Estimates

    Directory of Open Access Journals (Sweden)

    Dengsheng Lu

    2012-01-01

    Full Text Available Landsat Thematic Mapper (TM) imagery has long been the dominant data source, and LiDAR has recently added an important new structural data stream for forest biomass estimation. On the other hand, research on forest biomass uncertainty analysis has only recently received sufficient attention, owing to the difficulty of collecting reference data. This paper provides a brief overview of current forest biomass estimation methods using both TM and LiDAR data. A case study is then presented that demonstrates the forest biomass estimation methods and uncertainty analysis. Results indicate that Landsat TM data can provide adequate biomass estimates for secondary succession but are not suitable for mature forest biomass estimates due to data saturation problems. LiDAR can overcome TM’s shortcoming, providing better biomass estimation performance, but has not been extensively applied in practice due to data availability constraints. The uncertainty analysis indicates that various sources affect the performance of forest biomass/carbon estimation. That said, the clearly dominant sources of uncertainty are the variation in input sample plot data and the data saturation problem of optical sensors. A possible solution for increasing the confidence in forest biomass estimates is to integrate the strengths of multisensor data.

  5. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, A.; Yasue, Y.; Endo, T.; Kodama, Y.; Ohoka, Y.; Tatsumi, M.

    2012-01-01

    An uncertainty estimation method is proposed for core safety parameters for which measurement values are not obtained. We empirically recognize correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the relative power of the assembly at the corresponding position. Correlations of uncertainties among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of the core parameters. The estimated correlations among core safety parameters are verified through the direct Monte Carlo sampling method. Once the correlation of uncertainties among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which no measurement value is available. Furthermore, the correlations can also be used to reduce the uncertainties of core safety parameters. (authors)
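    The covariance-based propagation described above can be sketched with the standard "sandwich rule": if S holds the sensitivity coefficients of the core parameters to the cross sections and Σx is the cross-section covariance matrix, the covariance among core-parameter uncertainties is S Σx Sᵀ, from which the cross-correlation follows. All matrix values below are invented for illustration.

```python
import numpy as np

# Hypothetical covariance matrix of three cross-section parameters.
cov_x = np.array([[4.0, 1.0, 0.0],
                  [1.0, 2.0, 0.5],
                  [0.0, 0.5, 1.0]]) * 1e-6

# Hypothetical sensitivity coefficients of two core safety parameters
# (e.g. control rod worth, assembly relative power) to those cross sections.
S = np.array([[0.8, 0.1, 0.3],
              [0.6, 0.4, 0.1]])

# Sandwich rule: covariance of the core-parameter uncertainties.
cov_p = S @ cov_x @ S.T

# Cross-correlation between the two core parameters' uncertainties.
sig = np.sqrt(np.diag(cov_p))
corr = cov_p[0, 1] / (sig[0] * sig[1])
```

Given `corr` and the measured uncertainty of one parameter, the uncertainty of the unmeasured parameter can then be inferred, which is the use case the abstract describes.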

  6. On the uncertainties in effective dose estimates of adult CT head scans

    International Nuclear Information System (INIS)

    Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E.

    2008-01-01

    Estimates of the effective dose to adult patients from computed tomography (CT) head scanning can be calculated using a number of different methods. These estimates can be used for a variety of purposes, such as improving scanning protocols, comparing different CT imaging centers, and weighing the benefits of the scan against the risk of radiation-induced cancer. The question arises: What is the uncertainty in these effective dose estimates? This study calculates the uncertainty of effective dose estimates produced by three computer programs (CT-EXPO, CTDosimetry, and ImpactDose) and one method that makes use of dose-length product (DLP) values. Uncertainties were calculated in accordance with an internationally recognized uncertainty analysis guide. For each of the four methods, the smallest and largest overall uncertainties (stated at the 95% confidence interval) were: 20%-31% (CT-EXPO), 15%-28% (CTDosimetry), 20%-36% (ImpactDose), and 22%-32% (DLP), respectively. The overall uncertainties for each method vary due to differences in the uncertainties of factors used in each method. The smallest uncertainties apply when the CT dose index for the scanner has been measured using a calibrated pencil ionization chamber
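    An internationally recognized uncertainty analysis of this kind combines the relative standard uncertainties of the independent input factors in quadrature and expands the result with a coverage factor (k = 2 for roughly 95% confidence). The component values below are invented for illustration and are not the ones from the study.

```python
import math

# Hypothetical relative standard uncertainties (%) of factors entering a
# DLP-based effective dose estimate: the DLP reading, the DLP-to-effective-dose
# conversion coefficient, and the scanner calibration.
components = {"DLP": 5.0, "k_factor": 12.0, "calibration": 8.0}

# Combined standard uncertainty in quadrature, then expanded with k = 2.
u_c = math.sqrt(sum(u ** 2 for u in components.values()))
U_95 = 2.0 * u_c  # expanded uncertainty at ~95% confidence
```

With these invented components the expanded uncertainty lands near 30%, the same order as the 20%-36% ranges reported for the four methods.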

  7. Uncertainty Estimates: A New Editorial Standard

    International Nuclear Information System (INIS)

    Drake, Gordon W.F.

    2014-01-01

    Full text: The objective of achieving higher standards for uncertainty estimates in the publication of theoretical data for atoms and molecules requires a concerted effort by both the authors of papers and the editors who send them out for peer review. In April, 2011, the editors of Physical Review A published an Editorial announcing a new standard that uncertainty estimates would be required whenever practicable, and in particular in the following circumstances: 1. If the authors claim high accuracy, or improvements on the accuracy of previous work. 2. If the primary motivation for the paper is to make comparisons with present or future high precision experimental measurements. 3. If the primary motivation is to provide interpolations or extrapolations of known experimental measurements. The new policy means that papers that do not meet these standards are not sent out for peer review until they have been suitably revised, and the authors are so notified immediately upon receipt. The policy has now been in effect for three years. (author)

  8. Guidelines for uncertainty analysis developed for the participants in the BIOMOVS II study

    International Nuclear Information System (INIS)

    Baeverstam, U.; Davis, P.; Garcia-Olivares, A.; Henrich, E.; Koch, J.

    1993-07-01

    This report has been produced to provide guidelines for uncertainty analysis for use by participants in the BIOMOVS II study. It is hoped that others with an interest in modelling contamination in the biosphere will also find it useful. The report has been prepared by members of the Uncertainty and Validation Working Group and has been reviewed by other BIOMOVS II participants. The opinions expressed are those of the authors and should not be taken to represent the views of the BIOMOVS II sponsors or other BIOMOVS II participating organisations.

  9. Guidelines for uncertainty analysis developed for the participants in the BIOMOVS II study

    Energy Technology Data Exchange (ETDEWEB)

    Baeverstam, U; Davis, P; Garcia-Olivares, A; Henrich, E; Koch, J

    1993-07-01

    This report has been produced to provide guidelines for uncertainty analysis for use by participants in the BIOMOVS II study. It is hoped that others with an interest in modelling contamination in the biosphere will also find it useful. The report has been prepared by members of the Uncertainty and Validation Working Group and has been reviewed by other BIOMOVS II participants. The opinions expressed are those of the authors and should not be taken to represent the views of the BIOMOVS II sponsors or other BIOMOVS II participating organisations.

  10. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    Energy Technology Data Exchange (ETDEWEB)

    Bruschewski, Martin; Schiffer, Heinz-Peter [Technische Universitaet Darmstadt, Institute of Gas Turbines and Aerospace Propulsion, Darmstadt (Germany); Freudenhammer, Daniel [Technische Universitaet Darmstadt, Institute of Fluid Mechanics and Aerodynamics, Center of Smart Interfaces, Darmstadt (Germany); Buchenberg, Waltraud B. [University Medical Center Freiburg, Medical Physics, Department of Radiology, Freiburg (Germany); Grundmann, Sven [University of Rostock, Institute of Fluid Mechanics, Rostock (Germany)

    2016-05-15

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75% is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented. (orig.)

  11. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    Science.gov (United States)

    Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven

    2016-05-01

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75 % is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented.

  12. Incorporation of various uncertainties in dependent failure-probability estimation

    International Nuclear Information System (INIS)

    Samanta, P.K.; Mitra, S.P.

    1982-01-01

    This paper describes an approach that allows the incorporation of various types of uncertainties in the estimation of dependent failure (common mode failure) probability. The types of uncertainties considered are attributable to data, modeling and coupling. The method developed is applied to a class of dependent failures, i.e., multiple human failures during testing, maintenance and calibration. Estimation of these failures is critical as they have been shown to be significant contributors to core melt probability in pressurized water reactors

  13. Triangular and Trapezoidal Fuzzy State Estimation with Uncertainty on Measurements

    Directory of Open Access Journals (Sweden)

    Mohammad Sadeghi Sarcheshmah

    2012-01-01

    Full Text Available In this paper, a new method for uncertainty analysis in fuzzy state estimation is proposed. The uncertainty is expressed in the measurements. Uncertainties in measurements are modelled with different fuzzy membership functions (triangular and trapezoidal). To find the fuzzy distribution of any state variable, the problem is formulated as a constrained linear programming (LP) optimization. The viability of the proposed method is verified against the results obtained from weighted least squares (WLS) and fuzzy state estimation (FSE) on the 6-bus system and on the IEEE 14- and 30-bus systems.

  14. Stability Analysis for Li-Ion Battery Model Parameters and State of Charge Estimation by Measurement Uncertainty Consideration

    Directory of Open Access Journals (Sweden)

    Shifei Yuan

    2015-07-01

    Full Text Available Accurate estimation of model parameters and state of charge (SoC) is crucial for the lithium-ion battery management system (BMS). In this paper, the stability of the model parameters and SoC estimation under measurement uncertainty is evaluated by three different factors: (i) sampling periods of 1/0.5/0.1 s; (ii) current sensor precisions of ±5/±50/±500 mA; and (iii) voltage sensor precisions of ±1/±2.5/±5 mV. Firstly, the numerical model stability analysis and parametric sensitivity analysis for the battery model parameters are conducted under sampling frequencies of 1–50 Hz. A perturbation analysis of the effect of current/voltage measurement uncertainty on model parameter variation is performed theoretically. Secondly, the impact of the three factors on the model parameters and SoC estimation is evaluated with the federal urban driving sequence (FUDS) profile. The bias-correction recursive least squares (CRLS) and adaptive extended Kalman filter (AEKF) algorithms are adopted to estimate the model parameters and SoC jointly. Finally, the simulation results are compared and several insightful findings are drawn. For the given battery model and parameter estimation algorithm, the sampling period and the current/voltage sampling accuracy have a non-negligible effect on the estimation results of the model parameters. This research reveals the influence of measurement uncertainty on model parameter estimation, providing guidelines for selecting a reasonable sampling period and current/voltage sensor sampling precisions in engineering applications.
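    The joint estimation step can be illustrated with a minimal recursive least squares loop (with forgetting factor), the core of the CRLS idea. The model, coefficients, and sensor precision below are invented for illustration; a real battery ECM would use measured current/voltage data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear-in-parameters model: voltage deviation as a function
# of two regressors (e.g. current and a filtered current history).
theta_true = np.array([0.05, 0.8])
n = 200
X = rng.normal(size=(n, 2))
sigma_v = 2.5e-3                       # ±2.5 mV voltage sensor precision
y = X @ theta_true + rng.normal(0.0, sigma_v, n)

# Recursive least squares with forgetting factor lam.
lam = 0.99
theta = np.zeros(2)                    # parameter estimate
P = np.eye(2) * 1e3                    # estimate covariance (large initial)
for xk, yk in zip(X, y):
    K = P @ xk / (lam + xk @ P @ xk)   # gain
    theta = theta + K * (yk - xk @ theta)
    P = (P - np.outer(K, xk) @ P) / lam
```

Re-running this sketch with a larger `sigma_v` widens the spread of `theta`, which is the sensor-precision effect the study quantifies.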

  15. Uncertainty quantification of surface-water/groundwater exchange estimates in large wetland systems using Python

    Science.gov (United States)

    Hughes, J. D.; Metz, P. A.

    2014-12-01

    Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. 
We will discuss
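    The unconstrained Monte Carlo approach with Latin Hypercube sampling can be sketched in plain NumPy (a hand-rolled stratified sampler stands in for the pyDOE call; all water-budget terms and ranges are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def lhs_uniform(n, low, high):
    """Latin Hypercube sample of one uniform variable:
    one draw per equal-probability stratum, then shuffled."""
    u = (np.arange(n) + rng.random(n)) / n
    return low + rng.permutation(u) * (high - low)

n = 10_000
# Hypothetical monthly water-budget terms (mm) with uncertainty ranges;
# ET gets the widest range, mirroring its dominant role in the abstract.
rain   = lhs_uniform(n, 90.0, 110.0)   # rainfall
et     = lhs_uniform(n, 60.0, 100.0)   # land-cover based ET
canal  = lhs_uniform(n, 18.0, 22.0)    # canal discharge out
dstore = lhs_uniform(n, -5.0, 5.0)     # storage change

# SWGW exchange as the residual of the water budget.
swgw = rain - et - canal - dstore

# Uncertainty of the residual-based estimate.
lo, hi = np.percentile(swgw, [2.5, 97.5])
```

Because the ET range dominates the input variances here, it also dominates the width of the `[lo, hi]` interval, the same qualitative conclusion the Monte Carlo integration exposes.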

  16. REDD+ emissions estimation and reporting: dealing with uncertainty

    International Nuclear Information System (INIS)

    Pelletier, Johanne; Potvin, Catherine; Martin, Davy

    2013-01-01

    The United Nations Framework Convention on Climate Change (UNFCCC) defined the technical and financial modalities of policy approaches and incentives to reduce emissions from deforestation and forest degradation in developing countries (REDD+). Substantial technical challenges hinder precise and accurate estimation of forest-related emissions and removals, as well as the setting and assessment of reference levels. These challenges could limit country participation in REDD+, especially if REDD+ emission reductions were to meet quality standards required to serve as compliance grade offsets for developed countries’ emissions. Using Panama as a case study, we tested the matrix approach proposed by Bucki et al (2012 Environ. Res. Lett. 7 024005) to perform sensitivity and uncertainty analysis distinguishing between ‘modelling sources’ of uncertainty, which refers to model-specific parameters and assumptions, and ‘recurring sources’ of uncertainty, which refers to random and systematic errors in emission factors and activity data. The sensitivity analysis estimated differences in the resulting fluxes ranging from 4.2% to 262.2% of the reference emission level. The classification of fallows and the carbon stock increment or carbon accumulation of intact forest lands were the two key parameters showing the largest sensitivity. The highest error propagated using Monte Carlo simulations was caused by modelling sources of uncertainty, which calls for special attention to ensure consistency in REDD+ reporting which is essential for securing environmental integrity. Due to the role of these modelling sources of uncertainty, the adoption of strict rules for estimation and reporting would favour comparability of emission reductions between countries. We believe that a reduction of the bias in emission factors will arise, among other things, from a globally concerted effort to improve allometric equations for tropical forests. Public access to datasets and methodology

  17. REDD+ emissions estimation and reporting: dealing with uncertainty

    Science.gov (United States)

    Pelletier, Johanne; Martin, Davy; Potvin, Catherine

    2013-09-01

    The United Nations Framework Convention on Climate Change (UNFCCC) defined the technical and financial modalities of policy approaches and incentives to reduce emissions from deforestation and forest degradation in developing countries (REDD+). Substantial technical challenges hinder precise and accurate estimation of forest-related emissions and removals, as well as the setting and assessment of reference levels. These challenges could limit country participation in REDD+, especially if REDD+ emission reductions were to meet quality standards required to serve as compliance grade offsets for developed countries’ emissions. Using Panama as a case study, we tested the matrix approach proposed by Bucki et al (2012 Environ. Res. Lett. 7 024005) to perform sensitivity and uncertainty analysis distinguishing between ‘modelling sources’ of uncertainty, which refers to model-specific parameters and assumptions, and ‘recurring sources’ of uncertainty, which refers to random and systematic errors in emission factors and activity data. The sensitivity analysis estimated differences in the resulting fluxes ranging from 4.2% to 262.2% of the reference emission level. The classification of fallows and the carbon stock increment or carbon accumulation of intact forest lands were the two key parameters showing the largest sensitivity. The highest error propagated using Monte Carlo simulations was caused by modelling sources of uncertainty, which calls for special attention to ensure consistency in REDD+ reporting which is essential for securing environmental integrity. Due to the role of these modelling sources of uncertainty, the adoption of strict rules for estimation and reporting would favour comparability of emission reductions between countries. We believe that a reduction of the bias in emission factors will arise, among other things, from a globally concerted effort to improve allometric equations for tropical forests. Public access to datasets and methodology

  18. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model, to simulate the runoff from a small catchment of Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...

  19. Estimation of the uncertainty in wind power forecasting

    International Nuclear Information System (INIS)

    Pinson, P.

    2006-03-01

    Wind power is experiencing tremendous development of its installed capacity in Europe. However, the intermittency of wind generation causes difficulties in the management of power systems. In the context of deregulated electricity markets, wind energy is also penalized by its intermittent nature. It is recognized today that forecasting wind power for horizons up to 2-3 days ahead eases the integration of wind generation. Wind power forecasts are traditionally provided as point predictions, which correspond to the most likely power production for a given horizon. That information alone is not sufficient for developing optimal management or trading strategies. Therefore, we investigate possible ways of estimating the uncertainty of wind power forecasts. The characteristics of prediction uncertainty are described through a thorough study of the performance of some state-of-the-art approaches, and by underlining the influence of variables (e.g. the level of predicted power) on the distributions of prediction errors. Then, a generic method for the estimation of prediction intervals is introduced. This statistical method is non-parametric and utilizes fuzzy logic concepts to integrate expertise on the characteristics of prediction uncertainty. By estimating several prediction intervals at once, one obtains predictive distributions of wind power output. The proposed method is evaluated in terms of its reliability, sharpness and resolution. In parallel, we explore the potential use of ensemble predictions for skill forecasting. Wind power ensemble forecasts are obtained either by converting meteorological ensembles (from ECMWF and NCEP) to power or by applying a poor man's temporal approach. A proposal for the definition of prediction risk indices is given, reflecting the disagreement between ensemble members over a set of successive look-ahead times.
Such prediction risk indices may comprise a more comprehensive signal on the expected level
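    The level-dependent behaviour of prediction errors can be illustrated by binning past forecast errors by predicted power level and taking empirical quantiles per bin, a simplified non-parametric stand-in for the thesis's fuzzy-logic interval method. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic history: normalized point forecasts and observed outcomes,
# with error spread that grows with the predicted power level.
pred = rng.uniform(0.0, 1.0, 2000)
obs = np.clip(pred + rng.normal(0.0, 0.05 + 0.1 * pred), 0.0, 1.0)
err = obs - pred

# Bin past errors by predicted level; take empirical 10%/90% quantiles
# per bin to form level-dependent 80% prediction intervals.
edges = [0.25, 0.5, 0.75]
bins = np.digitize(pred, edges)
intervals = {}
for b in range(4):
    e = err[bins == b]
    intervals[b] = (np.quantile(e, 0.1), np.quantile(e, 0.9))

# 80% interval for a new point forecast of 0.6.
new_pred = 0.6
b = int(np.digitize(new_pred, edges))
low = new_pred + intervals[b][0]
high = new_pred + intervals[b][1]
```

Estimating several such quantile pairs at once (5%/95%, 10%/90%, ...) yields the predictive distribution of wind power output mentioned in the abstract.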

  20. Uncertainty of feedback and state estimation determines the speed of motor adaptation

    Directory of Open Access Journals (Sweden)

    Kunlin Wei

    2010-05-01

    Full Text Available Humans can adapt their motor behaviors to deal with ongoing changes. To achieve this, the nervous system needs to estimate central variables for our movement based on past knowledge and new feedback, both of which are uncertain. In the Bayesian framework, rates of adaptation characterize how noisy feedback is in comparison to the uncertainty of the state estimate. The predictions of Bayesian models are intuitive: the nervous system should adapt slower when sensory feedback is more noisy and faster when its state estimate is more uncertain. Here we want to quantitatively understand how uncertainty in these two factors affects motor adaptation. In a hand reaching experiment we measured trial-by-trial adaptation to a randomly changing visual perturbation to characterize the way the nervous system handles uncertainty in state estimation and feedback. We found both qualitative predictions of Bayesian models confirmed. Our study provides evidence that the nervous system represents and uses uncertainty in state estimate and feedback during motor adaptation.
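    The Bayesian prediction in this abstract — slower adaptation with noisier feedback, faster with a more uncertain state estimate — can be illustrated with a scalar Kalman filter, where the steady-state gain plays the role of the adaptation rate. The noise values are hypothetical.

```python
# Scalar Kalman-filter view of trial-by-trial motor adaptation:
# the adaptation rate (Kalman gain K) trades off state-estimate
# uncertainty P against feedback noise R.
Q = 0.01                     # trial-to-trial drift of the perturbation
R_NOISY, R_CLEAN = 1.0, 0.1  # hypothetical feedback noise variances

def steady_state_gain(Q, R, n_trials=200):
    P = 1.0
    K = 0.0
    for _ in range(n_trials):   # iterate the filter to steady state
        P = P + Q               # prediction step inflates uncertainty
        K = P / (P + R)         # gain = adaptation rate
        P = (1.0 - K) * P       # measurement update shrinks it
    return K

k_noisy = steady_state_gain(Q, R_NOISY)
k_clean = steady_state_gain(Q, R_CLEAN)
```

Here `k_clean > k_noisy`: cleaner feedback earns a larger gain, i.e. faster trial-by-trial adaptation, matching the qualitative Bayesian prediction the experiment confirmed.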

  1. Random Forests (RFs) for Estimation, Uncertainty Prediction and Interpretation of Monthly Solar Potential

    Science.gov (United States)

    Assouline, Dan; Mohajeri, Nahid; Scartezzini, Jean-Louis

    2017-04-01

    Solar energy is clean, widely available, and arguably the most promising renewable energy resource. Taking full advantage of solar power, however, requires a deep understanding of its patterns and dependencies in space and time. The recent advances in Machine Learning brought powerful algorithms to estimate the spatio-temporal variations of solar irradiance (the power per unit area received from the Sun, W/m2), using local weather and terrain information. Such algorithms include Deep Learning (e.g. Artificial Neural Networks), or kernel methods (e.g. Support Vector Machines). However, most of these methods have some disadvantages, as they: (i) are complex to tune, (ii) are mainly used as black boxes, offering no interpretation of the variables' contributions, (iii) often do not provide uncertainty predictions (Assouline et al., 2016). To provide a reasonable solar mapping with good accuracy, these gaps would ideally need to be filled. We present here simple steps using one ensemble learning algorithm namely, Random Forests (Breiman, 2001) to (i) estimate monthly solar potential with good accuracy, (ii) provide information on the contribution of each feature in the estimation, and (iii) offer prediction intervals for each point estimate. We have selected Switzerland as an example. Using a Digital Elevation Model (DEM) along with monthly solar irradiance time series and weather data, we build monthly solar maps for Global Horizontal Irradiance (GHI), Diffuse Horizontal Irradiance (DHI), and Extraterrestrial Irradiance (EI). The weather data include monthly values for temperature, precipitation, sunshine duration, and cloud cover. In order to explain the impact of each feature on the solar irradiance of each point estimate, we extend the contribution method (Kuz'min et al., 2011) to a regression setting. Contribution maps for all features can then be computed for each solar map.
This provides precious information on the spatial variation of the features impact all
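    The point-estimate-plus-interval idea can be sketched with a dependency-free bootstrap ensemble (a bagged ensemble of linear fits stands in for the Random Forest here): the prediction interval comes from the spread of ensemble members' predictions. Data, features, and coefficients are all invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic monthly data: three weather/terrain features and a solar
# irradiance target (coefficients invented for illustration).
n = 500
X = rng.normal(size=(n, 3))
y = 150.0 + 20.0 * X[:, 1] - 15.0 * X[:, 2] + rng.normal(0.0, 5.0, n)

# Bootstrap ensemble: refit on resampled data, keep each member's fit.
Xd = np.column_stack([np.ones(n), X])          # add intercept column
members = []
for _ in range(200):
    idx = rng.integers(0, n, n)                # bootstrap resample
    coef, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
    members.append(coef)
members = np.array(members)

# Point estimate and 90% interval for one new location from member spread.
x_new = np.array([1.0, 0.5, 1.0, -0.5])
preds = members @ x_new
point = preds.mean()
low, high = np.percentile(preds, [5, 95])
```

In a Random Forest the same recipe applies with per-tree predictions in place of per-member linear fits, and feature contributions take the place of the coefficients.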

  2. Uncertainty estimation in nuclear material weighing

    Energy Technology Data Exchange (ETDEWEB)

    Thaure, Bernard [Institut de Radioprotection et de Surete Nucleaire, Fontenay aux Roses, (France)

    2011-12-15

    The assessment of nuclear material quantities located in nuclear plants requires knowledge of the additions and subtractions of amounts of different types of materials. Most generally, the quantity of nuclear material held is deduced from three parameters: a mass (or a volume of product); a concentration of nuclear material in the product considered; and an isotopic composition. The global uncertainty associated with a nuclear material quantity depends upon the confidence level of the results obtained in the measurement of each parameter. Uncertainties are generally estimated by considering five influencing parameters (Ishikawa's rule): the material itself; the measurement system; the applied method; the environmental conditions; and the operator. A good practice guide for dealing with weighing errors and the problems encountered is presented in the paper.

  3. Uncertainties in fatal cancer risk estimates used in radiation protection

    International Nuclear Information System (INIS)

    Kai, Michiaki

    1999-01-01

    Although ICRP and NCRP had not previously described the details of the uncertainties in the cancer risk estimates used in radiation protection, NCRP first reported the results of an uncertainty analysis in 1997 (NCRP Report No. 126), which is summarized in this paper. The NCRP report pointed out that the uncertainty arises from five factors: uncertainty in the epidemiological studies, in dose assessment, in transferring the estimates to risk assessment, in risk projection, and in extrapolation to low dose/dose rate. These individual factors were analyzed statistically to obtain the distribution of the lifetime risk coefficient (% per Sv) for the probability of cancer death in the US population: the mean value was 3.99 x 10^-2/Sv; the median, 3.38 x 10^-2/Sv; the GSD (geometric standard deviation), 1.83; and the 95% confidence limits, 1.2-8.84 x 10^-2/Sv. The mean value is smaller than that of the ICRP recommendation (5 x 10^-2/Sv), indicating that the value has an uncertainty factor of 2.5-3. Moreover, the most important contributor was shown to be the uncertainty in the DDREF (dose and dose-rate effectiveness factor). (K.H.)
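    The lognormal summary statistics quoted above are internally consistent and easy to reproduce: for a lognormal with median m and geometric standard deviation GSD, the mean is m·exp(ln²GSD/2) and an approximate 95% interval is m·GSD^±1.96. (The idealized interval computed here need not match the reported limits exactly, since the underlying NCRP distribution is only approximately lognormal.)

```python
import math

# Parameters quoted above for the lifetime risk coefficient (% per Sv).
median = 3.38e-2
gsd = 1.83

mu = math.log(median)       # log-scale location
sigma = math.log(gsd)       # log-scale spread

mean = math.exp(mu + sigma ** 2 / 2.0)   # lognormal mean
ci_low = median / gsd ** 1.96            # idealized ~95% limits
ci_high = median * gsd ** 1.96
```

The computed mean comes out near the reported 3.99 x 10^-2/Sv, confirming the quoted median/GSD/mean triple is coherent.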

  4. Uncertainty in techno-economic estimates of cellulosic ethanol production due to experimental measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Vicari Kristin J

    2012-04-01

    Full Text Available Abstract Background Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE) model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP) estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and the conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation) as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS) in the biomass slurries. Conclusions We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements.
This result sets a lower bound on the error bars of
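The propagation described in this abstract, from primary-measurement uncertainty through a process model to an economic output, can be sketched with a simple Monte Carlo simulation. The yields, standard uncertainties, and toy cost model below are illustrative stand-ins, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical primary yields as (mean, standard uncertainty) -- illustrative
# values, not the ones from the study.
xylose = rng.normal(0.75, 0.04, N)     # pretreatment xylose yield
glucose = rng.normal(0.85, 0.03, N)    # enzymatic-hydrolysis glucose yield
ethanol = rng.normal(0.90, 0.02, N)    # fermentation ethanol yield

# Toy economic model: selling price scales inversely with overall conversion,
# normalized so that the nominal yields give a $2.00/gal price.
base_cost = 2.0
nominal = 0.75 * 0.85 * 0.90
mesp = base_cost * nominal / (xylose * glucose * ethanol)

print(f"MESP = {mesp.mean():.2f} +/- {mesp.std(ddof=1):.2f} $/gal")
```

The spread of the `mesp` samples is the minimum uncertainty the primary measurements impose on the economic output, regardless of how precise the economic assumptions are.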

  5. A statistical methodology for quantification of uncertainty in best estimate code physical models

    International Nuclear Information System (INIS)

    Vinai, Paolo; Macian-Juan, Rafael; Chawla, Rakesh

    2007-01-01

    A novel uncertainty assessment methodology, based on a statistical non-parametric approach, is presented in this paper. It achieves quantification of code physical model uncertainty by making use of model performance information obtained from studies of appropriate separate-effect tests. Uncertainties are quantified in the form of estimated probability density functions (pdf's), calculated with a newly developed non-parametric estimator. The new estimator objectively predicts the probability distribution of the model's 'error' (its uncertainty) from databases reflecting the model's accuracy on the basis of available experiments. The methodology is completed by applying a novel multi-dimensional clustering technique based on the comparison of model error samples with the Kruskal-Wallis test. This takes into account the fact that a model's uncertainty depends on system conditions, since a best estimate code can give predictions for which the accuracy is affected by the regions of the physical space in which the experiments occur. The final result is an objective, rigorous and accurate manner of assigning uncertainty to code models, i.e. the input information needed by code uncertainty propagation methodologies used for assessing the accuracy of best estimate codes in nuclear systems analysis. The new methodology has been applied to the quantification of the uncertainty in the RETRAN-3D void model and then used in the analysis of an independent separate-effect experiment. This has clearly demonstrated the basic feasibility of the approach, as well as its advantages in yielding narrower uncertainty bands in quantifying the code's accuracy for void fraction predictions
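The clustering step described above rests on the Kruskal-Wallis test, which checks whether error samples from different regions of the physical space may be pooled into a single pdf. A minimal sketch with synthetic error samples (the data, region names, and the 0.05 threshold are assumptions for illustration):

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# Hypothetical void-fraction error samples (code prediction minus experiment)
# from three regions of the physical parameter space -- synthetic data.
low_flow = rng.normal(0.00, 0.02, 40)
mid_flow = rng.normal(0.00, 0.02, 40)
high_flow = rng.normal(0.05, 0.02, 40)    # region where the model is biased

# Kruskal-Wallis: can the error samples be pooled into a single pdf?
stat, p = kruskal(low_flow, mid_flow, high_flow)
if p < 0.05:
    print("errors differ across regions -> keep separate uncertainty pdf's")
else:
    print("no detected difference -> pool the samples into one pdf")
```

Being rank-based, the test makes no normality assumption about the error samples, which fits the non-parametric spirit of the methodology.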

  6. Uncertainty in population growth rates: determining confidence intervals from point estimates of parameters.

    Directory of Open Access Journals (Sweden)

    Eleanor S Devenish Nelson

    Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
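The resampling-plus-projection idea in this abstract can be sketched as follows: draw vital rates from distributions reflecting their inferred uncertainty, project each draw through a demographic model, and read confidence limits off the resulting distribution of growth rates. The two-stage Leslie matrix and all numbers below are hypothetical, not the red fox data:

```python
import numpy as np

rng = np.random.default_rng(1)

def growth_rate(survival, fecundity):
    """Dominant eigenvalue (lambda) of a two-stage Leslie matrix."""
    L = np.array([[fecundity[0], fecundity[1]],
                  [survival,     0.0         ]])
    return max(abs(np.linalg.eigvals(L)))

# Hypothetical point estimates with (inferred) standard errors -- illustrative
# numbers, not the red fox data from the study.
s_hat, s_se = 0.55, 0.05               # juvenile survival
f_hat = np.array([0.8, 1.6])           # stage-specific fecundities
f_se = np.array([0.1, 0.2])

# Parametric resampling: draw vital rates, project, and collect lambda.
lams = np.array([
    growth_rate(rng.normal(s_hat, s_se), rng.normal(f_hat, f_se))
    for _ in range(5000)
])
lo, hi = np.percentile(lams, [2.5, 97.5])
print(f"lambda = {growth_rate(s_hat, f_hat):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Because the Monte Carlo error of a percentile shrinks roughly as the square root of sampling effort, halving the width of the interval that comes from the field data requires about a fourfold increase in effort, as the abstract notes.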

  7. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  8. Bayesian analysis for uncertainty estimation of a canopy transpiration model

    Science.gov (United States)

    Samanta, S.; Mackay, D. S.; Clayton, M. K.; Kruger, E. L.; Ewers, B. E.

    2007-04-01

    A Bayesian approach was used to fit a conceptual transpiration model to half-hourly transpiration rates for a sugar maple (Acer saccharum) stand collected over a 5-month period and probabilistically estimate its parameter and prediction uncertainties. The model used the Penman-Monteith equation with the Jarvis model for canopy conductance. This deterministic model was extended by adding a normally distributed error term. This extension enabled using Markov chain Monte Carlo simulations to sample the posterior parameter distributions. The residuals revealed approximate conformance to the assumption of normally distributed errors. However, minor systematic structures in the residuals at fine timescales suggested model changes that would potentially improve the modeling of transpiration. Results also indicated considerable uncertainties in the parameter and transpiration estimates. This simple methodology of uncertainty analysis would facilitate the deductive step during the development cycle of deterministic conceptual models by accounting for these uncertainties while drawing inferences from data.
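The extension described here, a deterministic model plus a normally distributed error term sampled with Markov chain Monte Carlo, can be illustrated with a random-walk Metropolis sampler. The saturating toy model below stands in for the Penman-Monteith/Jarvis formulation, and all data and parameter values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "transpiration" data: a deterministic saturating model plus a
# normally distributed error term. The model y = a*x/(b + x) is a stand-in
# for the Penman-Monteith/Jarvis formulation; all values are invented.
a_true, b_true, sigma = 2.0, 1.5, 0.1
x = np.linspace(0.1, 5.0, 60)
y = a_true * x / (b_true + x) + rng.normal(0.0, sigma, x.size)

def log_post(theta):
    """Log-posterior: Gaussian likelihood, flat prior on positive parameters."""
    a, b = theta
    if a <= 0 or b <= 0:
        return -np.inf
    resid = y - a * x / (b + x)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis sampling of the posterior parameter distribution.
theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]                # discard burn-in

a_est, b_est = chain.mean(axis=0)
print(f"posterior means: a = {a_est:.2f}, b = {b_est:.2f}")
```

The retained chain approximates the posterior parameter distribution; spreads of `chain` columns give the parameter uncertainties, and pushing chain draws back through the model gives prediction uncertainty.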

  9. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
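The James-Stein result invoked above can be demonstrated numerically: when three or more means are estimated simultaneously, shrinking the raw observation toward a fixed point achieves uniformly lower expected squared error than the observation itself. A minimal simulation (the dimension and true mean are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Classic James-Stein setting: one observation x ~ N(mu, I_p), p >= 3.
# Dimension and true mean are arbitrary choices for the demonstration.
p = 10
mu = np.ones(p)

def james_stein(x):
    """Positive-part James-Stein estimator: shrink x toward the origin."""
    shrink = 1.0 - (len(x) - 2) / np.sum(x**2)
    return max(shrink, 0.0) * x

# Compare average squared-error loss of the MLE (x itself) and James-Stein.
mle_loss = js_loss = 0.0
trials = 2000
for _ in range(trials):
    x = rng.normal(mu, 1.0)
    mle_loss += np.sum((x - mu)**2)
    js_loss += np.sum((james_stein(x) - mu)**2)

print(f"risk per trial: MLE {mle_loss / trials:.2f}, James-Stein {js_loss / trials:.2f}")
```

Model averaging plays an analogous role: rather than committing to a single selected model, it shrinks across candidates, which is why Stein-type arguments apply.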

  10. Uncertainty in exposure of underground miners to radon daughters and the effect of uncertainty on risk estimates

    International Nuclear Information System (INIS)

    1989-10-01

    Studies of underground miners provide the principal basis for assessing the risk from radon daughter exposure. An important problem in all epidemiological studies of underground miners is the reliability of the estimates of the miners' exposures. This study examines the various sources of uncertainty in exposure estimation for the principal epidemiologic studies reported in the literature including the temporal and spatial variability of radon sources and, with the passage of time, changes to both mining methods and ventilation conditions. Uncertainties about work histories and the role of other hard rock mining experience are also discussed. The report also describes two statistical approaches, both based on Bayesian methods, by which the effects on the estimated risk coefficient of uncertainty in exposure (WLM) can be examined. One approach requires only an estimate of the cumulative WLM exposure of a group of miners, an estimate of the number of (excess) lung cancers potentially attributable to that exposure, and a specification of the uncertainty about the cumulative exposure of the group. The second approach is based on a linear regression model which incorporates errors (uncertainty) in the independent variable (WLM) and allows the dependent variable (cases) to be Poisson distributed. The method permits the calculation of marginal probability distributions for either slope (risk coefficient) or intercept. The regression model approach is applied to several published data sets from epidemiological studies of miners. Specific results are provided for each data set and apparent differences in risk coefficients are discussed. The studies of U.S. uranium miners, Ontario uranium miners and Czechoslovakian uranium miners are argued to provide the best basis for risk estimation at this time. In general terms, none of the analyses performed are inconsistent with a linear exposure-effect relation. Based on analyses of the overall miner groups, the most likely ranges

  11. Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Habte, A.; Sengupta, M.; Reda, I.; Andreas, A.; Konings, J.

    2014-11-01

    Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty in Measurement (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.
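The core GUM recipe referenced here combines independent standard uncertainty components in quadrature and then applies a coverage factor. A minimal sketch with hypothetical component values (invented for illustration, not an actual radiometer budget):

```python
import math

# Hypothetical standard uncertainty components (in percent) for a pyranometer
# calibration -- illustrative values only, not an actual instrument budget.
type_a = [0.30]                  # Type A: statistics of repeated readings
type_b = [0.50, 0.20, 0.10]      # Type B: reference instrument, data logger, setup

# GUM: combine independent standard uncertainties in quadrature, then apply
# a coverage factor k = 2 for an ~95% expanded uncertainty.
u_c = math.sqrt(sum(u**2 for u in type_a + type_b))
U = 2.0 * u_c
print(f"combined u_c = {u_c:.3f}%, expanded U (k=2) = {U:.3f}%")
```

Note that quadrature addition assumes the components are uncorrelated; correlated components require covariance terms per the GUM.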

  12. Major Results of the OECD BEMUSE (Best Estimate Methods; Uncertainty and Sensitivity Evaluation) Programme

    International Nuclear Information System (INIS)

    Reventos, F.

    2008-01-01

    One of the goals of computer code models of Nuclear Power Plants (NPP) is to demonstrate that these plants are designed to respond safely to postulated accidents. Models and codes are an approximation of the real physical behaviour occurring during a hypothetical transient, and the data used to build these models are also known only with a certain accuracy. Therefore code predictions are uncertain. The BEMUSE programme is focussed on the application of uncertainty methodologies to large break LOCAs. The programme intends to evaluate the practicability, quality and reliability of best-estimate methods including uncertainty evaluations in applications relevant to nuclear reactor safety, to develop common understanding, and to promote/facilitate their use by regulatory bodies and industry. In order to fulfil its objectives, BEMUSE is organized in two steps and six phases. The first step is devoted to the complete analysis of a LB-LOCA (L2-5) in an experimental facility (LOFT), while the second step refers to an actual Nuclear Power Plant. Both steps provide results on thermalhydraulic best-estimate simulation as well as uncertainty and sensitivity evaluation. At the time this paper was prepared, phases I, II and III were fully completed and the corresponding reports had been issued. The Phase IV draft report is currently being reviewed, while participants are working on Phase V developments. Phase VI consists in preparing the final status report, which will summarize the most relevant results of the whole programme.

  13. Hierarchical Bayesian analysis to incorporate age uncertainty in growth curve analysis and estimates of age from length: Florida manatee (Trichechus manatus) carcasses

    Science.gov (United States)

    Schwarz, L.K.; Runge, M.C.

    2009-01-01

    Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then employed for Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.

  14. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

    Full Text Available This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted for both modeling and fault estimation; the validity and effectiveness of the method are verified by a series of comparisons of numerical simulation results.

  15. Uncertainty estimation of shape and roughness measurement

    NARCIS (Netherlands)

    Morel, M.A.A.

    2006-01-01

    One of the most common techniques to measure a surface or form is mechanical probing. Although the technique has been in use since the early 1930s, a method to calculate a task-specific uncertainty budget had not yet been devised. Guidelines and statistical estimates are common in certain cases but an

  16. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
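For some of the statistics listed, interval data admit exact bounds: the sample mean and median of a set of intervals are themselves intervals, obtained from the interval endpoints. A small sketch with made-up interval measurements:

```python
# Each measurement is known only as an interval [lo, hi] -- epistemic
# uncertainty with no distribution assumed. Data are made up for illustration.
data = [(1.0, 1.4), (0.8, 1.1), (1.2, 1.9), (0.9, 1.3), (1.1, 1.6)]
n = len(data)

# The sample mean of interval data is itself an interval: any selection of
# true values inside the intervals produces a mean within these bounds.
mean_lo = sum(lo for lo, _ in data) / n
mean_hi = sum(hi for _, hi in data) / n

# The median (odd n) is likewise bracketed by the medians of the endpoints.
median_lo = sorted(lo for lo, _ in data)[n // 2]
median_hi = sorted(hi for _, hi in data)[n // 2]

print(f"mean   in [{mean_lo:.2f}, {mean_hi:.2f}]")
print(f"median in [{median_lo:.2f}, {median_hi:.2f}]")
```

The mean and median are monotone in each observation, which is why endpoint substitution gives exact bounds; statistics such as the variance are not monotone, and, as the report discusses, their bounds can be much harder to compute.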

  17. Uncertainty Estimation of Neutron Activation Analysis in Zinc Elemental Determination in Food Samples

    International Nuclear Information System (INIS)

    Endah Damastuti; Muhayatun; Diah Dwiana L

    2009-01-01

    Besides fulfilling the requirements of the international standard ISO/IEC 17025:2005, uncertainty estimation should be carried out to increase the quality and confidence of analysis results and also to establish traceability of the analysis results to SI units. Neutron activation analysis is a major technique used by the radiometry technique analysis laboratory and is included in its scope of accreditation under ISO/IEC 17025:2005; therefore, uncertainty estimation of neutron activation analysis needs to be carried out. Sample and standard preparation, as well as irradiation and measurement using gamma spectrometry, were the main activities that could contribute to uncertainty. The components of the uncertainty sources are explained in detail. The expanded uncertainty was 4.0 mg/kg with a level of confidence of 95% (coverage factor = 2), and the Zn concentration was 25.1 mg/kg. Counting statistics of the sample and standard were the major contributors to the combined uncertainty. The uncertainty estimation is expected to increase the quality of the analysis results and could be applied further to other kinds of samples. (author)

  18. Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.

    Science.gov (United States)

    Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja

    2015-06-01

    Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated measurement uncertainty of the concentration of glucose according to CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach. We identified the sources of uncertainty and made an uncertainty budget and assessed the measurement functions. We determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using certified reference material of glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up approach and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
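A top-down estimate of the kind described, within-laboratory imprecision from IQC data combined with a bias term from a certified reference material and expanded with k = 2, can be sketched as follows. The IQC values and certified value are invented for illustration, not the study's data:

```python
import math

# Hypothetical 6 months of IQC results for glucose (mmol/L) plus a certified
# reference material value -- invented numbers in the spirit of the abstract.
iqc = [5.61, 5.52, 5.58, 5.49, 5.63, 5.55, 5.60, 5.53, 5.57, 5.62]
crm_value, crm_u = 5.57, 0.03        # certified value, its standard uncertainty

n = len(iqc)
mean = sum(iqc) / n
u_rw = math.sqrt(sum((x - mean)**2 for x in iqc) / (n - 1))  # within-lab SD
bias = mean - crm_value
u_bias = math.sqrt(bias**2 + crm_u**2)                       # bias uncertainty

# Top-down combined standard uncertainty and expanded uncertainty (k = 2).
u_c = math.sqrt(u_rw**2 + u_bias**2)
U = 2.0 * u_c
print(f"glucose {mean:.2f} mmol/L, expanded uncertainty +/-{U:.2f} mmol/L (k=2)")
```

This captures why the abstract calls the top-down route "simpler": it needs only routine QC data and a reference material, not a component-by-component budget of the measurement procedure.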

  19. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches, simulation scenarios were developed: we combined the estimated spatial forest age maps and the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  20. Estimating the measurement uncertainty in forensic blood alcohol analysis.

    Science.gov (United States)

    Gullberg, Rod G

    2012-04-01

    For many reasons, forensic toxicologists are being asked to determine and report their measurement uncertainty in blood alcohol analysis. While understood conceptually, the elements and computations involved in determining measurement uncertainty are generally foreign to most forensic toxicologists. Several established and well-documented methods are available to determine and report the uncertainty in blood alcohol measurement. A straightforward bottom-up approach is presented that includes: (1) specifying the measurand, (2) identifying the major components of uncertainty, (3) quantifying the components, (4) statistically combining the components and (5) reporting the results. A hypothetical example is presented that employs reasonable estimates for forensic blood alcohol analysis assuming headspace gas chromatography. These computations are easily employed in spreadsheet programs as well. Determining and reporting measurement uncertainty is an important element in establishing fitness-for-purpose. Indeed, the demand for such computations and information from the forensic toxicologist will continue to increase.

  1. Different top-down approaches to estimate measurement uncertainty of whole blood tacrolimus mass concentration values.

    Science.gov (United States)

    Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca

    2018-05-08

    Values of mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective. Clinical laboratories must therefore provide results that are as accurate as possible, and measurement uncertainty can help ensure the reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with the intermediate imprecision (using long-term internal quality control data) and the bias (using a certified reference material). Next, we combined them together with the uncertainties related to the calibrator-assigned values to obtain a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way to the single laboratory validation approach, but considering data from internal and external quality control schemes to estimate the uncertainty related to the bias. The estimated expanded uncertainties for the single laboratory validation approach and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar. This would confirm that either of the two approaches could be used to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  2. Leaf area index uncertainty estimates for model-data fusion applications

    Science.gov (United States)

    Andrew D. Richardson; D. Bryan Dail; D.Y. Hollinger

    2011-01-01

    Estimates of data uncertainties are required to integrate different observational data streams as model constraints using model-data fusion. We describe an approach with which random and systematic uncertainties in optical measurements of leaf area index [LAI] can be quantified. We use data from a measurement campaign at the spruce-dominated Howland Forest AmeriFlux...

  3. Effect of Uncertainties in Physical Property Estimates on Process Design - Sensitivity Analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Jones, Mark Nicholas; Sin, Gürkan

    Chemical process design calculations require accurate and reliable physical and thermodynamic property data and property models of pure components and their mixtures in order to obtain reliable design parameters which help to achieve desired specifications. The uncertainties in the property values can arise from the experiments themselves or from the property models employed. It is important to consider the effect of these uncertainties on the process design in order to assess the quality and reliability of the final design. The main objective of this work is to develop a systematic methodology for performing sensitivity analysis of process design subject to uncertainties in the property estimates. To this end, uncertainty analysis of the property models of pure components and their mixtures was first performed in order to obtain the uncertainties in the estimated property values. As a next step, sensitivity…

  4. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    Directory of Open Access Journals (Sweden)

    Janet L. Rachlow

    2013-08-01

    Full Text Available United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.

  5. Assessment of uncertainties of external dose estimation after the Chernobyl accident

    International Nuclear Information System (INIS)

    Kruk, Julianna

    2008-01-01

    Full text: In the remote period after the Chernobyl accident, estimation of external exposure using direct dose rate measurements or individual monitoring of inhabitants is rational only for settlements where the preliminary estimate is equal to or greater than 1.0 mSv per year. For inhabitants of settlements where the preliminary estimate is less than 1.0 mSv per year, the external dose is more correctly estimated by calculation, and in these cases the uncertainty should be assessed. The most accessible initial parameter for calculating the external exposure dose is the average ground deposition of Cs-137 for the settlement. The character of the density distribution of Cs-137 deposition within the area of a single settlement is well studied. For practically all settlements of the investigated territories, the distribution of this parameter agrees best with a log-normal distribution, with a coefficient of variation of 0.3-0.6 and a geometric standard deviation within the limits of 1.4-1.7. The dose factors, which correspond to the structure of the available housing of a settlement (type of apartment houses: wooden, stone, multi-storey) and the age structure of the population, make the main contribution to the uncertainty of the external dose estimate. Situations with different levels of available information have been considered in order to estimate the influence of those parameters on the overall uncertainty. The uncertainty of the external dose was thus estimated for two variants: optimistic and pessimistic. In the optimistic case, the estimation of external doses is carried out for a specific settlement with a known structure of housing and a known share of the population living in houses of each type. 
In that case, the variability of the dose factor will be limited to the chosen type of residential building (for example, the one-storied wooden house), and a share of the living population

  6. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    Science.gov (United States)

    Thomsen, N. I.; Troldborg, M.; McKnight, U. S.; Binning, P. J.; Bjerg, P. L.

    2012-04-01

    Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1) leave as is, (2) clean up, or (3) further investigation needed. However, mass discharge estimates are often very uncertain, which may hamper the management decisions. If option 1 is incorrectly chosen, soil and water quality will decrease, threatening or destroying drinking water resources. The risk of choosing option 2 is to spend money on remediating a site that does not pose a problem. Choosing option 3 will often be safest, but may not be the optimal economic solution. Quantification of the uncertainty in mass discharge estimates can therefore greatly improve the foundation for selecting the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models where each model is believed to be a realistic representation of the given site, based on the current level of information. Parameter uncertainty is quantified using Monte Carlo simulations. For each conceptual model we calculate a transient mass discharge estimate with uncertainty bounds resulting from
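The multi-model Monte Carlo scheme described above can be sketched as follows. The two conceptual models, their prior probabilities, and all parameter ranges below are invented for illustration and are not taken from the study:

```python
import random

random.seed(42)

def mass_discharge(k, i, c, width, depth):
    # Simplified Darcy-type estimate: flux x concentration x source cross-section
    return k * i * c * width * depth

# Hypothetical conceptual models with prior model probabilities and parameter ranges
conceptual_models = {
    "sand_aquifer": {"prior": 0.6, "k": (30.0, 60.0), "c": (0.5, 2.0)},
    "clay_windows": {"prior": 0.4, "k": (1.0, 10.0), "c": (1.0, 4.0)},
}

n = 5000
pooled = []
for name, m in conceptual_models.items():
    # Parameter uncertainty per conceptual model via Monte Carlo sampling
    draws = [mass_discharge(random.uniform(*m["k"]), 0.005,
                            random.uniform(*m["c"]), 10.0, 2.0)
             for _ in range(n)]
    # Weight each model's contribution by its prior model probability
    pooled.extend(random.sample(draws, int(m["prior"] * n)))

pooled.sort()
lo, med, hi = (pooled[int(q * len(pooled))] for q in (0.05, 0.50, 0.95))
print(f"mass discharge percentiles (5/50/95): {lo:.1f} / {med:.1f} / {hi:.1f}")
```

The pooled percentiles combine conceptual (between-model) and parameter (within-model) uncertainty in one distribution.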

  7. Uncertainty analysis methods for estimation of reliability of passive system of VHTR

    International Nuclear Information System (INIS)

    Han, S.J.

    2012-01-01

    An estimation of the reliability of passive systems for the probabilistic safety assessment (PSA) of a very high temperature reactor (VHTR) is under development in Korea. The essential approach of this estimation is to measure the uncertainty of the system performance under a specific accident condition. The uncertainty propagation approach, based on the simulation of phenomenological models (computer codes), is adopted as a typical method to estimate the uncertainty for this purpose. This presentation introduces the uncertainty propagation and discusses the related issues, focusing on the propagation object and its surrogates. To achieve a sufficient depth of uncertainty results, the applicability of the propagation should be carefully reviewed. As an example study, the Latin hypercube sampling (LHS) method was tested as a direct propagation for a specific accident sequence of a VHTR. The reactor cavity cooling system (RCCS) developed by KAERI was considered for this example study. This is an air-cooled passive system that has no active components for its operation. The accident sequence is a low pressure conduction cooling (LPCC) accident, which is considered a design basis accident for the safety design of the VHTR. This sequence results from a large failure of the pressure boundary of the reactor system, such as a guillotine break of the coolant pipe lines. The presentation discusses the insights obtained (benefits and weaknesses) in applying the method to the estimation of passive system reliability
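A minimal stdlib sketch of Latin hypercube sampling as a direct propagation method. The two-parameter `peak_temp` surrogate is a toy stand-in, not the RCCS/VHTR phenomenological code:

```python
import random

random.seed(1)

def latin_hypercube(n, dims):
    """n points in [0, 1)^dims with exactly one point per stratum in each dimension."""
    cols = []
    for _ in range(dims):
        # one uniform draw inside each of the n equal-width strata, then shuffle
        strata = [(i + random.random()) / n for i in range(n)]
        random.shuffle(strata)
        cols.append(strata)
    return list(zip(*cols))

def peak_temp(u):
    # Toy two-parameter surrogate (power, emissivity); purely illustrative
    power = 300.0 + 100.0 * u[0]
    emissivity = 0.7 + 0.2 * u[1]
    return 600.0 + power / (2.0 * emissivity)

samples = latin_hypercube(100, 2)
temps = sorted(peak_temp(u) for u in samples)
print(f"95th-percentile peak temperature: {temps[94]:.0f}")
```

The stratification guarantees full marginal coverage of each input with far fewer runs than plain random sampling.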

  8. Uncertainties in effective dose estimates of adult CT head scans: The effect of head size

    International Nuclear Information System (INIS)

    Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E.

    2009-01-01

    Purpose: This study is an extension of a previous study where the uncertainties in effective dose estimates from adult CT head scans were calculated using four CT effective dose estimation methods, three of which were computer programs (CT-EXPO, CTDOSIMETRY, and IMPACTDOSE) and one that involved the dose length product (DLP). However, that study did not include the uncertainty contribution due to variations in head sizes. Methods: The uncertainties due to head size variations were estimated by first using the computer program data to calculate doses to small and large heads. These doses were then compared with doses calculated for the phantom heads used by the computer programs. An uncertainty was then assigned based on the difference between the small and large head doses and the doses of the phantom heads. Results: The uncertainties due to head size variations alone were found to be between 4% and 26% depending on the method used and the patient gender. When these uncertainties were included with the results of the previous study, the overall uncertainties in effective dose estimates (stated at the 95% confidence interval) were 20%-31% (CT-EXPO), 15%-30% (CTDOSIMETRY), 20%-36% (IMPACTDOSE), and 31%-40% (DLP). Conclusions: For the computer programs, the lower overall uncertainties were still achieved when measured values of CT dose index were used rather than tabulated values. For DLP dose estimates, head size variations made the largest (for males) and second largest (for females) contributions to effective dose uncertainty. An improvement in the uncertainty of the DLP method dose estimates will be achieved if head size variation can be taken into account.
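The overall uncertainties quoted above are consistent with combining independent relative components in quadrature. A sketch with assumed component values (the 7%, 5% and 10% figures are hypothetical, not from the study):

```python
import math

def combine_quadrature(components):
    """Combine independent relative standard uncertainties (%) in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical relative uncertainties (%): calibration, scan parameters, head size
components = [7.0, 5.0, 10.0]
k = 2.0  # coverage factor for an approximately 95% confidence interval
expanded = k * combine_quadrature(components)
print(f"expanded uncertainty: {expanded:.0f}%")
```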

  9. Uncertainties in effective dose estimates of adult CT head scans: The effect of head size

    Energy Technology Data Exchange (ETDEWEB)

    Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E. [Department of Medical Physics, Royal Adelaide Hospital, Adelaide, South Australia 5000 (Australia) and School of Electrical and Information Engineering (Applied Physics), University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Division of Medical Imaging, Women's and Children's Hospital, North Adelaide, South Australia 5006 (Australia) and School of Electrical and Information Engineering (Applied Physics), University of South Australia, Mawson Lakes, South Australia 5095 (Australia); School of Electrical and Information Engineering (Applied Physics), University of South Australia, Mawson Lakes, South Australia 5095 (Australia)

    2009-09-15

    Purpose: This study is an extension of a previous study where the uncertainties in effective dose estimates from adult CT head scans were calculated using four CT effective dose estimation methods, three of which were computer programs (CT-EXPO, CTDOSIMETRY, and IMPACTDOSE) and one that involved the dose length product (DLP). However, that study did not include the uncertainty contribution due to variations in head sizes. Methods: The uncertainties due to head size variations were estimated by first using the computer program data to calculate doses to small and large heads. These doses were then compared with doses calculated for the phantom heads used by the computer programs. An uncertainty was then assigned based on the difference between the small and large head doses and the doses of the phantom heads. Results: The uncertainties due to head size variations alone were found to be between 4% and 26% depending on the method used and the patient gender. When these uncertainties were included with the results of the previous study, the overall uncertainties in effective dose estimates (stated at the 95% confidence interval) were 20%-31% (CT-EXPO), 15%-30% (CTDOSIMETRY), 20%-36% (IMPACTDOSE), and 31%-40% (DLP). Conclusions: For the computer programs, the lower overall uncertainties were still achieved when measured values of CT dose index were used rather than tabulated values. For DLP dose estimates, head size variations made the largest (for males) and second largest (for females) contributions to effective dose uncertainty. An improvement in the uncertainty of the DLP method dose estimates will be achieved if head size variation can be taken into account.

  10. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    DEFF Research Database (Denmark)

    Thomsen, Nanna Isbak; Troldborg, Mads; McKnight, Ursula S.

    2012-01-01

    Quantification of the uncertainty in mass discharge estimates can therefore greatly improve the foundation for selecting the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models where each model is believed to be a realistic representation of the given site, based on the current level of information.

  11. Quantifying phenomenological importance in best-estimate plus uncertainty analyses

    International Nuclear Information System (INIS)

    Martin, Robert P.

    2009-01-01

    This paper describes a general methodology for quantifying the importance of specific phenomenological elements to analysis measures evaluated from non-parametric best-estimate plus uncertainty evaluation methodologies. The principal objective of an importance analysis is to reveal those uncertainty contributors having the greatest influence on key analysis measures. This characterization supports the credibility of the uncertainty analysis, the applicability of the analytical tools, and even the generic evaluation methodology through the validation of the engineering judgments that guided the evaluation methodology development. A demonstration of the importance analysis is provided using data from a sample problem considered in the development of AREVA's Realistic LBLOCA methodology. The results are presented against the original large-break LOCA Phenomena Identification and Ranking Table developed by the Technical Program Group responsible for authoring the Code Scaling, Applicability and Uncertainty methodology. (author)
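One common way to quantify phenomenological importance from non-parametric samples is the rank (Spearman) correlation between each uncertain input and the analysis measure. A stdlib sketch with an invented three-input toy model (the names `cd`, `gap_k`, `power` are hypothetical, not from the AREVA methodology):

```python
import random

random.seed(0)

def rank(xs):
    order = sorted(range(len(xs)), key=xs.__getitem__)
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Rank correlation: a robust importance measure for non-parametric samples
    return pearson(rank(x), rank(y))

# Toy analysis measure dominated by the first input
n = 500
inputs = {name: [random.random() for _ in range(n)] for name in ("cd", "gap_k", "power")}
output = [3.0 * a + 0.5 * b + 0.1 * c + random.gauss(0.0, 0.1)
          for a, b, c in zip(inputs["cd"], inputs["gap_k"], inputs["power"])]

importance = {name: spearman(vals, output) for name, vals in inputs.items()}
ranked = sorted(importance, key=lambda name: abs(importance[name]), reverse=True)
print("importance ranking:", ranked)
```

The ranking reveals which uncertainty contributor most influences the analysis measure, the stated objective of an importance analysis.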

  12. Epistemic uncertainties when estimating component failure rate

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Mavko, B.; Kljenak, I.

    2000-01-01

    A method for the specific estimation of a component failure rate, based on specific quantitative and qualitative data other than component failures, was developed and is described in this paper. The basis of the method is the Bayesian updating procedure. A prior distribution is selected from a generic database, whereas the likelihood is built using fuzzy logic theory. With the proposed method, the component failure rate estimation is based on a much larger quantity of information than with the presently used classical methods. Consequently, epistemic uncertainties, which are caused by lack of knowledge about a component or phenomenon, are reduced. (author)
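The Bayesian updating at the core of such methods can be illustrated with the standard gamma-Poisson conjugate update, here used as a simplified stand-in for the fuzzy-logic likelihood described in the abstract; the prior parameters are assumed values, not from a real generic database:

```python
def update_failure_rate(alpha, beta, failures, hours):
    """Gamma(alpha, beta) prior with Poisson evidence -> Gamma(alpha + f, beta + T)."""
    return alpha + failures, beta + hours

# Assumed generic prior: mean 1e-5 per hour, shape 0.5 (illustrative values only)
alpha0, beta0 = 0.5, 0.5 / 1.0e-5
a, b = update_failure_rate(alpha0, beta0, failures=2, hours=3.0e5)
print(f"posterior mean failure rate: {a / b:.2e} per hour")
```

The posterior mean `a / b` blends the generic prior with the component-specific evidence, narrowing the epistemic uncertainty as operating experience accumulates.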

  13. Uncertainties in Steric Sea Level Change Estimation During the Satellite Altimeter Era: Concepts and Practices

    Science.gov (United States)

    MacIntosh, C. R.; Merchant, C. J.; von Schuckmann, K.

    2017-01-01

    This article presents a review of current practice in estimating steric sea level change, focussed on the treatment of uncertainty. Steric sea level change is the contribution to the change in sea level arising from the dependence of density on temperature and salinity. It is a significant component of sea level rise and a reflection of changing ocean heat content. However, tracking these steric changes still remains a significant challenge for the scientific community. We review the importance of understanding the uncertainty in estimates of steric sea level change. Relevant concepts of uncertainty are discussed and illustrated with the example of observational uncertainty propagation from a single profile of temperature and salinity measurements to steric height. We summarise and discuss the recent literature on methodologies and techniques used to estimate steric sea level in the context of the treatment of uncertainty. Our conclusions are that progress in quantifying steric sea level uncertainty will benefit from: greater clarity and transparency in published discussions of uncertainty, including exploitation of international standards for quantifying and expressing uncertainty in measurement; and the development of community "recipes" for quantifying the error covariances in observations and from sparse sampling and for estimating and propagating uncertainty across spatio-temporal scales.

  14. Statistical Methods for Estimating the Uncertainty in the Best Basis Inventories

    International Nuclear Information System (INIS)

    WILMARTH, S.R.

    2000-01-01

    This document describes the statistical methods used to determine sample-based uncertainty estimates for the Best Basis Inventory (BBI). For each waste phase, the equation for the inventory of an analyte in a tank is Inventory (kg or Ci) = Concentration x Density x Waste Volume. The total inventory is the sum of the inventories in the different waste phases. Using tank sample data, statistical methods are used to obtain estimates of the mean concentration of an analyte, the density of the waste, and their standard deviations. The volumes of waste in the different phases, and their standard deviations, are estimated based on other types of data. The three estimates are multiplied to obtain the inventory estimate. The standard deviations are combined to obtain a standard deviation of the inventory. The uncertainty estimate for the Best Basis Inventory (BBI) is the approximate 95% confidence interval on the inventory.
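The inventory equation and the combination of standard deviations can be sketched with first-order variance propagation for a product of independent estimates. All tank values below are hypothetical:

```python
import math

def inventory_with_uncertainty(conc, sd_conc, dens, sd_dens, vol, sd_vol):
    """Inventory = concentration x density x volume, with relative standard
    deviations combined in quadrature (first-order propagation, independence assumed)."""
    inv = conc * dens * vol
    rel = math.sqrt((sd_conc / conc) ** 2 + (sd_dens / dens) ** 2 + (sd_vol / vol) ** 2)
    sd = inv * rel
    return inv, (inv - 1.96 * sd, inv + 1.96 * sd)  # approximate 95% interval

# Hypothetical single-phase tank values (assumed unit-consistent)
inv, (lo, hi) = inventory_with_uncertainty(120.0, 12.0, 1.4, 0.05, 2.0e3, 100.0)
print(f"inventory = {inv:.3g}, approximate 95% CI = ({lo:.3g}, {hi:.3g})")
```

Per-phase inventories and variances would then be summed to give the total inventory and its combined standard deviation.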

  15. Estimation of Model Uncertainties in Closed-loop Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    This paper describes a method for estimation of parameters or uncertainties in closed-loop systems. The method is based on an application of the dual YJBK (after Youla, Jabr, Bongiorno and Kucera) parameterization of all systems stabilized by a given controller. The dual YJBK transfer function...

  16. Review of best estimate plus uncertainty methods of thermal-hydraulic safety analysis

    International Nuclear Information System (INIS)

    Prosek, A.; Mavko, B.

    2003-01-01

    In 1988 the United States Nuclear Regulatory Commission approved the revised rule on the acceptance of emergency core cooling system (ECCS) performance. Since then there has been significant interest in the development of codes and methodologies for best-estimate loss-of-coolant accident (LOCA) analyses. Several new best estimate plus uncertainty (BEPU) methods were developed in the world. The purpose of the paper is to review the developments in the direction of best estimate approaches with uncertainty quantification and to discuss the problems in practical applications of BEPU methods. In general, the licensee methods follow the original methods. The study indicated that uncertainty analysis with random sampling of input parameters and the use of order statistics for desired tolerance limits of output parameters is today a commonly accepted and mature approach. (author)
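The order-statistics approach mentioned above is commonly implemented with Wilks' formula. A minimal sketch of the first-order, one-sided sample-size calculation:

```python
def wilks_sample_size(gamma=0.95, beta=0.95):
    """Smallest n with 1 - gamma**n >= beta: the largest of n random code runs
    bounds the gamma-quantile of the output with confidence beta (one-sided,
    first-order Wilks' formula)."""
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n

print("runs for a 95/95 one-sided tolerance limit:", wilks_sample_size())
```

This yields the familiar 59-run result for a 95/95 one-sided tolerance limit; higher-order formulations require additional runs.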

  17. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    Science.gov (United States)

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

    The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates based on a literature review, an evaluation of publicly funded projects such as part of the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lacking guidance grounded in theory for the selection of uncertainty quantification metrics and lacking practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis

  18. An Integrated Approach for Characterization of Uncertainty in Complex Best Estimate Safety Assessment

    International Nuclear Information System (INIS)

    Pourgol-Mohamad, Mohammad; Modarres, Mohammad; Mosleh, Ali

    2013-01-01

    This paper discusses an approach called Integrated Methodology for Thermal-Hydraulics Uncertainty Analysis (IMTHUA) to characterize and integrate a wide range of uncertainties associated with the best estimate models and complex system codes used for nuclear power plant safety analyses. Examples of applications include complex thermal hydraulic and fire analysis codes. In identifying and assessing uncertainties, the proposed methodology treats the complex code as a 'white box', thus explicitly treating internal sub-model uncertainties in addition to the uncertainties related to the inputs to the code. The methodology accounts for uncertainties related to experimental data used to develop such sub-models, and efficiently propagates all uncertainties during best estimate calculations. Uncertainties are formally analyzed and probabilistically treated using a Bayesian inference framework. This comprehensive approach presents the results in a form usable in most other safety analyses such as the probabilistic safety assessment. The code output results are further updated through additional Bayesian inference using any available experimental data, for example from thermal hydraulic integral test facilities. The approach includes provisions to account for uncertainties associated with user-specified options, for example for choices among alternative sub-models, or among several different correlations. Complex time-dependent best-estimate calculations are computationally intense. The paper presents approaches to minimize computational intensity during the uncertainty propagation. Finally, the paper will report effectiveness and practicality of the methodology with two applications to a complex thermal-hydraulics system code as well as a complex fire simulation code. In case of multiple alternative models, several techniques, including dynamic model switching, user-controlled model selection, and model mixing, are discussed. (authors)

  19. Uncertainty analysis for effluent trading planning using a Bayesian estimation-based simulation-optimization modeling approach.

    Science.gov (United States)

    Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J

    2017-06-01

    In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, which provides the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries and the associated system risk through incorporating the concept of possibility and necessity measures. The possibility and necessity measures are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results can not only facilitate identification of optimal effluent-trading schemes, but also provide insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk would affect decision alternatives on the trading scheme as well as the system benefit. Compared with conventional optimization methods, BESMA is shown to be advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties existing in nutrient transport behaviors to improve the accuracy of water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision

  20. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Elert, M. [Kemakta Konsult AB, Stockholm (Sweden)] [ed.

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead, simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Several issues came into focus in this study: there are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  1. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
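A minimal Gaussian process interpolation sketch showing how the posterior variance grows between base-grid points, the effect the registration method exploits. The RBF kernel, the tiny noise jitter, and the 1D grid values are all assumptions for illustration:

```python
import math

def rbf(a, b, ell=1.0):
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-6):
    """Posterior mean and variance of a GP at query point xq (RBF kernel)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k = [rbf(x, xq) for x in xs]
    alpha = solve(K, ys)     # K^{-1} y
    v = solve(K, k)          # K^{-1} k
    mean = sum(ki * ai for ki, ai in zip(k, alpha))
    var = rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k, v))
    return mean, var

# Base-grid intensities: interpolation variance is near zero on the grid,
# larger between grid points
xs, ys = [0.0, 1.0, 2.0, 3.0], [0.1, 0.8, 0.5, 0.2]
for xq in (1.0, 1.5):
    m, v = gp_posterior(xs, ys, xq)
    print(f"x={xq}: mean={m:.3f}, interpolation variance={v:.2e}")
```

The posterior variance is exactly the location-dependent interpolation uncertainty that the paper marginalizes over in the registration model.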

  2. The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.

    Energy Technology Data Exchange (ETDEWEB)

    Poppeliers, Christian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.

  3. Uncertainties estimation in surveying measurands: application to lengths, perimeters and areas

    Science.gov (United States)

    Covián, E.; Puente, V.; Casero, M.

    2017-10-01

    The present paper develops a series of methods for the estimation of uncertainty when measuring certain measurands of interest in surveying practice, such as point elevations at a given planimetric position within a triangle mesh, 2D and 3D lengths (including perimeter enclosures), 2D areas (horizontal surfaces) and 3D areas (natural surfaces). The basis for the proposed methodology is the law of propagation of variance-covariance, which, applied to the corresponding model for each measurand, allows calculating the resulting uncertainty from known measurement errors. The methods are tested first in a small example, with a limited number of measurement points, and then in two real-life measurements. In addition, the proposed methods have been incorporated to commercial software used in the field of surveying engineering and focused on the creation of digital terrain models. The aim of this evolution is, firstly, to comply with the guidelines of the BIPM (Bureau International des Poids et Mesures), as the international reference agency in the field of metrology, in relation to the determination and expression of uncertainty; and secondly, to improve the quality of the measurement by indicating the uncertainty associated with a given level of confidence. The conceptual and mathematical developments for the uncertainty estimation in the aforementioned cases were conducted by researchers from the AssIST group at the University of Oviedo, eventually resulting in several different mathematical algorithms implemented in the form of MATLAB code. Based on these prototypes, technicians incorporated the referred functionality to commercial software, developed in C++. As a result of this collaboration, in early 2016 a new version of this commercial software was made available, which will be the first, as far as the authors are aware, that incorporates the possibility of estimating the uncertainty for a given level of confidence when computing the aforementioned surveying
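Applying the law of propagation of variance-covariance to a 2D length is a short Jacobian exercise. A sketch assuming independent, isotropic coordinate uncertainties at each endpoint (the coordinates and uncertainty values are invented):

```python
import math

def length_uncertainty(p, q, sp, sq):
    """Length of segment p-q and its standard uncertainty, from independent,
    isotropic coordinate uncertainties sp and sq at the two endpoints."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    L = math.hypot(dx, dy)
    # Law of propagation of variance with Jacobian (dx/L, dy/L) per endpoint
    var = ((dx / L) ** 2 + (dy / L) ** 2) * (sp ** 2 + sq ** 2)
    return L, math.sqrt(var)

L, sL = length_uncertainty((0.0, 0.0), (30.0, 40.0), 0.02, 0.02)
print(f"L = {L:.2f} m, u(L) = {sL:.4f} m")
```

For a straight segment the direction cosines sum to one, so u(L) reduces to sqrt(sp^2 + sq^2); for a perimeter the per-segment variances are accumulated, with covariance terms at shared vertices.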

  4. Estimating annual bole biomass production using uncertainty analysis

    Science.gov (United States)

    Travis J. Woolley; Mark E. Harmon; Kari B. O'Connell

    2007-01-01

    Two common sampling methodologies coupled with a simple statistical model were evaluated to determine the accuracy and precision of annual bole biomass production (BBP) and inter-annual variability estimates using this type of approach. We performed an uncertainty analysis using Monte Carlo methods in conjunction with radial growth core data from trees in three Douglas...

  5. Estimation of plant sampling uncertainty: an example based on chemical analysis of moss samples.

    Science.gov (United States)

    Dołęgowska, Sabina

    2016-11-01

    In order to estimate the level of uncertainty arising from sampling, 54 samples (primary and duplicate) of the moss species Pleurozium schreberi (Brid.) Mitt. were collected within three forested areas (Wierna Rzeka, Piaski, Posłowice Range) in the Holy Cross Mountains (south-central Poland). During the fieldwork, each primary sample composed of 8 to 10 increments (subsamples) was taken over an area of 10 m², whereas duplicate samples were collected in the same way at a distance of 1-2 m. Subsequently, all samples were triple rinsed with deionized water, dried, milled, and digested (8 mL HNO₃ (1:1) + 1 mL 30% H₂O₂) in a closed microwave system Multiwave 3000. The prepared solutions were analyzed twice for Cu, Fe, Mn, and Zn using FAAS and GFAAS techniques. All datasets were checked for normality, and for normally distributed elements (Cu from Piaski, Zn from Posłowice, Fe and Zn from Wierna Rzeka) the sampling uncertainty was computed with (i) classical ANOVA, (ii) classical RANOVA, (iii) modified RANOVA, and (iv) range statistics. For the remaining elements, the sampling uncertainty was calculated with traditional and/or modified RANOVA (if the amount of outliers did not exceed 10%) or classical ANOVA after Box-Cox transformation (if the amount of outliers exceeded 10%). The highest concentrations of all elements were found in moss samples from Piaski, whereas the sampling uncertainty calculated with different statistical methods ranged from 4.1 to 22%.
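Sampling uncertainty from primary/duplicate pairs is often estimated from the paired differences, s^2 = sum(d^2) / (2n), the classical duplicate-method estimate of the combined sampling-plus-analysis variance. A sketch with invented Cu concentrations (not data from the study):

```python
import math

def duplicate_pair_uncertainty(pairs):
    """Standard uncertainty from primary/duplicate pairs: s^2 = sum(d^2) / (2n).
    Returns the absolute uncertainty and the relative uncertainty in percent."""
    n = len(pairs)
    s2 = sum((a - b) ** 2 for a, b in pairs) / (2.0 * n)
    mean = sum(a + b for a, b in pairs) / (2.0 * n)
    return math.sqrt(s2), 100.0 * math.sqrt(s2) / mean

# Hypothetical Cu concentrations (mg/kg) for primary/duplicate moss samples
pairs = [(4.2, 4.6), (5.1, 4.8), (3.9, 4.4), (4.7, 4.5), (5.0, 5.3)]
s, rel = duplicate_pair_uncertainty(pairs)
print(f"sampling+analysis uncertainty: {s:.2f} mg/kg ({rel:.1f}%)")
```

Separating the sampling component from the analytical component additionally requires the duplicate chemical analyses, which is what the nested ANOVA/RANOVA designs in the abstract provide.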

  6. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Science.gov (United States)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of an input or model architecture uncertainty in model outputs. Different sets of parameters could have equally robust goodness-of-fit indicators, which is known as Equifinality. We assessed the outputs from a lumped conceptual hydrological model to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the Equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. Then, we analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³ s⁻¹ after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5.
Despite the criticisms against the GLUE methodology, such as the lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
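A minimal GLUE sketch: sample parameter sets, keep those whose informal likelihood measure (here Nash-Sutcliffe efficiency) exceeds a behavioural threshold, and take percentile bounds across the behavioural simulations. The one-parameter runoff model and all data are synthetic:

```python
import random

random.seed(3)

def model(rain, a):
    """Toy one-parameter runoff model: runoff = a * rain."""
    return [a * r for r in rain]

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, used here as the informal GLUE likelihood."""
    mo = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mo) ** 2 for o in obs)
    return 1.0 - num / den

rain = [10, 40, 25, 60, 5, 35]
obs = [0.52 * r + random.gauss(0.0, 1.0) for r in rain]  # synthetic "observations"

# GLUE: Monte Carlo parameter sampling, behavioural threshold on the likelihood
behavioural = []
for _ in range(2000):
    a = random.uniform(0.1, 1.0)
    sim = model(rain, a)
    if nse(obs, sim) > 0.8:
        behavioural.append((a, sim))

# Percentile uncertainty bounds across behavioural simulations (wettest day)
preds = sorted(sim[3] for _, sim in behavioural)
lo, hi = preds[int(0.05 * len(preds))], preds[int(0.95 * len(preds))]
print(f"{len(behavioural)} behavioural sets; 90% bound for day 4: ({lo:.1f}, {hi:.1f})")
```

Narrowing the prior parameter ranges with soft data, as done in the paper, shrinks the behavioural set and hence the width of these bounds.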

  7. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information

    International Nuclear Information System (INIS)

    Burr, T.; Croft, S.; Krieger, T.; Martin, K.; Norman, C.; Walsh, S.

    2016-01-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. 
Previous papers that ignore error in predictors ...
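
The case of classical regression followed by inversion with negligible error in predictors (case 3 above) can be illustrated with a first-order (delta-method) propagation of the calibration uncertainty into the inverted value. All numbers below are invented for illustration; the paper's refined treatment additionally handles non-negligible errors in the predictors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration data: known standards x, measured responses y (e.g. count rates)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 0.5 + 2.0 * x + rng.normal(0.0, 0.05, x.size)

# Classical least-squares calibration y = a + b x
A = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b = coef
n = x.size
s2 = np.sum((y - A @ coef) ** 2) / (n - 2)   # residual variance
cov = s2 * np.linalg.inv(A.T @ A)            # covariance of (a, b)

# Inversion: estimate unknown x0 from a new measured response y0
y0, u_y0 = 7.0, 0.05                         # response and its std. uncertainty
x0 = (y0 - a) / b

# Delta-method variance: contributions from y0, a, b, and cov(a, b)
dxdy0, dxda, dxdb = 1.0 / b, -1.0 / b, -(y0 - a) / b**2
var_x0 = (dxdy0**2 * u_y0**2 + dxda**2 * cov[0, 0]
          + dxdb**2 * cov[1, 1] + 2 * dxda * dxdb * cov[0, 1])
```

If the top-down error variance exceeds var_x0 by more than the chosen tolerance, the excess points to error sources not captured by the calibration.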

  8. The influence of climate change on flood risks in France - first estimates and uncertainty analysis

    Science.gov (United States)

    Dumas, P.; Hallegatte, S.; Quintana-Seguí, P.; Martin, E.

    2013-03-01

    This paper proposes a methodology to project the possible evolution of river flood damages due to climate change, and applies it to mainland France. Its main contributions are (i) to demonstrate a methodology to investigate the full causal chain from global climate change to local economic flood losses; (ii) to show that future flood losses may change in a very significant manner over France; (iii) to show that a very large uncertainty arises from the climate downscaling technique, since two techniques with comparable skills at reproducing reference river flows give very different estimates of future flows, and thus of future local losses. The main conclusion is thus that estimating future flood losses is still out of reach, especially at local scale, but that future national-scale losses may change significantly over this century, requiring policy changes in terms of risk management and land-use planning.

  9. Uncertainties Involved in the Ionospheric Conductivity Estimation

    Directory of Open Access Journals (Sweden)

    Young-Sil Kwak

    2002-12-01

    Full Text Available Various uncertainties involved in ionospheric conductivity estimation utilizing the electron density profile obtained from the Sondrestrom incoherent scatter radar are examined. First, we compare the conductivity based on the raw electron density with that based on the corrected electron density, which takes into account the effects of the difference between the electron and ion temperatures and of the Debye length. The corrected electron density yields higher Pedersen and Hall conductivities than the raw electron density does. Second, the dependence of the conductivity estimation on the collision frequency model is examined. Below 110 km the conductivity does not depend significantly on the collision frequency model. Above 110 km, however, the choice of collision model affects the conductivity estimation. Third, the influence of the electron and ion temperatures on the conductivity estimation is examined. Errors of about 10% in the electron and ion temperatures do not significantly affect the conductivity estimation. Fourth, the effect of the choice of the altitude range of integration in calculating the height-integrated conductivity, the conductance, is also examined. It is demonstrated that the estimated Hall and Pedersen conductances are quite sensitive to the choice of the lower and upper boundaries of the integration, respectively.
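
For context, a sketch of how Pedersen and Hall conductances follow from an electron density profile, collision frequency models, and a choice of integration limits. The textbook single-ion formulas and all profile parameters below are illustrative assumptions, not the Sondrestrom values.

```python
import numpy as np

# Physical constants and assumed parameters (illustrative)
e = 1.602e-19      # elementary charge, C
m_e = 9.109e-31    # electron mass, kg
m_i = 5.0e-26      # mean ion mass, kg (~30 amu, NO+/O2+ mixture)
B = 5.0e-5         # geomagnetic field, T

# Altitude grid and a Chapman-like electron density profile, m^-3
z = np.linspace(90e3, 200e3, 200)
h = (z - 110e3) / 30e3
ne = 1e11 * np.exp(1.0 - h - np.exp(-h))

# Crude exponential collision-frequency models, s^-1 (this is the model
# choice the abstract says matters above 110 km)
nu_e = 1e5 * np.exp(-(z - 90e3) / 25e3)
nu_i = 1e4 * np.exp(-(z - 90e3) / 25e3)

om_e = e * B / m_e   # gyrofrequencies, rad/s
om_i = e * B / m_i

# Textbook single-ion Pedersen and Hall conductivities, S/m
sig_p = ne * e**2 * (nu_e / (m_e * (nu_e**2 + om_e**2))
                     + nu_i / (m_i * (nu_i**2 + om_i**2)))
sig_h = ne * e / B * (om_e**2 / (nu_e**2 + om_e**2)
                      - om_i**2 / (nu_i**2 + om_i**2))

# Height-integrated conductances (trapezoid rule); the chosen limits of
# z directly change these values, which is the sensitivity noted above
dz = np.diff(z)
sigma_P = np.sum(0.5 * (sig_p[1:] + sig_p[:-1]) * dz)
sigma_H = np.sum(0.5 * (sig_h[1:] + sig_h[:-1]) * dz)
```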

  10. Added Value of uncertainty Estimates of SOurce term and Meteorology (AVESOME)

    DEFF Research Database (Denmark)

    Sørensen, Jens Havskov; Schönfeldt, Fredrik; Sigg, Robert

    In the early phase of a nuclear accident, two large sources of uncertainty exist: one related to the source term and one associated with the meteorological data. Operational methods are being developed in AVESOME for quantitative estimation of uncertainties in atmospheric dispersion prediction ... e.g. at national meteorological services, the proposed methodology is feasible for real-time use, thereby adding value to decision support. In the recent NKS-B projects MUD, FAUNA and MESO, the implications of meteorological uncertainties for nuclear emergency preparedness and management have been studied ... uncertainty in atmospheric dispersion model forecasting stemming from both the source term and the meteorological data is examined. Ways to implement the uncertainties of forecasting in DSSs, and the impacts on real-time emergency management, are described. The proposed methodology allows for efficient real ...

  11. Uncertainty estimation of the velocity model for the TrigNet GPS network

    Science.gov (United States)

    Hackl, Matthias; Malservisi, Rocco; Hugentobler, Urs; Wonnacott, Richard

    2010-05-01

    Satellite based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) plus noise. It has been shown that models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series, and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectral analyses and maximum likelihood estimation is quite demanding and is usually not carried out for every site; instead, the uncertainties are scaled by latitude-dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. We therefore applied a method similar to the Allan variance, which is commonly used in the estimation of clock uncertainties and is able to account for time-dependent probability density functions (colored noise), to the TrigNet time series. Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Comparisons with synthetic data show that the noise can be represented quite well by a power-law model in combination with a seasonal signal, in agreement with previous studies.
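
A minimal sketch of the Allan-variance-style estimator mentioned above, applied here to synthetic white noise; the function and all parameters are illustrative, not the actual TrigNet processing.

```python
import numpy as np

def allan_variance(y, dt, taus):
    """Overlapping Allan variance of a series y sampled every dt seconds."""
    out = []
    for tau in taus:
        m = int(round(tau / dt))            # samples per averaging window
        if m < 1 or 2 * m >= y.size:
            out.append(np.nan)
            continue
        c = np.cumsum(np.insert(y, 0, 0.0))
        avg = (c[m:] - c[:-m]) / m          # window means at every offset
        d = avg[m:] - avg[:-m]              # adjacent-window differences
        out.append(0.5 * np.mean(d ** 2))
    return np.array(out)

rng = np.random.default_rng(2)
white = rng.normal(0.0, 1.0, 10_000)
taus = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
avar = allan_variance(white, 1.0, taus)
# For pure white noise the Allan variance falls off roughly as 1/tau;
# flicker noise or random walk produce flatter or rising curves, which is
# how the averaging-time behaviour distinguishes the noise types
```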

  12. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. Hydrological simulation at large scale and high precision has refined spatial descriptions of hydrological behaviour. This trend, however, is accompanied by growing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the random sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimum-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method is efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.

  13. Uncertainty estimates for predictions of the impact of breeder-reactor radionuclide releases

    International Nuclear Information System (INIS)

    Miller, C.W.; Little, C.A.

    1982-01-01

    This paper summarizes estimates, compiled in a larger report, of the uncertainty associated with models and parameters used to assess the impact on man of radionuclide releases to the environment from breeder reactor facilities. These estimates indicate that, for many sites, generic models and representative parameter values may reasonably be used to calculate doses from annual average radionuclide releases when the calculated doses are on the order of one-tenth or less of a relevant dose limit. For short-term, accidental releases, the uncertainty in the dose calculations may be much larger than an order of magnitude. As a result, it may be necessary to incorporate site-specific information into the dose calculation under such circumstances. However, even with site-specific information, the inherent natural variability among human receptors and the uncertainties in the dose conversion factor will likely result in an overall uncertainty of greater than an order of magnitude for predictions of dose following short-term releases.

  14. Estimation of uncertainty in pKa values determined by potentiometric titration.

    Science.gov (United States)

    Koort, Eve; Herodes, Koit; Pihl, Viljar; Leito, Ivo

    2004-06-01

    A procedure is presented for estimation of uncertainty in measurement of the pK(a) of a weak acid by potentiometric titration. The procedure is based on the ISO GUM. The core of the procedure is a mathematical model that involves 40 input parameters. A novel approach is used for taking into account the purity of the acid: the impurities are not treated as inert compounds only; their possible acidic dissociation is also taken into account. Application to an example of practical pK(a) determination is presented. Altogether 67 different sources of uncertainty are identified and quantified within the example. The relative importance of the different uncertainty sources is discussed. The most important source of uncertainty (with the experimental set-up of the example) is the uncertainty of the pH measurement, followed by the accuracy of the burette and the uncertainty of weighing. The procedure gives the uncertainty separately for each point of the titration curve. The uncertainty depends on the amount of titrant added, being lowest in the central part of the titration curve. The possibilities of reducing the uncertainty and of interpreting the drift of the pK(a) values obtained from the same curve are discussed.
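
The GUM combination underlying such a budget is simple: each input's standard uncertainty u_i is scaled by its sensitivity coefficient c_i and the contributions are added in quadrature. A sketch with invented placeholder values (not the paper's 67 actual components):

```python
import math

# Illustrative uncertainty budget for a single pK(a) value: sensitivity
# coefficient c_i and standard uncertainty u_i per input (placeholders)
budget = {
    "pH measurement":  (1.00, 0.020),
    "burette volume":  (0.35, 0.030),
    "weighing":        (0.25, 0.020),
    "acid purity":     (0.15, 0.010),
}

# GUM combined standard uncertainty: u_c = sqrt(sum (c_i * u_i)^2)
u_c = math.sqrt(sum((c * u) ** 2 for c, u in budget.values()))

# Expanded uncertainty with coverage factor k = 2 (~95 % coverage)
U = 2.0 * u_c

# Relative contribution of each source to the combined variance, which is
# how the "most important source" ranking above is obtained
contrib = {name: (c * u) ** 2 / u_c**2 for name, (c, u) in budget.items()}
```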

  15. Uncertainty related to Environmental Data and Estimated Extreme Events

    DEFF Research Database (Denmark)

    Burcharth, H. F.

    The design loads on rubble mound breakwaters are almost entirely determined by the environmental conditions, i.e. sea state, water levels, sea bed characteristics, etc. It is the objective of sub-group B to identify the most important environmental parameters and evaluate the related uncertainties ... including those corresponding to extreme estimates typically used for design purposes. Basically a design condition is made up of a set of parameter values stemming from several environmental parameters. To be able to evaluate the uncertainty related to design states one must know the corresponding joint ... Consequently this report deals mainly with each parameter separately. Multi-parameter problems are briefly discussed in section 9. It is important to notice that the quantified uncertainties reported in section 7.7 represent what might be regarded as typical figures to be used only when no more qualified ...

  16. Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark

    International Nuclear Information System (INIS)

    Ivanov, K.; Avramova, M.; Royer, E.; Gillford, J.

    2013-01-01

    The demand for best estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of best estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of the activities of EGUAM/NSC, the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactors (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting uncertainty propagation in stand-alone neutronics calculations, while Phases II and III focus on uncertainty analysis of the reactor core and system, respectively. This paper discusses the progress made in the Phase I calculations, the specifications for Phase II, and the upcoming challenges in defining the Phase III exercises. The challenges of applying uncertainty quantification to complex code systems, in particular to time-dependent coupled-physics models, are the large computational burden and the use of non-linear models (expected due to the physics coupling). (authors)

  17. Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals

    International Nuclear Information System (INIS)

    Rypdal, Kristin; Baritz, Rainer

    2002-01-01

    Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol there was concern that the uptake of carbon in forest sinks could be difficult to verify. The reasons for the large uncertainties are high temporal and spatial variability and a lack of representative estimation parameters. Additional uncertainties will be a consequence of definitions made in the Kyoto Protocol reporting. In the Nordic countries the national forest inventories will be very useful for estimating changes in carbon stocks. The main uncertainty lies in the conversion from changes in tradable timber to changes in total carbon biomass. The uncertainties in the emissions of non-CO2 carbon from forest soils are particularly high. On the other hand, the removals reported under the Kyoto Protocol will only be a fraction of the total uptake and are not expected to constitute a high share of the total inventory. It is also expected that the Nordic countries will be able to implement a high-tier methodology. As a consequence, total uncertainties may not be extremely high. (Author)

  18. Estimating uncertainty in subsurface glider position using transmissions from fixed acoustic tomography sources.

    Science.gov (United States)

    Van Uffelen, Lora J; Nosal, Eva-Marie; Howe, Bruce M; Carter, Glenn S; Worcester, Peter F; Dzieciuch, Matthew A; Heaney, Kevin D; Campbell, Richard L; Cross, Patrick S

    2013-10-01

    Four acoustic Seagliders were deployed in the Philippine Sea November 2010 to April 2011 in the vicinity of an acoustic tomography array. The gliders recorded over 2000 broadband transmissions at ranges up to 700 km from moored acoustic sources as they transited between mooring sites. The precision of glider positioning at the time of acoustic reception is important to resolve the fundamental ambiguity between position and sound speed. The Seagliders utilized GPS at the surface and a kinematic model below for positioning. The gliders were typically underwater for about 6.4 h, diving to depths of 1000 m and traveling on average 3.6 km during a dive. Measured acoustic arrival peaks were unambiguously associated with predicted ray arrivals. Statistics of travel-time offsets between received arrivals and acoustic predictions were used to estimate range uncertainty. Range (travel time) uncertainty between the source and the glider position from the kinematic model is estimated to be 639 m (426 ms) rms. Least-squares solutions for glider position estimated from acoustically derived ranges from 5 sources differed by 914 m rms from modeled positions, with estimated uncertainty of 106 m rms in horizontal position. Error analysis included 70 ms rms of uncertainty due to oceanic sound-speed variability.
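
The least-squares position solution from acoustically derived ranges can be sketched as a Gauss-Newton fit in the horizontal plane; the geometry, noise level, and starting point below are invented, and the real problem additionally involves sound-speed uncertainty in converting travel time to range.

```python
import numpy as np

def trilaterate(sources, ranges, x0, iters=20):
    """Gauss-Newton least-squares fit of a 2-D position to measured ranges."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - sources                   # vectors source -> current guess
        pred = np.linalg.norm(diff, axis=1)  # predicted ranges
        J = diff / pred[:, None]             # Jacobian d(range)/d(position)
        r = ranges - pred                    # range residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx
    return x

# Five fixed sources (km) and noisy ranges to a "true" glider position
rng = np.random.default_rng(3)
sources = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.], [50., 120.]])
true = np.array([40.0, 55.0])
ranges = np.linalg.norm(true - sources, axis=1) + rng.normal(0.0, 0.5, 5)

est = trilaterate(sources, ranges, x0=[50.0, 50.0])
```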

  19. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum-mean-squared-error estimator (LMMSE), when the elements of x are statistically white.

  20. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2015-01-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum-mean-squared-error estimator (LMMSE), when the elements of x are statistically white.
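
To illustrate the underlying idea — that regularizing the LS solution can lower its MSE at low SNR — here is a sketch using ordinary ridge regularization with an oracle parameter choice. This stands in for, and is much cruder than, the BDU-derived iterative procedure of the paper, which estimates the parameter without knowing the true x.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 10
A = rng.normal(size=(n, p))
x_true = rng.normal(size=p)                 # white unknown vector
y = A @ x_true + rng.normal(scale=2.0, size=n)   # low-SNR observations

# Ordinary least squares: unbiased but high variance at low SNR
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
mse_ls = np.mean((x_ls - x_true) ** 2)

# Ridge-regularized LS over a grid of regularization parameters; keep the
# oracle best (for illustration only -- in practice the parameter must be
# estimated, which is what the BDU framework provides)
best = None
for lam in np.logspace(-3, 2, 30):
    x_r = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)
    mse = np.mean((x_r - x_true) ** 2)
    if best is None or mse < best[1]:
        best = (lam, mse)
lam_best, mse_ridge = best
# A suitable regularization parameter should not do worse than plain LS
```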

  1. Uncertainty analysis in estimating Japanese ingestion of global fallout Cs-137 using health risk evaluation model

    International Nuclear Information System (INIS)

    Shimada, Yoko; Morisawa, Shinsuke

    1998-01-01

    Most model estimates of environmental contamination include some uncertainty associated with the parameter uncertainty in the model. In this study, the uncertainty in a model for evaluating the ingestion of radionuclides caused by long-term global low-level radioactive contamination was analyzed using various uncertainty analysis methods: the percentile estimate, robustness analysis, and the fuzzy estimate. The model is composed mainly of five sub-models, each of which carries its own uncertainty, which we also analyzed. The major findings obtained in this study include: the probability of a discrepancy between the value predicted by the model simulation and the observed data is less than 10%; the uncertainty of the predicted value is higher before 1950 and after 1980; the uncertainty of the predicted value can be reduced by decreasing the uncertainty of some environmental parameters in the model; and the reliability of the model depends decisively on the following environmental factors: the direct foliar absorption coefficient, the transfer factor of radionuclides from the stratosphere down to the troposphere, the residual rate after food processing and cooking, the transfer factor of radionuclides in the ocean, and sedimentation in the ocean. (author)

  2. Uncertainties in estimating heart doses from 2D-tangential breast cancer radiotherapy

    DEFF Research Database (Denmark)

    Laugaard Lorenzen, Ebbe; Brink, Carsten; Taylor, Carolyn W.

    2016-01-01

    BACKGROUND AND PURPOSE: We evaluated the accuracy of three methods of estimating radiation dose to the heart from two-dimensional tangential radiotherapy for breast cancer, as used in Denmark during 1982-2002. MATERIAL AND METHODS: Three tangential radiotherapy regimens were reconstructed using CT-based planning scans for 40 patients with left-sided and 10 with right-sided breast cancer. Setup errors and organ motion were simulated using estimated uncertainties. For left-sided patients, mean heart dose was related to maximum heart distance in the medial field. RESULTS: For left-sided breast cancer, mean ... to the uncertainty of estimates based on individual CT-scans. For right-sided breast cancer patients, mean heart dose based on individual CT-scans was always ...

  3. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Yasue, Yoshihiro; Endo, Tomohiro; Kodama, Yasuhiro; Ohoka, Yasunori; Tatsumi, Masahiro

    2013-01-01

    An uncertainty reduction method is proposed for core safety parameters for which measurement values are not obtained. We empirically recognize that there exist correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the assembly relative power at the corresponding position. Correlations of errors among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of the core parameters. The estimated correlations of errors among core safety parameters are verified through the direct Monte Carlo sampling method. Once the correlation of errors among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which a measurement value is not obtained. (author)
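
The "sandwich rule" implied by the abstract — propagating a cross-section covariance through sensitivity coefficients to obtain the covariance, and hence the correlation, of core-parameter errors — can be sketched as follows; the matrices are random placeholders, not real nuclear data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_xs = 20                                  # number of cross-section parameters

# Symmetric positive-definite stand-in for the cross-section covariance M
A = rng.normal(size=(n_xs, n_xs))
M = A @ A.T / n_xs

# Sensitivity matrix S of two core safety parameters to the cross sections
S = rng.normal(size=(2, n_xs))

# Sandwich rule: covariance of the two parameters' prediction errors
C = S @ M @ S.T
corr = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

# If parameter 1 is measured, the conditional variance of the unmeasured
# parameter 2 shrinks in proportion to corr**2 -- the uncertainty
# reduction the abstract describes
var2_updated = C[1, 1] * (1.0 - corr ** 2)
```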

  4. Comprehensive analysis of proton range uncertainties related to patient stopping-power-ratio estimation using the stoichiometric calibration

    Science.gov (United States)

    Yang, Ming; Zhu, X. Ronald; Park, Peter C.; Titt, Uwe; Mohan, Radhe; Virshup, Gary; Clayton, James E.; Dong, Lei

    2012-07-01

    The purpose of this study was to analyze factors affecting proton stopping-power-ratio (SPR) estimations and range uncertainties in proton therapy planning using the standard stoichiometric calibration. The SPR uncertainties were grouped into five categories according to their origins and then estimated based on previously published reports or measurements. For the first time, the impact of tissue composition variations on SPR estimation was assessed and the uncertainty estimates of each category were determined for low-density (lung), soft, and high-density (bone) tissues. A composite, 95th percentile water-equivalent-thickness uncertainty was calculated from multiple beam directions in 15 patients with various types of cancer undergoing proton therapy. The SPR uncertainties (1σ) were quite different (ranging from 1.6% to 5.0%) in different tissue groups, although the final combined uncertainty (95th percentile) for different treatment sites was fairly consistent at 3.0-3.4%, primarily because soft tissue is the dominant tissue type in the human body. The dominant contributing factor for uncertainties in soft tissues was the degeneracy of Hounsfield numbers in the presence of tissue composition variations. To reduce the overall uncertainties in SPR estimation, the use of dual-energy computed tomography is suggested. The values recommended in this study based on typical treatment sites and a small group of patients roughly agree with the commonly referenced value (3.5%) used for margin design. By using tissue-specific range uncertainties, one could estimate the beam-specific range margin by accounting for different types and amounts of tissues along a beam, which may allow for customization of range uncertainty for each beam direction.
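
A beam-specific range margin of the kind described can be sketched by weighting each tissue group's water-equivalent thickness (WET) by its SPR uncertainty and combining the groups in quadrature, assuming full correlation within a group and independence across groups. The WET values below are invented; the per-group uncertainty figures echo the 1.6-5.0% range quoted in the abstract but are otherwise illustrative.

```python
import math

# WET (cm) contributed by each tissue group along one beam, and the
# 1-sigma fractional SPR uncertainty of that group (illustrative numbers)
beam = {
    "lung": (4.0, 0.050),
    "soft": (12.0, 0.016),
    "bone": (3.0, 0.024),
}

# Fully correlated within a group, independent across groups:
# add the per-group WET errors in quadrature
u_range = math.sqrt(sum((wet * u) ** 2 for wet, u in beam.values()))
total_wet = sum(wet for wet, _ in beam.values())

# Beam-specific fractional range margin
margin = u_range / total_wet
```

Because soft tissue usually dominates total_wet, the combined fractional margin lands near the soft-tissue uncertainty, consistent with the abstract's fairly uniform 3.0-3.4% result across sites.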

  5. Uncertainty analysis for results of thermal hydraulic codes of best-estimate-type

    International Nuclear Information System (INIS)

    Alva N, J.

    2010-01-01

    In this thesis, some fundamental knowledge is presented about uncertainty analysis and about the diverse methodologies applied in the study of nuclear power plant transient events, particularly those related to thermal hydraulics phenomena. These concepts and methodologies come from a wide bibliographical research effort on the nuclear power subject. Methodologies for uncertainty analysis have been developed by quite diverse institutions, and they have been widely used worldwide for application to results from best-estimate-type computer codes in nuclear reactor thermal hydraulics and safety analysis. The main uncertainty sources, types of uncertainties, and aspects related to best estimate modeling and methods are also introduced. Once the main bases of uncertainty analysis have been set and some of the known methodologies introduced, the CSAU methodology, which is applied in the analyses, is presented in detail. The main objective of this thesis is to compare the results of an uncertainty and sensitivity analysis using the response surface technique with the application of Wilks' formula, applied to a loss-of-coolant experiment and a rise event in a BWR. Both techniques are options in the uncertainty and sensitivity analysis part of the CSAU methodology, which was developed for the analysis of transients and accidents at nuclear power plants and is the basis of most of the methodologies used in licensing of nuclear power plants practically everywhere. Finally, the results of applying both techniques are compared and discussed. (Author)
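
Wilks' formula, the alternative to the response-surface technique mentioned above, fixes the number of code runs needed for a distribution-free tolerance limit. A sketch for the common 95%/95% criteria:

```python
def wilks_one_sided(gamma, beta):
    """Smallest N such that, with confidence beta, the sample maximum of N
    runs bounds the gamma-quantile of the output (first-order, one-sided):
    1 - gamma**N >= beta."""
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n

def wilks_two_sided(gamma, beta):
    """First-order two-sided variant: the sample minimum and maximum cover
    at least a gamma probability content with confidence beta:
    1 - gamma**N - N*(1 - gamma)*gamma**(N-1) >= beta."""
    n = 2
    while 1.0 - gamma ** n - n * (1.0 - gamma) * gamma ** (n - 1) < beta:
        n += 1
    return n

n95 = wilks_one_sided(0.95, 0.95)   # the classic one-sided 95 %/95 % case
```

The practical appeal is that N depends only on the chosen probability and confidence levels, not on the number of uncertain input parameters, which is why Wilks-based sampling is widely used in best-estimate-plus-uncertainty licensing analyses.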

  6. On the evaluation of uncertainties for state estimation with the Kalman filter

    International Nuclear Information System (INIS)

    Eichstädt, S; Makarava, N; Elster, C

    2016-01-01

    The Kalman filter is an established tool for the analysis of dynamic systems with normally distributed noise, and it has been successfully applied in numerous areas. It provides sequentially calculated estimates of the system states along with a corresponding covariance matrix. For nonlinear systems, the extended Kalman filter is often used. This is derived from the Kalman filter by linearization around the current estimate. A key issue in metrology is the evaluation of the uncertainty associated with the Kalman filter state estimates. The ‘Guide to the Expression of Uncertainty in Measurement’ (GUM) and its supplements serve as the de facto standard for uncertainty evaluation in metrology. We explore the relationship between the covariance matrix produced by the Kalman filter and a GUM-compliant uncertainty analysis. In addition, the results of a Bayesian analysis are considered. For the case of linear systems with known system matrices, we show that all three approaches are compatible. When the system matrices are not precisely known, however, or when the system is nonlinear, this equivalence breaks down and different results can then be reached. For precisely known nonlinear systems, though, the result of the extended Kalman filter still corresponds to the linearized uncertainty propagation of the GUM. The extended Kalman filter can suffer from linearization and convergence errors. These disadvantages can be avoided to some extent by applying Monte Carlo procedures, and we propose such a method which is GUM-compliant and can also be applied online during the estimation. We illustrate all procedures in terms of a 2D dynamic system and compare the results with those obtained by particle filtering, which has been proposed for the approximate calculation of a Bayesian solution. Finally, we give some recommendations based on our findings. (paper)
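
A minimal linear Kalman filter showing the sequentially updated state estimate and covariance matrix P whose uncertainty interpretation the paper examines; the constant-velocity system and noise levels below are illustrative.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

x = np.zeros(2)                         # state estimate
P = np.eye(2)                           # its covariance (the UQ quantity)

rng = np.random.default_rng(5)
truth = np.zeros(2)
for _ in range(100):
    # Simulate the true system and a noisy position measurement
    truth = F @ truth + rng.multivariate_normal([0.0, 0.0], Q)
    z = H @ truth + rng.normal(0.0, 0.5, 1)

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
```

For this known linear system, P is exactly the covariance a GUM-compliant propagation would assign; as the abstract notes, the correspondence only breaks down for imprecisely known or nonlinear system matrices.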

  7. Inverse modeling and uncertainty analysis of potential groundwater recharge to the confined semi-fossil Ohangwena II Aquifer, Namibia

    Science.gov (United States)

    Wallner, Markus; Houben, Georg; Lohe, Christoph; Quinger, Martin; Himmelsbach, Thomas

    2017-12-01

    The identification of potential recharge areas and the estimation of recharge rates to the confined semi-fossil Ohangwena II Aquifer (KOH-2) are crucial for its future sustainable use. The KOH-2 is located within the endorheic transboundary Cuvelai-Etosha-Basin (CEB), shared by Angola and Namibia. The main objective was the development of a strategy to tackle data scarcity, a well-known problem in semi-arid regions. In a first step, conceptual geological cross sections were created to illustrate the possible geological setting of the system. Furthermore, groundwater travel times were estimated by simple hydraulic calculations. A two-dimensional numerical groundwater model was set up to analyze flow patterns and potential recharge zones. The model was optimized against local observations of hydraulic heads and groundwater age. The sensitivity of the model to different boundary conditions and internal structures was tested, and parameter uncertainty and recharge rates were estimated. Results indicate that groundwater recharge to the KOH-2 mainly occurs from the Angolan Highlands in the northeastern part of the CEB. The sensitivity of the groundwater model to different internal structures is relatively small in comparison to changing boundary conditions in the form of influent or effluent streams. Uncertainty analysis underlined previous results, indicating groundwater recharge originating from the Angolan Highlands. The estimated recharge rates are less than 1% of mean yearly precipitation, which is reasonable for semi-arid regions.

  8. Evaluating uncertainty estimates in hydrologic models: borrowing measures from the forecast verification community

    Directory of Open Access Journals (Sweden)

    K. J. Franz

    2011-11-01

    Full Text Available The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply the methods to evaluate and compare model ensembles produced from two different parameter uncertainty estimation methods: the Generalized Likelihood Uncertainty Estimation (GLUE) and the Shuffle Complex Evolution Metropolis (SCEM). Model ensembles are generated for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the Southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information in addition to what is already available as part of traditional model validation methodology and considers the entire ensemble or uncertainty range in the approach.
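
    A standard probabilistic verification measure of the kind surveyed here is the continuous ranked probability score (CRPS). The sketch below implements the common sample-based CRPS estimator for an ensemble; the two toy ensembles are invented for illustration and are unrelated to the SAC-SMA experiments.

```python
import random

random.seed(2)

def crps_ensemble(members, obs):
    """Sample-based CRPS: mean|X - y| - 0.5 * mean|X - X'| (lower is better)."""
    n = len(members)
    term1 = sum(abs(m - obs) for m in members) / n
    term2 = sum(abs(a - b) for a in members for b in members) / (n * n)
    return term1 - 0.5 * term2

obs = 1.0
sharp = [random.gauss(1.0, 0.2) for _ in range(50)]  # centred, small spread
broad = [random.gauss(3.0, 1.5) for _ in range(50)]  # biased, large spread
print(crps_ensemble(sharp, obs), crps_ensemble(broad, obs))
```

    A sharp, well-centred ensemble scores lower (better) than a biased, overdispersed one, which is exactly the kind of discrimination a deterministic score applied to the ensemble mean can miss.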

  9. Incorporating uncertainty analysis into life cycle estimates of greenhouse gas emissions from biomass production

    International Nuclear Information System (INIS)

    Johnson, David R.; Willis, Henry H.; Curtright, Aimee E.; Samaras, Constantine; Skone, Timothy

    2011-01-01

    Before further investments are made in utilizing biomass as a source of renewable energy, both policy makers and the energy industry need estimates of the net greenhouse gas (GHG) reductions expected from substituting biobased fuels for fossil fuels. Such GHG reductions depend greatly on how the biomass is cultivated, transported, processed, and converted into fuel or electricity. Any policy aiming to reduce GHGs with biomass-based energy must account for uncertainties in emissions at each stage of production, or else it risks yielding marginal reductions, if any, while potentially imposing great costs. This paper provides a framework for incorporating uncertainty analysis specifically into estimates of the life cycle GHG emissions from the production of biomass. We outline the sources of uncertainty, discuss the implications of uncertainty and variability on the limits of life cycle assessment (LCA) models, and provide a guide for practitioners to best practices in modeling these uncertainties. The suite of techniques described herein can be used to improve the understanding and the representation of the uncertainties associated with emissions estimates, thus enabling improved decision making with respect to the use of biomass for energy and fuel production. -- Highlights: → We describe key model, scenario and data uncertainties in LCAs of biobased fuels. → System boundaries and allocation choices should be consistent with study goals. → Scenarios should be designed around policy levers that can be controlled. → We describe a new way to analyze the importance of covariance between inputs.
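
    The highlighted point about covariance between inputs can be illustrated with a minimal Monte Carlo propagation: when two life-cycle stages have correlated emission factors, the variance of the total picks up a 2·rho·sd1·sd2 term that independent sampling would miss. All numbers below are hypothetical placeholders, not values from any LCA.

```python
import math
import random

random.seed(5)

# Toy two-stage life cycle (hypothetical numbers): total GHG emissions are
# cultivation + conversion, whose emission factors are correlated (rho).
mu1, sd1 = 40.0, 8.0   # cultivation stage, g CO2e/MJ
mu2, sd2 = 25.0, 5.0   # conversion stage,  g CO2e/MJ
rho = 0.6              # correlation between the two factors

totals = []
for _ in range(50000):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    e1 = mu1 + sd1 * z1
    e2 = mu2 + sd2 * (rho * z1 + math.sqrt(1.0 - rho ** 2) * z2)
    totals.append(e1 + e2)

mean = sum(totals) / len(totals)
var = sum((t - mean) ** 2 for t in totals) / len(totals)
# Analytic check: Var = sd1^2 + sd2^2 + 2*rho*sd1*sd2 = 137 here
print(mean, var)
```

    Ignoring the correlation (rho = 0) would understate the variance of the total by 48 of the 137 (g CO2e/MJ)², which is why the paper flags covariance between inputs as important.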

  10. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    International Nuclear Information System (INIS)

    Elert, M.

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead, simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and as a consequence also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration.
The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  11. Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models

    International Nuclear Information System (INIS)

    Mosher, J.; Sako, M.; Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N.; Kessler, R.; Frieman, J. A.; Marriner, J.; Biswas, R.; Kuhlmann, S.; Schneider, D. P.

    2014-01-01

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ∼120 low-redshift (z < 0.1) SNe Ia, ∼255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ∼290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input − w_recovered) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  12. Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J. [Pennsylvania U.; Guy, J. [LBL, Berkeley; Kessler, R. [Chicago U., KICP; Astier, P. [Paris U., VI-VII; Marriner, J. [Fermilab; Betoule, M. [Paris U., VI-VII; Sako, M. [Pennsylvania U.; El-Hage, P. [Paris U., VI-VII; Biswas, R. [Argonne; Pain, R. [Paris U., VI-VII; Kuhlmann, S. [Argonne; Regnault, N. [Paris U., VI-VII; Frieman, J. A. [Fermilab; Schneider, D. P. [Penn State U.

    2014-08-29

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input − w_recovered) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  13. Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals

    Energy Technology Data Exchange (ETDEWEB)

    Rypdal, Kristin; Baritz, Rainer

    2002-07-01

    Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol it has been a concern that uptake of carbon in forest sinks can be difficult to verify. The reasons for the large uncertainties are high temporal and spatial variability and a lack of representative estimation parameters. Additional uncertainties will be a consequence of definitions made in the Kyoto Protocol reporting. In the Nordic countries the national forest inventories will be very useful to estimate changes in carbon stocks. The main uncertainty lies in the conversion from changes in tradable timber to changes in total carbon biomass. The uncertainties in the emissions of the non-CO{sub 2} carbon from forest soils are particularly high. On the other hand the removals reported under the Kyoto Protocol will only be a fraction of the total uptake and are not expected to constitute a high share of the total inventory. It is also expected that the Nordic countries will be able to implement a high tier methodology. As a consequence, total uncertainties may not be extremely high. (Author)

  14. Evaluating Uncertainties in Sap Flux Scaled Estimates of Forest Transpiration, Canopy Conductance and Photosynthesis

    Science.gov (United States)

    Ward, E. J.; Bell, D. M.; Clark, J. S.; Kim, H.; Oren, R.

    2009-12-01

    Thermal dissipation probes (TDPs) are a common method for estimating forest transpiration and canopy conductance from sap flux rates in trees, but their implementation is plagued by uncertainties arising from missing data and variability in the diameter and canopy position of trees, as well as sapwood conductivity within individual trees. Uncertainties in estimates of canopy conductance also translate into uncertainties in carbon assimilation in models such as the Canopy Conductance Constrained Carbon Assimilation (4CA) model that combine physiological and environmental data to estimate photosynthetic rates. We developed a method to propagate these uncertainties in the scaling and imputation of TDP data to estimates of canopy transpiration and conductance using a state-space Jarvis-type conductance model in a hierarchical Bayesian framework. This presentation will focus on the impact of these uncertainties on estimates of water and carbon fluxes using 4CA and data from the Duke Free Air Carbon Enrichment (FACE) project, which incorporates both elevated carbon dioxide and soil nitrogen treatments. We will also address the response of canopy conductance to vapor pressure deficit, incident radiation and soil moisture, as well as the effect of treatment-related stand structure differences in scaling TDP measurements. Preliminary results indicate that in 2006, a year of normal precipitation (1127 mm), canopy transpiration increased by ~8% on a ground-area basis under elevated carbon dioxide. In 2007, a year with a pronounced drought (800 mm precipitation), this increase was only present in the combined carbon dioxide and fertilization treatment. The seasonal dynamics of water and carbon fluxes will be discussed in detail.

  15. Uncertainty of chromatic dispersion estimation from transmitted waveforms in direct detection systems

    Science.gov (United States)

    Lach, Zbigniew T.

    2017-08-01

    A possibility is shown of a non-disruptive estimation of chromatic dispersion in a fiber of an intensity modulation communication line under work conditions. Uncertainty of the chromatic dispersion estimates is analyzed and quantified with the use of confidence intervals.

  16. Linear minimax estimation for random vectors with parametric uncertainty

    KAUST Repository

    Bitar, E

    2010-06-01

    In this paper, we take a minimax approach to the problem of computing a worst-case linear mean squared error (MSE) estimate of X given Y, where X and Y are jointly distributed random vectors with parametric uncertainty in their distribution. We consider two uncertainty models, PA and PB. Model PA represents X and Y as jointly Gaussian whose covariance matrix Λ belongs to the convex hull of a set of m known covariance matrices. Model PB characterizes X and Y as jointly distributed according to a Gaussian mixture model with m known zero-mean components, but unknown component weights. We show: (a) the linear minimax estimator computed under model PA is identical to that computed under model PB when the vertices of the uncertain covariance set in PA are the same as the component covariances in model PB, and (b) the problem of computing the linear minimax estimator under either model reduces to a semidefinite program (SDP). We also consider the dynamic situation where x(t) and y(t) evolve according to a discrete-time LTI state space model driven by white noise, the statistics of which are modeled by PA and PB as before. We derive a recursive linear minimax filter for x(t) given y(t).
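
    As a rough illustration of the minimax idea (not the semidefinite program derived in the paper), the scalar case can be approximated by a grid search over linear gains k, minimizing the worst-case MSE over a finite set of candidate covariances. The three candidate models below are invented for illustration.

```python
# Three candidate joint covariances (var_x, cov_xy, var_y) -- invented values
models = [(1.0, 0.8, 1.0), (1.0, 0.2, 2.0), (1.5, 0.5, 1.0)]

def mse(k, m):
    """MSE of the linear estimate x_hat = k*y under model m."""
    vx, cxy, vy = m
    return vx - 2.0 * k * cxy + k * k * vy

# Discrete stand-in for the minimax problem: grid over gains k,
# picking the gain whose worst-case MSE over the models is smallest.
grid = [i / 1000.0 for i in range(-2000, 2001)]
k_star = min(grid, key=lambda k: max(mse(k, m) for m in models))
worst = max(mse(k_star, m) for m in models)
print(k_star, worst)
```

    The minimax gain differs from the best gain for any single model; in the vector case this discrete search is replaced by the SDP formulation of the paper.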

  17. Statistical characterization of roughness uncertainty and impact on wind resource estimation

    Directory of Open Access Journals (Sweden)

    M. Kelly

    2017-04-01

    Full Text Available In this work we relate uncertainty in background roughness length (z0) to uncertainty in wind speeds, where the latter are predicted at a wind farm location based on wind statistics observed at a different site. Sensitivity of predicted winds to roughness is derived analytically for the industry-standard European Wind Atlas method, which is based on the geostrophic drag law. We statistically consider roughness and its corresponding uncertainty, in terms of both z0 derived from measured wind speeds as well as that chosen in practice by wind engineers. We show the combined effect of roughness uncertainty arising from differing wind-observation and turbine-prediction sites; this is done for the case of roughness bias as well as for the general case. For estimation of uncertainty in annual energy production (AEP), we also develop a generalized analytical turbine power curve, from which we derive a relation between mean wind speed and AEP. Following our developments, we provide guidance on approximate roughness uncertainty magnitudes to be expected in industry practice, and we also find that sites with larger background roughness incur relatively larger uncertainties.
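
    A simplified sketch of the roughness sensitivity, using the neutral logarithmic wind profile rather than the paper's full geostrophic-drag-law derivation: first-order propagation gives a relative wind-speed uncertainty of sigma_z0 / (z0 · ln(z/z0)). All numeric values are assumed for illustration.

```python
import math

# Neutral log profile U(z) = (u*/kappa) * ln(z / z0) -- a simplification of
# the geostrophic drag law approach; all values are assumed for illustration.
kappa, ustar = 0.4, 0.3              # von Karman constant; friction velocity [m/s]
z, z0, sigma_z0 = 80.0, 0.03, 0.01   # hub height, roughness length, z0 uncertainty [m]

U = (ustar / kappa) * math.log(z / z0)
dU_dz0 = -(ustar / kappa) / z0       # first-order sensitivity dU/dz0
sigma_U = abs(dU_dz0) * sigma_z0
rel = sigma_U / U                    # relative speed uncertainty from z0 alone
print(U, sigma_U, rel)
```

    Because the sensitivity scales with 1/z0, the same absolute roughness error matters far more over smooth terrain, while the paper's finding concerns relative uncertainty growing with background roughness under the full drag-law treatment.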

  18. Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.

    Science.gov (United States)

    Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller

    2015-01-01

    An approach is proposed for the estimation of the uncertainty in measurement for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS), considering the individual sources related to the different steps of the method under evaluation as well as the uncertainties estimated from the validation data. The considered method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance. The referred method has been accredited under the ISO/IEC 17025 norm since 2010. The approach of the present work to estimating the uncertainty in measurement was based on six sources of uncertainty for mercury determination in seafood by TDA AAS, following the validation process: linear least-squares regression, repeatability, intermediate precision, correction factor of the analytical curve, sample mass, and standard reference solution. Those that most influenced the uncertainty in measurement were sample mass, repeatability, intermediate precision and the calibration curve. The obtained estimate of uncertainty in measurement in the present work reached a value of 13.39%, which complies with the European Regulation EC 836/2011. This figure represents a very realistic estimate of the routine conditions, since it fairly encompasses the dispersion obtained from the value attributed to the sample and the value measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on calibration curve, recovery and precision), together with the variation in sample mass, can offer a proper estimate of uncertainty in measurement.
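
    The combination step of such an uncertainty budget follows the usual root-sum-of-squares rule with a coverage factor k = 2. The sketch below uses placeholder relative standard uncertainties for the six named sources (not the laboratory's actual values) to show how a combined figure of this order of magnitude arises.

```python
import math

# Placeholder relative standard uncertainties (%) for the six sources named
# in the abstract -- illustrative values, not the laboratory's budget.
sources = {
    "calibration curve":      3.5,
    "repeatability":          3.0,
    "intermediate precision": 2.8,
    "curve correction":       1.0,
    "sample mass":            3.2,
    "reference solution":     0.5,
}
u_c = math.sqrt(sum(u * u for u in sources.values()))  # combined, root-sum-of-squares
U_exp = 2.0 * u_c  # expanded uncertainty, coverage factor k = 2 (~95 % level)
print(u_c, U_exp)
```

    With these placeholder inputs the expanded uncertainty comes out near 13 %, in the same range as the 13.39 % reported; the quadrature sum also makes clear why the largest individual sources dominate the budget.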

  19. Achieving 95% probability level using best estimate codes and the Code Scaling, Applicability and Uncertainty (CSAU) methodology

    International Nuclear Information System (INIS)

    Wilson, G.E.; Boyack, B.E.; Duffey, R.B.; Griffith, P.; Katsma, K.R.; Lellouche, G.S.; Rohatgi, U.S.; Wulff, W.; Zuber, N.

    1988-01-01

    Issuance of a revised rule for loss of coolant accident/emergency core cooling system (LOCA/ECCS) analysis of light water reactors will allow the use of best estimate (BE) computer codes in safety analysis, with uncertainty analysis. This paper describes a systematic methodology, CSAU (Code Scaling, Applicability and Uncertainty), which will provide uncertainty bounds in a cost effective, auditable, rational and practical manner. 8 figs., 2 tabs

  20. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    Science.gov (United States)

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  1. A comparative experimental evaluation of uncertainty estimation methods for two-component PIV

    Science.gov (United States)

    Boomsma, Aaron; Bhattacharya, Sayantan; Troolin, Dan; Pothos, Stamatios; Vlachos, Pavlos

    2016-09-01

    Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been recently introduced, generating interest in their applicability and utility. The present study compares and contrasts current methods across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods: primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high- and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that, qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges.
The standard coverages (68% confidence interval) ranged from
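
    Of the four methods, the primary peak ratio is the simplest to state: the height of the tallest correlation peak divided by the tallest value outside its immediate neighbourhood. (The empirical mapping from PPR to a velocity uncertainty is calibrated separately and is not reproduced here.) A minimal sketch on a hand-made correlation plane:

```python
# Hand-made 5x5 correlation plane (illustrative, not real PIV data)
plane = [
    [0.1, 0.2, 0.1, 0.0, 0.1],
    [0.2, 0.9, 0.3, 0.1, 0.0],
    [0.1, 0.3, 0.2, 0.1, 0.4],
    [0.0, 0.1, 0.1, 0.5, 0.3],
    [0.1, 0.0, 0.2, 0.3, 0.2],
]

def primary_peak_ratio(c, exclude=1):
    """Tallest peak divided by the tallest value outside its neighbourhood."""
    flat = [(c[i][j], i, j) for i in range(len(c)) for j in range(len(c[0]))]
    peak, pi, pj = max(flat)
    second = max(v for v, i, j in flat
                 if abs(i - pi) > exclude or abs(j - pj) > exclude)
    return peak / second

print(primary_peak_ratio(plane))  # higher PPR -> lower expected uncertainty
```

    A dominant primary peak (high PPR) indicates a well-conditioned correlation and hence a lower expected displacement uncertainty, which is the intuition behind the PPR-based estimators compared in the study.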

  2. A comparative experimental evaluation of uncertainty estimation methods for two-component PIV

    International Nuclear Information System (INIS)

    Boomsma, Aaron; Troolin, Dan; Pothos, Stamatios; Bhattacharya, Sayantan; Vlachos, Pavlos

    2016-01-01

    Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been recently introduced, generating interest in their applicability and utility. The present study compares and contrasts current methods across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods: primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high- and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that, qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges.
The standard coverages (68% confidence interval) ranged from

  3. A super-resolution approach for uncertainty estimation of PIV measurements

    NARCIS (Netherlands)

    Sciacchitano, A.; Wieneke, B.; Scarano, F.

    2012-01-01

    A super-resolution approach is proposed for the a posteriori uncertainty estimation of PIV measurements. The measured velocity field is employed to determine the displacement of individual particle images. A disparity set is built from the residual distance between paired particle images of

  4. Comparison of two perturbation methods to estimate the land surface modeling uncertainty

    Science.gov (United States)

    Su, H.; Houser, P.; Tian, Y.; Kumar, S.; Geiger, J.; Belvedere, D.

    2007-12-01

    In land surface modeling, it is almost impossible to simulate the land surface processes without any error because the earth system is highly complex and the physics of the land processes has not yet been understood sufficiently. In most cases, people want to know not only the model output but also the uncertainty in the modeling, to estimate how reliable the modeling is. Ensemble perturbation is an effective way to estimate the uncertainty in land surface modeling, since land surface models are highly nonlinear, which makes the analytical approach inapplicable in this estimation. Ideally, the perturbation noise has a zero-mean Gaussian distribution; however, this requirement cannot be satisfied if the perturbed variables in a land surface model have physical boundaries, because part of the perturbation noise has to be removed to feed the land surface models properly. Two different perturbation methods are employed in our study to investigate their impact on quantifying land surface modeling uncertainty, based on the Land Information System (LIS) framework developed by the NASA/GSFC land team. One perturbation method is the built-in algorithm named "STATIC" in LIS version 5; the other is a new perturbation algorithm which was recently developed to minimize the overall bias in the perturbation by incorporating additional information from the whole time series for the perturbed variable. The statistical properties of the perturbation noise generated by the two different algorithms are investigated thoroughly by using a large ensemble size on a NASA supercomputer, and then the corresponding uncertainty estimates based on the two perturbation methods are compared. Their further impacts on data assimilation are also discussed. Finally, an optimal perturbation method is suggested.
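
    The truncation problem described above is easy to reproduce: adding Gaussian noise to a bounded state and clipping at the physical boundary biases the ensemble mean. The sketch below is illustrative only and is not the LIS "STATIC" or bias-minimizing algorithm; it also shows that a naive sample-bias subtraction only partially removes the effect.

```python
import random

random.seed(3)

# Perturbing a bounded state (e.g. soil moisture in [0, 1]) and clipping:
# illustrative of the truncation bias, not of the LIS algorithms themselves.
truth, sigma, n = 0.05, 0.1, 100000
raw = [truth + random.gauss(0.0, sigma) for _ in range(n)]
clipped = [min(1.0, max(0.0, v)) for v in raw]

bias = sum(clipped) / n - truth  # positive: clipping drags the mean upward
# Naive correction: subtract the sample bias, then re-clip to the bounds.
corrected = [min(1.0, max(0.0, v - bias)) for v in clipped]
resid = sum(corrected) / n - truth
print(bias, resid)  # the residual is smaller, but the bias is not eliminated
```

    Because the re-clipping itself re-truncates the distribution, a single subtraction cannot fully restore a zero-mean ensemble, which motivates the time-series-informed algorithm compared in the study.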

  5. Estimation of Uncertainty in Aerosol Concentration Measured by Aerosol Sampling System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Chan; Song, Yong Jae; Jung, Woo Young; Lee, Hyun Chul; Kim, Gyu Tae; Lee, Doo Yong [FNC Technology Co., Yongin (Korea, Republic of)

    2016-10-15

    FNC Technology Co., Ltd. has developed test facilities for aerosol generation, mixing, sampling and measurement under high pressure and high temperature conditions. The aerosol generation system is connected to the aerosol mixing system, which injects a SiO{sub 2}/ethanol mixture. In the sampling system, a glass fiber membrane filter has been used to measure the average mass concentration. Based on the experimental results using a main carrier gas of a steam and air mixture, the uncertainty estimation of the sampled aerosol concentration was performed by applying the Gaussian error propagation law. FNC Technology Co., Ltd. has developed experimental facilities for aerosol measurement under high pressure and high temperature. The purpose of the tests is to develop a commercial test module for an aerosol generation, mixing and sampling system applicable to the environmental industry and safety-related systems in nuclear power plants. For the uncertainty calculation of the aerosol concentration, the value of the sampled aerosol concentration is not measured directly, but must be calculated from other quantities. The uncertainty of the sampled aerosol concentration is a function of the flow rates of air and steam, the sampled mass, the sampling time, the condensed steam mass and their absolute errors. These variables propagate to the combination of variables in the function. Using operating parameters and their single errors from the aerosol test cases performed at FNC, the uncertainty of the aerosol concentration evaluated by the Gaussian error propagation law is less than 1%. The results of the uncertainty estimation in the aerosol sampling system will be utilized for the system performance data.
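
    A minimal sketch of Gaussian (first-order) error propagation for a sampled concentration, under the simplifying assumption C = m / (Q · t); the facility's actual expression also involves the condensed-steam mass and separate air and steam flow rates, and all numbers below are illustrative.

```python
import math

# Simplified model (assumption): C = m / (Q * t). Values are illustrative.
m, u_m = 12.0e-3, 0.02e-3   # sampled mass on the filter [g] and its uncertainty
Q, u_Q = 30.0, 0.15         # sampling flow rate [L/min]
t, u_t = 10.0, 0.02         # sampling time [min]

C = m / (Q * t)             # aerosol concentration [g/L]
# Gaussian error propagation for a product/quotient: relative variances add.
rel_u = math.sqrt((u_m / m) ** 2 + (u_Q / Q) ** 2 + (u_t / t) ** 2)
u_C = C * rel_u
print(C, 100.0 * rel_u)     # relative uncertainty in percent
```

    With sub-percent relative errors on each input, the combined relative uncertainty also stays below 1%, consistent with the figure quoted in the abstract.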

  6. Quantifying the Effects of Historical Land Cover Conversion Uncertainty on Global Carbon and Climate Estimates

    Science.gov (United States)

    Di Vittorio, A. V.; Mao, J.; Shi, X.; Chini, L.; Hurtt, G.; Collins, W. D.

    2018-01-01

    Previous studies have examined land use change as a driver of global change, but the translation of land use change into land cover conversion has been largely unconstrained. Here we quantify the effects of land cover conversion uncertainty on the global carbon and climate system using the integrated Earth System Model. Our experiments use identical land use change data and vary land cover conversions to quantify associated uncertainty in carbon and climate estimates. Land cover conversion uncertainty is large, constitutes a 5 ppmv range in estimated atmospheric CO2 in 2004, and generates carbon uncertainty that is equivalent to 80% of the net effects of CO2 and climate and 124% of the effects of nitrogen deposition during 1850-2004. Additionally, land cover uncertainty generates differences in local surface temperature of over 1°C. We conclude that future studies addressing land use, carbon, and climate need to constrain and reduce land cover conversion uncertainties.

  7. A Best-Estimate Reactor Core Monitor Using State Feedback Strategies to Reduce Uncertainties

    International Nuclear Information System (INIS)

    Martin, Robert P.; Edwards, Robert M.

    2000-01-01

    The development and demonstration of a new algorithm to reduce modeling and state-estimation uncertainty in best-estimate simulation codes has been investigated. Demonstration is given by way of a prototype reactor core monitor. The architecture of this monitor integrates a control-theory-based, distributed-parameter estimation technique into a production-grade best-estimate simulation code. The Kalman Filter-Sequential Least-Squares (KFSLS) parameter estimation algorithm has been extended for application into the computational environment of the best-estimate simulation code RELAP5-3D. In control system terminology, this configuration can be thought of as a 'best-estimate' observer. The application to a distributed-parameter reactor system involves a unique modal model that approximates physical components, such as the reactor, by describing both states and parameters by an orthogonal expansion. The basic KFSLS parameter estimation is used to dynamically refine a spatially varying (distributed) parameter. The application of the distributed-parameter estimator is expected to complement a traditional nonlinear best-estimate simulation code by providing a mechanism for reducing both code input (modeling) and output (state-estimation) uncertainty in complex, distributed-parameter systems

  8. Estimation of sampling error uncertainties in observed surface air temperature change in China

    Science.gov (United States)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with the maximum value exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. The increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.
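
    Trend-plus-uncertainty figures of the form quoted above are slope ± standard error from a least-squares fit. A generic sketch, using a synthetic anomaly series rather than the Chinese SAT data:

```python
import math
import random

def trend_with_uncertainty(y):
    """OLS slope per time step and its standard error for a series y."""
    n = len(y)
    xbar, ybar = (n - 1) / 2.0, sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    slope = sum((x - xbar) * (yv - ybar) for x, yv in enumerate(y)) / sxx
    resid = [yv - ybar - slope * (x - xbar) for x, yv in enumerate(y)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    return slope, math.sqrt(s2 / sxx)

# Synthetic anomaly series: 0.02 K/step trend plus 0.1 K noise (illustrative)
random.seed(4)
series = [0.02 * k + random.gauss(0.0, 0.1) for k in range(60)]
slope, se = trend_with_uncertainty(series)
print(slope, se)
```

    Note that this standard error reflects only residual scatter about the fit; the sampling error component studied in the paper (sparse station coverage) would be added on top of it.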

  9. Sensitivity of Process Design due to Uncertainties in Property Estimates

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Jones, Mark Nicholas; Sarup, Bent

    2012-01-01

    The objective of this paper is to present a systematic methodology for performing analysis of sensitivity of process design due to uncertainties in property estimates. The methodology provides the following results: a) list of properties with critical importance on design; b) acceptable levels of...... in chemical processes. Among others, vapour pressure accuracy for azeotropic mixtures is critical and needs to be measured or estimated with a ±0.25% accuracy to satisfy acceptable safety levels in design....

  10. Sensitivity Analysis of Uncertainty Parameter based on MARS-LMR Code on SHRT-45R of EBR II

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seok-Ju; Kang, Doo-Hyuk; Seo, Jae-Seung [System Engineering and Technology Co., Daejeon (Korea, Republic of); Bae, Sung-Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeong, Hae-Yong [Sejong University, Seoul (Korea, Republic of)

    2016-10-15

    In order to assess the uncertainty quantification of the MARS-LMR code, the code has been improved by modifying the source code to accommodate the calculation process required for uncertainty quantification. In the present study, an Unprotected Loss of Flow (ULOF) transient is selected as a typical case of Anticipated Transient without Scram (ATWS), which belongs to the DEC category. The MARS-LMR input generation for EBR-II SHRT-45R and the execution works are performed using the PAPIRUS program. The sensitivity analysis is carried out with the uncertainty parameters of the MARS-LMR code for EBR-II SHRT-45R. Based on the results of the sensitivity analysis, dominant parameters with large sensitivity to the FoM are picked out. The dominant parameters selected are closely related to the development process of the ULOF event.

  11. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    Science.gov (United States)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulations are therefore vitally important in engineering design. We propose to develop theories and methodologies that automatically provide quantitative information about the reliability of a numerical simulation by estimating the numerical approximation error, the computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that control the error and uncertainty during the numerical simulation so that its reliability can be improved.
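    One standard way to quantify the numerical approximation error mentioned above is Richardson extrapolation from two grid resolutions. The sketch below applies it to a trapezoid-rule model problem; it is a generic illustration, not the authors' methodology:

```python
def richardson_error_estimate(f_coarse, f_fine, refinement=2.0, order=2):
    """Estimate the discretization error of the fine-grid solution.

    Assumes the leading error term scales as h**order, so refining the
    grid by 'refinement' shrinks it by refinement**order.
    """
    r_p = refinement ** order
    err_fine = (f_coarse - f_fine) / (r_p - 1.0)
    extrapolated = f_fine - err_fine
    return err_fine, extrapolated

# Model problem: trapezoid-rule integral of x**2 on [0, 1]; exact value 1/3.
def trapz_x2(n):
    h = 1.0 / n
    return h * (sum((i * h) ** 2 for i in range(1, n)) + 0.5)

coarse, fine = trapz_x2(8), trapz_x2(16)
err, extrap = richardson_error_estimate(coarse, fine)
print(err, extrap)
```

For this second-order scheme the estimated error matches the true fine-grid error and the extrapolated value recovers 1/3, which is the kind of quantitative reliability information the abstract calls for.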

  12. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    Science.gov (United States)

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s⁻¹ were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.

  13. Review of uncertainty estimates associated with models for assessing the impact of breeder reactor radioactivity releases

    International Nuclear Information System (INIS)

    Miller, C.; Little, C.A.

    1982-08-01

    The purpose is to summarize estimates based on currently available data of the uncertainty associated with radiological assessment models. The models being examined herein are those recommended previously for use in breeder reactor assessments. Uncertainty estimates are presented for models of atmospheric and hydrologic transport, terrestrial and aquatic food-chain bioaccumulation, and internal and external dosimetry. Both long-term and short-term release conditions are discussed. The uncertainty estimates presented in this report indicate that, for many sites, generic models and representative parameter values may be used to calculate doses from annual average radionuclide releases when these calculated doses are on the order of one-tenth or less of a relevant dose limit. For short-term, accidental releases, especially those from breeder reactors located in sites dominated by complex terrain and/or coastal meteorology, the uncertainty in the dose calculations may be much larger than an order of magnitude. As a result, it may be necessary to incorporate site-specific information into the dose calculation under these circumstances to reduce this uncertainty. However, even using site-specific information, natural variability and the uncertainties in the dose conversion factor will likely result in an overall uncertainty of greater than an order of magnitude for predictions of dose or concentration in environmental media following short-term releases.

  14. Study of the uncertainty in estimation of the exposure of non-human biota to ionising radiation.

    Science.gov (United States)

    Avila, R; Beresford, N A; Agüero, A; Broed, R; Brown, J; Iospje, M; Robles, B; Suañez, A

    2004-12-01

    Uncertainty in estimations of the exposure of non-human biota to ionising radiation may arise from a number of sources including values of the model parameters, empirical data, measurement errors and biases in the sampling. The significance of the overall uncertainty of an exposure assessment will depend on how the estimated dose compares with reference doses used for risk characterisation. In this paper, we present the results of a study of the uncertainty in estimation of the exposure of non-human biota using some of the models and parameters recommended in the FASSET methodology. The study was carried out for semi-natural terrestrial, agricultural and marine ecosystems, and for four radionuclides (137Cs, 239Pu, 129I and 237Np). The parameters of the radionuclide transfer models showed the highest sensitivity and contributed the most to the uncertainty in the predictions of doses to biota. The most important ones were related to the bioavailability and mobility of radionuclides in the environment, for example soil-to-plant transfer factors, the bioaccumulation factors for marine biota and the gut uptake fraction for terrestrial mammals. In contrast, the dose conversion coefficients showed low sensitivity and contributed little to the overall uncertainty. Radiobiological effectiveness contributed to the overall uncertainty of the dose estimations for alpha emitters although to a lesser degree than a number of transfer model parameters.

  15. Uncertainty estimation of the velocity model for stations of the TrigNet GPS network

    Science.gov (United States)

    Hackl, M.; Malservisi, R.; Hugentobler, U.

    2010-12-01

    Satellite based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) and noise. It has been shown that error models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectra analyses and maximum likelihood estimates is computationally expensive and is usually not carried out for every site, but the uncertainties are scaled by latitude dependent factors. Analyses of the South Africa continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. So we applied a method similar to the Allan Variance that is commonly used in the estimation of clock uncertainties and is able to account for time dependent probability density functions (colored noise) to the TrigNet time series. Comparisons with synthetic data show that the noise can be represented quite well by a power law model in combination with a seasonal signal in agreement with previous studies, which allows for a reliable estimation of the velocity error. Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Small differences may originate from non-normal distribution of the noise.
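    The Allan-variance-style analysis mentioned above can be sketched as the overlapping Allan variance of a position time series. This is an illustration of the statistic, not TrigNet's actual processing chain:

```python
import random

def allan_variance(series, tau):
    """Overlapping Allan variance at averaging window tau (in samples)."""
    n = len(series)
    # Window averages starting at every sample (overlapping estimates).
    means = [sum(series[i:i + tau]) / tau for i in range(n - tau + 1)]
    # Half the mean squared difference of averages spaced tau apart.
    diffs = [means[i + tau] - means[i] for i in range(len(means) - tau)]
    return 0.5 * sum(d * d for d in diffs) / len(diffs)

random.seed(0)
white = [random.gauss(0, 1) for _ in range(4000)]
# For white noise the Allan variance falls off as 1/tau; colored noise
# (flicker, random walk) decays more slowly, inflating rate uncertainties.
print(allan_variance(white, 4), allan_variance(white, 64))
```

Plotting the Allan variance against tau on a log-log scale reveals the noise character of the series, which is what allows a velocity error estimate that accounts for colored noise.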

  16. Data-Driven Model Uncertainty Estimation in Hydrologic Data Assimilation

    Science.gov (United States)

    Pathiraja, S.; Moradkhani, H.; Marshall, L.; Sharma, A.; Geenens, G.

    2018-02-01

    The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.

  17. Monte Carlo uncertainty analysis of dose estimates in radiochromic film dosimetry with single-channel and multichannel algorithms.

    Science.gov (United States)

    Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio

    2018-03-01

    To provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 are exposed in two different Varian linacs. They are read with an EPSON V800 flatbed scanner. The Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis such as the standard deviation and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. Also, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show a Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented. With the aid of this model and the use of Monte Carlo techniques, the uncertainties of dose estimates for single-channel and multichannel algorithms are estimated. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
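    The Monte Carlo approach described here can be stated generically: perturb the scanner readings according to their noise model, push each realization through the calibration curve, and characterize the resulting dose distribution. The calibration curve and noise level below are hypothetical, not the paper's fitted values:

```python
import random
import statistics

def dose_from_reading(net_od, a=10.0, b=35.0):
    """Hypothetical calibration curve mapping net optical density to dose (Gy)."""
    return a * net_od + b * net_od ** 2

def monte_carlo_dose(net_od, od_sigma, n=20000, seed=42):
    """Propagate reading noise through the calibration by Monte Carlo."""
    rng = random.Random(seed)
    doses = [dose_from_reading(net_od + rng.gauss(0.0, od_sigma))
             for _ in range(n)]
    mean = statistics.fmean(doses)
    std = statistics.stdev(doses)
    bias = mean - dose_from_reading(net_od)  # nonlinearity-induced bias
    return mean, std, bias

mean, std, bias = monte_carlo_dose(net_od=0.3, od_sigma=0.005)
print(round(mean, 2), round(std, 2), round(bias, 4))
```

Because the full empirical distribution of doses is retained, its shape (e.g. Gaussianity) can be inspected directly, as the authors do for the single-channel and multichannel estimates.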

  18. Estimation of uncertainty of measurements of 3D mechanisms after kinematic calibration

    International Nuclear Information System (INIS)

    Takamasu, K; Sato, O; Shimojima, K; Takahashi, S; Furutani, R

    2005-01-01

    Calibration methods for 3D mechanisms are necessary in order to use such mechanisms as coordinate measuring machines. The artifact calibration method, which calibrates a coordinate measuring machine using artifacts, is proposed taking account of the traceability of the mechanism. The geometric parameters describing the forward kinematics of the mechanism comprise kinematic parameters and form-deviation parameters. In this article, methods for estimating the measurement uncertainty of the calibrated coordinate measuring machine are formulated. First, the calculation that determines the values of the kinematic parameters by the least-squares method is formulated. Second, the uncertainty of the measuring machine is estimated using error propagation.
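    The two-step procedure described (least-squares fit of the kinematic parameters, then propagation of their covariance into measurement uncertainty) can be sketched for a toy one-axis mechanism with a linear scale-and-offset model. The artifact values and noise level are illustrative assumptions:

```python
def fit_scale_offset(readings, references, sigma):
    """Least-squares fit of x_true = scale*reading + offset from artifacts.

    Returns (scale, offset) and their covariance matrix, assuming
    independent Gaussian errors of standard deviation sigma.
    """
    n = len(readings)
    sx = sum(readings)
    sxx = sum(r * r for r in readings)
    sy = sum(references)
    sxy = sum(r * y for r, y in zip(readings, references))
    det = n * sxx - sx * sx
    scale = (n * sxy - sx * sy) / det
    offset = (sxx * sy - sx * sxy) / det
    # Covariance of (scale, offset) = sigma^2 * (A^T A)^-1 for design rows [r, 1].
    cov = [[sigma ** 2 * n / det, -sigma ** 2 * sx / det],
           [-sigma ** 2 * sx / det, sigma ** 2 * sxx / det]]
    return (scale, offset), cov

def measurement_uncertainty(reading, cov):
    """Propagate the parameter covariance to a measurement at 'reading'."""
    j = [reading, 1.0]  # Jacobian of scale*reading + offset w.r.t. (scale, offset)
    var = sum(j[i] * cov[i][k] * j[k] for i in range(2) for k in range(2))
    return var ** 0.5

params, cov = fit_scale_offset([0, 100, 200, 300],
                               [0.0, 100.2, 200.4, 300.6], sigma=0.05)
print(round(params[0], 4), round(measurement_uncertainty(150.0, cov), 4))
```

A real kinematic calibration fits many parameters to many artifact poses, but the structure is the same: a least-squares solve followed by a Jacobian-based propagation of the parameter covariance.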

  19. Estimation of spatial uncertainties of tomographic velocity models

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, M.; Du, Z.; Querendez, E. [SINTEF Petroleum Research, Trondheim (Norway)

    2012-12-15

    This research project aims to evaluate the possibility of assessing the spatial uncertainties in tomographic velocity model building in a quantitative way. The project is intended to serve as a test of whether accurate and specific uncertainty estimates (e.g., in meters) can be obtained. The project is based on Monte Carlo-type perturbations of the velocity model as obtained from the tomographic inversion guided by diagonal and off-diagonal elements of the resolution and the covariance matrices. The implementation and testing of this method was based on the SINTEF in-house stereotomography code, using small synthetic 2D data sets. To test the method the calculation and output of the covariance and resolution matrices was implemented, and software to perform the error estimation was created. The work included the creation of 2D synthetic data sets, the implementation and testing of the software to conduct the tests (output of the covariance and resolution matrices which are not implicitly provided by stereotomography), application to synthetic data sets, analysis of the test results, and creating the final report. The results show that this method can be used to estimate the spatial errors in tomographic images quantitatively. The results agree with the known errors for our synthetic models. However, the method can only be applied to structures in the model where the change of seismic velocity is larger than the predicted error of the velocity parameter amplitudes. In addition, the analysis is dependent on the tomographic method, e.g., regularization and parameterization. The conducted tests were very successful and we believe that this method could be developed further to be applied to third party tomographic images.

  20. Measuring Cross-Section and Estimating Uncertainties with the fissionTPC

    Energy Technology Data Exchange (ETDEWEB)

    Bowden, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manning, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sangiorgio, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seilhan, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-30

    The purpose of this document is to outline the prescription for measuring fission cross-sections with the NIFFTE fissionTPC and estimating the associated uncertainties. As such it will serve as a work planning guide for NIFFTE collaboration members and facilitate clear communication of the procedures used to the broader community.

  1. Quantifying uncertainty in NDSHA estimates due to earthquake catalogue

    Science.gov (United States)

    Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano

    2014-05-01

    The procedure for the neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPE), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not treated statistically as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for the magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values for each model. Instead, uncertainties are treated by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is therefore an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. 
Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate

  2. Top-down instead of bottom-up estimates of uncertainty in INAA results?

    International Nuclear Information System (INIS)

    Bode, P.; De Nadai Fernandes, E.A.

    2005-01-01

    The initial publication of the ISO Guide to the Expression of Uncertainty in Measurement (GUM) and many related documents has resulted in a worldwide awareness of the importance of a realistic estimate of the value reported after the ± sign. The evaluation of uncertainty in measurement, as introduced by the GUM, is derived from the principles applied in physical measurements. Many testing laboratories have already experienced large problems in applying these principles in e.g. (bio)chemical measurements, resulting in time-consuming evaluations and costly additional experiments. Other, more pragmatic and less costly approaches have been proposed to obtain a realistic estimate of the range in which the true value of the measurement may be found with a certain degree of probability. One of these approaches, the 'top-down method', is based on the standard deviation in the results of intercomparison data. This approach is much easier for tests for which it is either difficult to establish a full measurement equation, or if e.g. matrix-matching reference materials are absent. It has been demonstrated that the GUM 'bottom-up' approach of evaluating uncertainty in measurement can easily be applied in instrumental neutron activation analysis (INAA), as all significant sources of uncertainty can be evaluated. INAA is therefore a valuable technique to test the validity of the top-down approach. In this contribution, examples of the top-down evaluation of uncertainty in INAA derived from participation in intercomparison rounds and proficiency testing schemes will be presented. The results will be compared with the bottom-up evaluation of uncertainty, and the ease of applicability, validity and usefulness of both approaches will be discussed.
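    The two approaches contrasted here reduce to simple arithmetic: bottom-up combines the component standard uncertainties in quadrature, while top-down reads the uncertainty off the spread of intercomparison results. The uncertainty budget and laboratory values below are hypothetical illustrations:

```python
import statistics

def bottom_up(components):
    """GUM-style combination: root sum of squares of standard uncertainties."""
    return sum(u * u for u in components) ** 0.5

def top_down(lab_results):
    """Reproducibility standard deviation across intercomparison participants."""
    return statistics.stdev(lab_results)

# Hypothetical INAA budget (relative standard uncertainties): counting
# statistics, flux gradient, detector efficiency, sample geometry.
u_bottom = bottom_up([0.012, 0.008, 0.010, 0.005])
# Hypothetical reported mass fractions (mg/kg) from one intercomparison round.
u_top = top_down([10.12, 10.05, 9.98, 10.21, 9.95, 10.08])
print(round(u_bottom, 4), round(u_top, 3))
```

Comparing the two numbers (after putting them on the same relative scale) is essentially the validity test the authors propose: if the top-down spread greatly exceeds the bottom-up budget, some uncertainty source is missing from the budget.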

  3. Uncertainty in urban flood damage assessment due to urban drainage modelling and depth-damage curve estimation.

    Science.gov (United States)

    Freni, G; La Loggia, G; Notaro, V

    2010-01-01

    Due to the increased occurrence of flooding events in urban areas, many procedures for flood damage quantification have been defined in recent decades. The lack of large databases in most cases is overcome by combining the output of urban drainage models and damage curves linking flooding to expected damage. The application of advanced hydraulic models as diagnostic, design and decision-making support tools has become a standard practice in hydraulic research and application. Flooding damage functions are usually evaluated by a priori estimation of potential damage (based on the value of exposed goods) or by interpolating real damage data (recorded during historical flooding events). Hydraulic models have undergone continuous advancements, pushed forward by increasing computer capacity. The details of the flooding propagation process on the surface and the details of the interconnections between underground and surface drainage systems have been studied extensively in recent years, resulting in progressively more reliable models. The same level of advancement has not been reached with regard to damage curves, for which improvements are highly connected to data availability; this remains the main bottleneck in expected flooding damage estimation. Such functions are usually affected by significant uncertainty intrinsically related to the collected data and to the simplified structure of the adopted functional relationships. The present paper aimed to evaluate this uncertainty by comparing the intrinsic uncertainty connected to the construction of the depth-damage function to the hydraulic model uncertainty. In this way, the paper sought to evaluate the role of hydraulic model detail level in the wider context of flood damage estimation. This paper demonstrated that the use of detailed hydraulic models might not be justified because of the higher computational cost and the significant uncertainty in damage estimation curves. This uncertainty occurs mainly

  4. Applying the conservativeness principle to REDD to deal with the uncertainties of the estimates

    International Nuclear Information System (INIS)

    Grassi, Giacomo; Monni, Suvi; Achard, Frederic; Mollicone, Danilo; Federici, Sandro

    2008-01-01

    A common paradigm when the reduction of emissions from deforestation is estimated for the purpose of promoting it as a mitigation option in the context of the United Nations Framework Convention on Climate Change (UNFCCC) is that high uncertainties in input data (i.e., area change and C stock change per unit area) may seriously undermine the credibility of the estimates and therefore of reduced deforestation as a mitigation option. In this paper, we show how a series of concepts and methodological tools, already existing in UNFCCC decisions and IPCC guidance documents, may greatly help to deal with the uncertainties of the estimates of reduced emissions from deforestation.
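    In simplified form, the conservativeness principle credits not the central estimate of an emission reduction but a value discounted according to its uncertainty, so that overestimation is unlikely. A minimal sketch under that simplifying assumption (the numbers are hypothetical, not from IPCC guidance):

```python
def conservative_estimate(mean, rel_uncertainty, coverage_factor=1.0):
    """Discount an emission-reduction estimate by its relative uncertainty.

    A simplified reading of the conservativeness principle: credit
    mean * (1 - k * rel_uncertainty), floored at zero.
    """
    return max(0.0, mean * (1.0 - coverage_factor * rel_uncertainty))

# Hypothetical: 1.0 Mt CO2 of reduced deforestation emissions known to ±30%;
# only the discounted amount is credited.
print(conservative_estimate(1.0, 0.30))
```

Under this scheme, reducing the measurement uncertainty directly increases the creditable reduction, which aligns the incentive of the reporting party with better monitoring.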

  5. Applying the conservativeness principle to REDD to deal with the uncertainties of the estimates

    Energy Technology Data Exchange (ETDEWEB)

    Grassi, Giacomo; Monni, Suvi; Achard, Frederic [Institute for Environment and Sustainability, Joint Research Centre of the European Commission, I-21020 Ispra (Italy); Mollicone, Danilo [Department of Geography, University of Alcala de Henares, Madrid (Spain); Federici, Sandro

    2008-07-15

    A common paradigm when the reduction of emissions from deforestation is estimated for the purpose of promoting it as a mitigation option in the context of the United Nations Framework Convention on Climate Change (UNFCCC) is that high uncertainties in input data (i.e., area change and C stock change per unit area) may seriously undermine the credibility of the estimates and therefore of reduced deforestation as a mitigation option. In this paper, we show how a series of concepts and methodological tools, already existing in UNFCCC decisions and IPCC guidance documents, may greatly help to deal with the uncertainties of the estimates of reduced emissions from deforestation.

  6. Determining the Uncertainties in Prescribed Burn Emissions Through Comparison of Satellite Estimates to Ground-based Estimates and Air Quality Model Evaluations in Southeastern US

    Science.gov (United States)

    Odman, M. T.; Hu, Y.; Russell, A. G.

    2016-12-01

    Prescribed burning is practiced throughout the US, and most widely in the Southeast, for the purpose of maintaining and improving the ecosystem and reducing the wildfire risk. However, prescribed burn emissions contribute significantly to the trace gas and particulate matter loads in the atmosphere. In places where air quality is already stressed by other anthropogenic emissions, prescribed burns can lead to major health and environmental problems. Air quality modeling efforts are under way to assess the impacts of prescribed burn emissions. Operational forecasts of the impacts are also emerging for use in dynamic management of air quality as well as the burns. Unfortunately, large uncertainties exist in the process of estimating prescribed burn emissions, and these uncertainties limit the accuracy of the burn impact predictions. Prescribed burn emissions are estimated by using either ground-based information or satellite observations. When there is sufficient local information about the burn area, the types of fuels, their consumption amounts, and the progression of the fire, ground-based estimates are more accurate. In the absence of such information, satellites remain the only reliable source for emission estimation. To determine the level of uncertainty in prescribed burn emissions, we compared estimates derived from a burn permit database and other ground-based information to the estimates by the Biomass Burning Emissions Product derived from a constellation of NOAA and NASA satellites. Using these emissions estimates we conducted simulations with the Community Multiscale Air Quality (CMAQ) model and predicted trace gas and particulate matter concentrations throughout the Southeast for two consecutive burn seasons (2015 and 2016). In this presentation, we will compare model predicted concentrations to measurements at monitoring stations and evaluate if the differences are commensurate with our emission uncertainty estimates. We will also investigate if

  7. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    International Nuclear Information System (INIS)

    Koch, J.; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs
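    The stratified scheme recommended here, Latin hypercube sampling, can be sketched directly: split each parameter's range into n equiprobable strata, draw one value per stratum, and shuffle the stratum order independently per parameter. A minimal sketch on the unit hypercube (map each column through an inverse CDF for real parameter distributions):

```python
import random

def latin_hypercube(n_samples, n_params, seed=7):
    """Latin hypercube sample on the unit hypercube.

    Each parameter's [0, 1) range is split into n_samples equal strata;
    each stratum is sampled exactly once, and the stratum order is
    shuffled independently per parameter.
    """
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        columns.append(strata)
    # Transpose so each row is one model realization.
    return [tuple(col[k] for col in columns) for k in range(n_samples)]

samples = latin_hypercube(10, 3)
print(samples[0])
```

Because every stratum of every parameter is covered exactly once, the marginal distributions are sampled far more evenly than with plain random sampling, which is what allows smaller sample sizes for the same output precision.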

  8. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    Energy Technology Data Exchange (ETDEWEB)

    Koch, J. [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs.

  9. Fault-tolerant embedded system design and optimization considering reliability estimation uncertainty

    International Nuclear Information System (INIS)

    Wattanapongskorn, Naruemon; Coit, David W.

    2007-01-01

    In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly. Therefore, for reliability analysis studies and system optimization, it is meaningful to consider component reliability estimates as random variables with associated estimation uncertainty. In this new research, the system design process is formulated as a multiple-objective optimization problem to maximize an estimate of system reliability, and also, to minimize the variance of the reliability estimate. The two objectives are combined by penalizing the variance for prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing system redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with or without redundancy. For many design problems, multiple functionally equivalent software versions have failure correlation even if they have been independently developed. The failure correlation may result from faults in the software specification, faults from a voting algorithm, and/or related faults from any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied in solving this optimization problem, and reasonable and interesting results are obtained and discussed

  10. An estimation of uncertainties in containment P/T analysis using CONTEMPT/LT code

    International Nuclear Information System (INIS)

    Kang, Y.M.; Park, G.C.; Lee, U.C.; Kang, C.S.

    1991-01-01

    In a nuclear power plant, the containment design pressure and temperature (P/T) have traditionally been established with an unrealistic degree of conservatism, at the expense of plant economics. The uncertainties of the design P/T values therefore need to be well defined through an extensive uncertainty analysis using plant-specific input data and the models used in the computer code. This study estimates plant-specific uncertainties of the containment design P/T for the Kori-3 reactor using the Monte Carlo method. After a sensitivity study, the Kori-3 plant parameters and the Uchida heat transfer coefficient were selected for statistical treatment. The Monte Carlo analysis was performed with the response surface method, the CONTEMPT/LT code and the Latin Hypercube sampling technique. Finally, the design values based on 95%/95% probability are compared with the worst-case estimates to assess the design margin. (author)
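
A minimal sketch of the response-surface Monte Carlo procedure described above: a handful of "code runs" over a Latin Hypercube design, a quadratic surface fitted to them, and cheap sampling on the surface. The stand-in for a CONTEMPT/LT peak-pressure calculation and the input distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def containment_code(uchida_mult, break_flow):
    """Stand-in for one CONTEMPT/LT run: peak containment pressure (bar)."""
    return 3.0 + 0.8 * break_flow - 1.2 * uchida_mult + 0.3 * uchida_mult**2

# Step 1: a small Latin Hypercube design of "code runs" over two parameters.
n_runs = 25
u = (np.arange(n_runs)[:, None] + rng.random((n_runs, 2))) / n_runs
for j in range(2):
    rng.shuffle(u[:, j])
x1 = 0.5 + u[:, 0]           # Uchida heat transfer coefficient multiplier
x2 = 0.8 + 0.4 * u[:, 1]     # break-flow multiplier
y = containment_code(x1, x2)

# Step 2: fit a quadratic response surface to the code results by least squares.
basis = lambda a, b: np.column_stack([np.ones_like(a), a, b, a**2, b**2, a * b])
coef, *_ = np.linalg.lstsq(basis(x1, x2), y, rcond=None)

# Step 3: cheap Monte Carlo on the surface with assumed input distributions.
N = 100_000
s1 = rng.normal(1.0, 0.15, N)
s2 = rng.normal(1.0, 0.07, N)
pressure = basis(s1, s2) @ coef
p95 = np.percentile(pressure, 95)   # pressure exceeded in only 5% of realizations
```

The surface makes the hundred thousand samples affordable; only the 25 design points require the expensive containment code.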

  11. Revised cost savings estimate with uncertainty for enhanced sludge washing of underground storage tank waste

    International Nuclear Information System (INIS)

    DeMuth, S.

    1998-01-01

    Enhanced Sludge Washing (ESW) has been selected to reduce the amount of sludge-based underground storage tank (UST) high-level waste at the Hanford site. During the past several years, studies have been conducted to determine the cost savings derived from the implementation of ESW. The tank waste inventory and ESW performance estimates continue to be revised as characterization and development efforts advance. This study provides a new cost savings estimate based upon the most recent inventory and ESW performance revisions, and includes an estimate of the associated cost uncertainty. Whereas the author's previous cost savings estimates for ESW were compared against no sludge washing, this study assumes the baseline to be simple water washing, which more accurately reflects the retrieval activity alone. The revised ESW cost savings estimate for all UST waste at Hanford is $6.1B ± $1.3B at 95% confidence. This is based upon capital and operating cost savings, but does not include development costs. The development costs are assumed negligible since they should be at least an order of magnitude less than the savings. The overall cost savings uncertainty was derived from process performance uncertainties and baseline remediation cost uncertainties, as determined by the author's engineering judgment.
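
A band like ±$1.3B at 95% confidence is consistent with a GUM-style root-sum-of-squares combination of independent component uncertainties and a coverage factor of 2. The component values below are illustrative assumptions, not the author's figures.

```python
import math

# Hypothetical component standard uncertainties (in $B) for the savings estimate.
u_process   = 0.5   # process-performance (wash efficiency) uncertainty
u_baseline  = 0.4   # baseline remediation cost uncertainty
u_inventory = 0.2   # tank inventory uncertainty

# Combined standard uncertainty, assuming independent components (root-sum-square).
u_c = math.sqrt(u_process**2 + u_baseline**2 + u_inventory**2)

# Expanded uncertainty at ~95% confidence (coverage factor k = 2).
U = 2 * u_c
print(f"savings = $6.1B +/- ${U:.1f}B (95%)")
```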

  12. Aboveground Forest Biomass Estimation with Landsat and LiDAR Data and Uncertainty Analysis of the Estimates

    OpenAIRE

    Dengsheng Lu; Qi Chen; Guangxing Wang; Emilio Moran; Mateus Batistella; Maozhen Zhang; Gaia Vaglio Laurin; David Saah

    2012-01-01

    Landsat Thematic Mapper (TM) imagery has long been the dominant data source for forest biomass estimation, and recently LiDAR has offered an important new structural data stream. On the other hand, forest biomass uncertainty analysis research has only recently received sufficient attention due to the difficulty in collecting reference data. This paper provides a brief overview of current forest biomass estimation methods using both TM and LiDAR data. A case study is then presented that demonst...

  13. Reduced uncertainty of regional scale CLM predictions of net carbon fluxes and leaf area indices with estimated plant-specific parameters

    Science.gov (United States)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2016-04-01

    Reliable estimates of carbon fluxes and states at regional scales are required to reduce uncertainties in regional carbon balance estimates and to support decision making in environmental politics. In this work the Community Land Model version 4.5 (CLM4.5-BGC) was applied at a high spatial resolution (1 km2) to the Rur catchment in western Germany. In order to improve the model-data consistency of net ecosystem exchange (NEE) and leaf area index (LAI) for this study area, five plant functional type (PFT)-specific CLM4.5-BGC parameters were estimated with time series of half-hourly NEE data for one year in 2011/2012, using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, a Markov chain Monte Carlo (MCMC) approach. The parameters were estimated separately for four different plant functional types (needleleaf evergreen temperate tree, broadleaf deciduous temperate tree, C3 grass and C3 crop) at four different sites located inside or close to the Rur catchment. We evaluated modeled NEE for one year in 2012/2013 against NEE measured at seven eddy covariance sites in the catchment, including the four parameter estimation sites. Modeled LAI was evaluated by means of LAI derived from remotely sensed RapidEye images acquired on about 18 days in 2011/2012. Performance indices were based on a comparison between measurements and (i) a reference run with CLM default parameters, and (ii) a 60-instance CLM ensemble with parameters sampled from the DREAM posterior probability density functions (pdfs). The difference between the observed and simulated NEE sum was reduced by 23% when estimated parameters were used as input instead of default parameters. The mean absolute difference between modeled and measured LAI was reduced by 59% on average. Simulated LAI was improved not only in terms of the absolute value but in some cases also in terms of the timing (beginning of vegetation onset), which was directly related to a substantial improvement of the NEE estimates in

  14. Estimation of sedimentary proxy records together with associated uncertainty

    OpenAIRE

    Goswami, B.; Heitzig, J.; Rehfeld, K.; Marwan, N.; Anoop, A.; Prasad, S.; Kurths, J.

    2014-01-01

    Sedimentary proxy records constitute a significant portion of the recorded evidence that allows us to investigate paleoclimatic conditions and variability. However, uncertainties in the dating of proxy archives limit our ability to fix the timing of past events and interpret proxy record intercomparisons. While there are various age-modeling approaches to improve the estimation of the age–depth relations of archives, relatively little focus has been placed on the propagation...

  15. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions

    Science.gov (United States)

    Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong

    2016-01-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.

  16. Pragmatic aspects of uncertainty propagation: A conceptual review

    KAUST Repository

    Thacker, W.Carlisle; Iskandarani, Mohamad; Gonçalves, Rafael C.; Srinivasan, Ashwanth; Knio, Omar

    2015-01-01

    When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
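
The two surrogate strategies the abstract compares can be sketched for a one-dimensional toy response: a few "expensive" simulations are interpolated by a global polynomial and by a Gaussian process, and the input uncertainty is then propagated through the cheap surrogates. The model, kernel length scale and input distribution are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    """Stand-in for a costly model's scalar response to one uncertain input."""
    return np.sin(3 * x) + 0.5 * x

# A handful of affordable simulations to interpolate.
x_train = np.linspace(-1, 1, 7)
y_train = expensive_model(x_train)

# Surrogate 1: global polynomial interpolation (degree 6 through 7 points).
poly = np.polynomial.polynomial.Polynomial.fit(x_train, y_train, deg=6)

# Surrogate 2: Gaussian-process interpolation with a squared-exponential kernel.
def kern(a, b, ell=0.4):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

alpha = np.linalg.solve(kern(x_train, x_train) + 1e-10 * np.eye(7), y_train)
gp = lambda x: kern(np.atleast_1d(x), x_train) @ alpha

# Propagate an assumed input distribution through both cheap surrogates.
xs = rng.normal(0.0, 0.3, 50_000)
std_poly = np.std(poly(xs))
std_gp = np.std(gp(xs))
std_true = np.std(expensive_model(xs))
```

Both surrogates pass through the training simulations exactly; how well they reproduce the response's variability between and beyond those points is what distinguishes the two interpolation choices in practice.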

  17. Pragmatic aspects of uncertainty propagation: A conceptual review

    KAUST Repository

    Thacker, W.Carlisle

    2015-09-11

    When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.

  18. Estimation of CO2 emissions from China’s cement production: Methodologies and uncertainties

    International Nuclear Information System (INIS)

    Ke, Jing; McNeil, Michael; Price, Lynn; Khanna, Nina Zheng; Zhou, Nan

    2013-01-01

    In 2010, China’s cement output was 1.9 Gt, which accounted for 56% of world cement production. Total carbon dioxide (CO2) emissions from Chinese cement production could therefore exceed 1.2 Gt. The magnitude of emissions from this single industrial sector in one country underscores the need to understand the uncertainty of current estimates of cement emissions in China. This paper compares several methodologies for calculating CO2 emissions from cement production, including the three main components of emissions: direct emissions from the calcination process for clinker production, direct emissions from fossil fuel combustion and indirect emissions from electricity consumption. This paper examines in detail the differences between common methodologies for each emission component, and considers their effect on total emissions. We then evaluate the overall level of uncertainty implied by the differences among methodologies according to recommendations of the Joint Committee for Guides in Metrology. We find a relative uncertainty in China’s cement-related emissions in the range of 10 to 18%. This result highlights the importance of understanding and refining methods of estimating emissions in this important industrial sector. - Highlights: ► CO2 emission estimates are critical given China’s cement production scale. ► Methodological differences for emission components are compared. ► Results show relative uncertainty in China’s cement-related emissions of about 10%. ► IPCC Guidelines and CSI Cement CO2 and Energy Protocol are recommended
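
A hedged sketch of how per-component uncertainties combine into a relative uncertainty for total emissions, GUM-style and assuming independent components; the component magnitudes and relative uncertainties below are invented for illustration, not the paper's values.

```python
import math

# Hypothetical annual emission components (Mt CO2) with relative standard
# uncertainties inferred from methodological spreads (numbers are assumptions).
components = {
    "calcination (clinker)":  (700.0, 0.05),
    "fossil fuel combustion": (350.0, 0.10),
    "electricity (indirect)": (150.0, 0.15),
}

total = sum(e for e, _ in components.values())

# Combined absolute uncertainty: root-sum-of-squares of the absolute
# component uncertainties, then expressed relative to the total.
u_abs = math.sqrt(sum((e * r) ** 2 for e, r in components.values()))
u_rel = u_abs / total
```

Note how the largest component (calcination) dominates the combined uncertainty even though its relative uncertainty is the smallest, which is why refining the clinker-emission methodology matters most.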

  19. Uncertainty Estimates in Cold Critical Eigenvalue Predictions

    International Nuclear Information System (INIS)

    Karve, Atul A.; Moore, Brian R.; Mills, Vernon W.; Marrotte, Gary N.

    2005-01-01

    A recent cycle of a General Electric boiling water reactor performed two beginning-of-cycle local cold criticals. The eigenvalues estimated by the core simulator were 0.99826 and 1.00610. The large spread between them (= 0.00784) is a source of concern, and it is studied here. An analysis process is developed using statistical techniques, where first a transfer function relating the core observable Y (eigenvalue) to various factors (X's) is established. Engineering judgment is used to recognize the best candidates for X's. They are identified as power-weighted assembly k∞'s of selected assemblies around the withdrawn rods. These are a small subset of many X's that could potentially influence Y. However, the intention here is not to do a comprehensive study by accounting for all the X's. Rather, the scope is to demonstrate that the process developed is reasonable and to show its applicability to performing detailed studies. Variability in X's is obtained by perturbing nodal k∞'s since they directly influence the buckling term in the quasi-two-group diffusion equation model of the core simulator. Any perturbations introduced in them are bounded by standard well-established uncertainties. The resulting perturbations in the X's may not necessarily be directly correlated to physical attributes, but they encompass numerous biases and uncertainties credited to input and modeling uncertainties. The 'vital few' are separated from the 'unimportant many' X's, and then they are subgrouped according to assembly type, location, exposure, and control rod insertion. The goal is to study how the subgroups influence Y in order to have a better understanding of the variability observed in it

  20. Uncertainty in reliability estimation : when do we know everything we know?

    NARCIS (Netherlands)

    Houben, M.J.H.A.; Sonnemans, P.J.M.; Newby, M.J.; Bris, R.; Guedes Soares, C.; Martorell, S.

    2009-01-01

    In this paper we demonstrate the use of an adapted Grounded Theory approach, through interviews and their analysis, to determine explicit uncertainty (known unknowns) for reliability estimation in the early phases of product development. We have applied the adapted Grounded Theory approach in a case

  1. Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates

    Science.gov (United States)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2018-01-01

    Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter values had previously been estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ∑NEE) across the ensemble members, from ~2-3 g C m-2 yr-1 (with uncertain parameters) to ~45 g C m-2 yr-1 (C3 grass) and ~75 g C m-2 yr-1 (C3 crops) with perturbed forcings. This increase in uncertainty is related to the impact of the meteorological forcings on leaf onset and senescence, and enhanced/reduced drought stress related to perturbation of precipitation. The NEE uncertainty for the forest plant functional type (PFT) was considerably lower (σ∑NEE ~4.0-13.5 g C

  2. Analysis of uncertainties in a probabilistic seismic hazard estimation, example for France

    International Nuclear Information System (INIS)

    Beauval, C.

    2003-12-01

    This thesis proposes a new methodology that makes it possible to pinpoint the key parameters that control probabilistic seismic hazard assessment (PSHA) and, at the same time, to quantify the impact of the uncertainties in these parameters on hazard estimates. Cornell-McGuire's method is used here. First, uncertainties in magnitude and location determinations are modeled and quantified: the resulting variability in hazard estimates ranges between 5% and 25% (COV), depending on the site and the return period. An impact study is then performed in order to determine the hierarchy among the impacts on hazard of the choices of four other parameters: the intensity-magnitude correlation, the minimum and maximum magnitudes, and the truncation of the attenuation relationship. The results at 34 Hz (PGA) indicate that the maximum magnitude is the least influential parameter (from 100 to 10000 years), whereas the intensity-magnitude correlation and the truncation of ground motion predictions (>2σ) are the controlling parameters at all return periods (up to a 30% decrease each at 10000 years). An increase in the minimum magnitude contributing to the hazard, from 3.5 to 4.5, can also produce non-negligible impacts at small return periods (up to a 20% decrease in hazard results at 475 years). Finally, the overall variability in hazard estimates due to the combined choices of the four parameters can reach up to 30% (COV, at 34 Hz). For lower frequencies (<5 Hz), the overall variability increases and the maximum magnitude becomes a controlling parameter. Therefore, the variability of estimates due to catalog uncertainties and to the choices of these four parameters must be taken into account in all probabilistic seismic hazard studies in France. To reduce variability in hazard estimates, future research should concentrate on the elaboration of an appropriate intensity-magnitude correlation, as well as on a more realistic way of taking ground motion dispersion into account. (author)
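
Why truncating the attenuation relationship lowers the computed hazard can be illustrated with a toy Cornell-McGuire hazard integral. The Gutenberg-Richter parameters, attenuation coefficients and target level below are invented, and the truncation is implemented as a simple cut at +2σ rather than a renormalized distribution.

```python
import numpy as np
from statistics import NormalDist

norm = NormalDist()

# Truncated-exponential (Gutenberg-Richter) magnitude density on [mmin, mmax].
b, mmin, mmax = 1.0, 4.5, 7.0
beta = b * np.log(10)
m = np.linspace(mmin, mmax, 501)
f_m = beta * np.exp(-beta * (m - mmin)) / (1 - np.exp(-beta * (mmax - mmin)))

# Hypothetical attenuation at a fixed distance: ln(PGA) = a + c*m + sigma*eps.
a_c, c_c, sigma = -6.0, 1.0, 0.6
ln_target = np.log(1.0)                    # target PGA of 1.0 g
z = (ln_target - (a_c + c_c * m)) / sigma  # standardized exceedance threshold

p_full = np.array([1 - norm.cdf(zi) for zi in z])
p_trunc = np.where(z > 2.0, 0.0, p_full)   # ground motion capped at +2 sigma

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

rate = 0.05                                # annual rate of events with M >= mmin
haz_full = rate * trapz(p_full * f_m, m)
haz_trunc = rate * trapz(p_trunc * f_m, m)
```

At high target levels, exceedances come from the upper tail of the ground-motion scatter, so removing that tail reduces the annual exceedance rate, which is the mechanism behind the large impact reported for truncation.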

  3. Application of a virtual coordinate measuring machine for measurement uncertainty estimation of aspherical lens parameters

    International Nuclear Information System (INIS)

    Küng, Alain; Meli, Felix; Nicolet, Anaïs; Thalmann, Rudolf

    2014-01-01

    Tactile ultra-precise coordinate measuring machines (CMMs) are very attractive for accurately measuring optical components with high slopes, such as aspheres. The METAS µ-CMM, which exhibits a single point measurement repeatability of a few nanometres, is routinely used for measurement services of microparts, including optical lenses. However, estimating the measurement uncertainty is very demanding. Because of the many combined influencing factors, an analytic determination of the uncertainty of parameters that are obtained by numerical fitting of the measured surface points is almost impossible. The application of numerical simulation (Monte Carlo methods) using a parametric fitting algorithm coupled with a virtual CMM based on a realistic model of the machine errors offers an ideal solution to this complex problem: to each measurement data point, a simulated measurement variation calculated from the numerical model of the METAS µ-CMM is added. Repeated several hundred times, these virtual measurements deliver the statistical data for calculating the probability density function, and thus the measurement uncertainty for each parameter. Additionally, any possible cross-correlation between parameters can be analyzed. This method can be applied for the calibration and uncertainty estimation of any parameter of the equation representing a geometric element. In this article, we present the numerical simulation model of the METAS µ-CMM and the application of a Monte Carlo method for the uncertainty estimation of measured asphere parameters. (paper)

  4. Best-estimate reactor core monitor using state feedback strategies to resolve uncertainties

    International Nuclear Information System (INIS)

    Martin, R.P.

    1997-01-01

    The development and demonstration of a new algorithm for quantifying uncertainty in best-estimate simulation codes has been investigated. Demonstration is given by way of a prototype reactor core monitor. The architecture of this monitor integrates a distributed parameter estimation technique and the infrastructure required to support this control theory-based algorithm into a production-grade best-estimate simulation code. The Kalman filter with the sequential least-squares parameter estimation algorithm has been extended for application into the computational environment of a best-estimate simulation code, i.e., RELAP5/DOE. In control system terminology this configuration can be thought of as a best-estimate observer

  5. Method for estimating effects of unknown correlations in spectral irradiance data on uncertainties of spectrally integrated colorimetric quantities

    Science.gov (United States)

    Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki

    2017-08-01

    Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis. Varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities at different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for an unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach as high as 37.2 K in unfavorable conditions, when calculations assuming full correlation give zero uncertainty, and calculations assuming no correlations yield the expanded uncertainties of 5.6 K and 12.1 K, with wavelength steps of 1 nm and 5 nm used in spectral integrations, respectively. We also show that there is an absolute limit of 60.2 K in the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method. The revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.
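
The scenario-based Monte Carlo idea above can be sketched with a toy integrated quantity in place of real CIE colorimetry: the same 1% spectral uncertainty yields very different output spreads depending on the assumed correlation of the spectral errors. The blue/red ratio, the single sinusoidal base function and all numbers are stand-ins, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(7)
wl = np.arange(380, 781, 5.0)             # wavelength grid, nm

def planck(wl_nm, T):
    """Relative spectral irradiance of a Planckian radiator."""
    wl_m = wl_nm * 1e-9
    c2 = 1.4388e-2                        # second radiation constant, m K
    return wl_m**-5 / (np.exp(c2 / (wl_m * T)) - 1)

spectrum = planck(wl, 2856.0)             # ~ CIE Standard Illuminant A
u_rel = 0.01                              # 1% standard uncertainty per wavelength

def blue_red_ratio(s):
    """Stand-in scalar for a colorimetric quantity (not real CIE math)."""
    return s[wl < 500].sum() / s[wl > 600].sum()

q0 = blue_red_ratio(spectrum)

# Scenario A: fully correlated errors -- one scale factor distorts everything.
qa = np.array([blue_red_ratio(spectrum * (1 + u_rel * rng.standard_normal()))
               for _ in range(2000)])

# Scenario B: uncorrelated errors -- independent deviation at each wavelength.
qb = np.array([blue_red_ratio(spectrum * (1 + u_rel * rng.standard_normal(wl.size)))
               for _ in range(2000)])

# Scenario C: smooth correlated error functions built from a low-order base
# function (a crude stand-in for the paper's spectral base functions).
phase = rng.uniform(0, 2 * np.pi, 2000)
qc = np.array([blue_red_ratio(spectrum * (1 + u_rel * np.sqrt(2) *
               np.sin(2 * np.pi * (wl - 380) / 400 + p))) for p in phase])
```

A pure scale factor cancels in the ratio, so full correlation gives essentially zero uncertainty, while smooth tilting error functions (scenario C) distort blue and red differently and produce the largest spread, mirroring the paper's finding that unknown correlations can dominate.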

  6. Uncertainty estimation and multi sensor fusion for kinematic laser tracker measurements

    Science.gov (United States)

    Ulrich, Thomas

    2013-08-01

    Laser trackers are widely used to measure kinematic tasks such as tracking robot movements. Common methods to evaluate the uncertainty in the kinematic measurement include approximations specified by the manufacturers, various analytical adjustment methods and the Kalman filter. In this paper a new, real-time technique is proposed, which estimates the 4D-path (3D-position + time) uncertainty of an arbitrary path in space. Here a hybrid system estimator is applied in conjunction with the kinematic measurement model. This method can be applied to processes, which include various types of kinematic behaviour, constant velocity, variable acceleration or variable turn rates. The new approach is compared with the Kalman filter and a manufacturer's approximations. The comparison was made using data obtained by tracking an industrial robot's tool centre point with a Leica laser tracker AT901 and a Leica laser tracker LTD500. It shows that the new approach is more appropriate to analysing kinematic processes than the Kalman filter, as it reduces overshoots and decreases the estimated variance. In comparison with the manufacturer's approximations, the new approach takes account of kinematic behaviour with an improved description of the real measurement process and a reduction in estimated variance. This approach is therefore well suited to the analysis of kinematic processes with unknown changes in kinematic behaviour as well as the fusion among laser trackers.
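
For reference, the baseline the proposed estimator is compared against, a standard constant-velocity Kalman filter on noisy position readings, can be sketched in one dimension as follows; all noise levels and the simulated trajectory are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 0.01                                 # tracker sampling interval, s
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity state transition
H = np.array([[1.0, 0.0]])                # we observe position only
Q = 1e-4 * np.array([[dt**3 / 3, dt**2 / 2],
                     [dt**2 / 2, dt]])    # process noise (assumed accel. noise)
R = np.array([[1e-6]])                    # measurement noise (1 mm std -> 1e-6 m^2)

# Simulate a target moving at constant velocity with noisy position readings.
t = np.arange(500) * dt
truth = 0.2 * t
meas = truth + 1e-3 * rng.standard_normal(t.size)

x = np.array([0.0, 0.0])                  # state: [position, velocity]
P = np.eye(2)                             # large initial uncertainty
est, var = [], []
for z in meas:
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])
    var.append(P[0, 0])
```

The filter's estimated variance shrinks as measurements accumulate; the paper's criticism is that when the kinematic behaviour changes (variable acceleration, turns), a fixed model like this one overshoots, which motivates the hybrid system estimator.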

  7. A new evaluation of the uncertainty associated with CDIAC estimates of fossil fuel carbon dioxide emission

    Directory of Open Access Journals (Sweden)

    Robert J. Andres

    2014-07-01

    Three uncertainty assessments associated with the global total of carbon dioxide emitted from fossil fuel use and cement production are presented. Each assessment has its own strengths and weaknesses and none gives a full uncertainty assessment of the emission estimates. This approach grew out of the lack of independent measurements at the spatial and temporal scales of interest. Issues of dependent and independent data are considered as well as the temporal and spatial relationships of the data. The result is a multifaceted examination of the uncertainty associated with fossil fuel carbon dioxide emission estimates. The three assessments collectively give a range that spans from 1.0 to 13% (2σ). Greatly simplifying the assessments gives a global fossil fuel carbon dioxide uncertainty value of 8.4% (2σ). In the largest context presented, the determination of fossil fuel emission uncertainty is important for a better understanding of the global carbon cycle and its implications for the physical, economic and political world.

  8. ON THE ESTIMATION OF RANDOM UNCERTAINTIES OF STAR FORMATION HISTORIES

    Energy Technology Data Exchange (ETDEWEB)

    Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ, 85734 (United States)

    2013-09-20

    The standard technique for measurement of random uncertainties of star formation histories (SFHs) is the bootstrap Monte Carlo, in which the color-magnitude diagram (CMD) is repeatedly resampled. The variation in SFHs measured from the resampled CMDs is assumed to represent the random uncertainty in the SFH measured from the original data. However, this technique systematically and significantly underestimates the uncertainties for times in which the measured star formation rate is low or zero, leading to overly (and incorrectly) high confidence in that measurement. This study proposes an alternative technique, the Markov Chain Monte Carlo (MCMC), which samples the probability distribution of the parameters used in the original solution to directly estimate confidence intervals. While the most commonly used MCMC algorithms are incapable of adequately sampling a probability distribution that can involve thousands of highly correlated dimensions, the Hybrid Monte Carlo algorithm is shown to be extremely effective and efficient for this particular task. Several implementation details, such as the handling of implicit priors created by parameterization of the SFH, are discussed in detail.
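
The failure mode described above, bootstrap uncertainty collapsing to zero where the measured rate is zero while posterior sampling does not, can be demonstrated with a toy binned counting problem. The counts are invented, and a conjugate Gamma posterior stands in for the MCMC sampler.

```python
import numpy as np

rng = np.random.default_rng(5)

# Counts of "stars" per age bin from a hypothetical CMD fit; bin 2 is empty.
observed = np.array([12, 7, 0, 4])
stars = np.repeat(np.arange(4), observed)  # the individual stars, labeled by bin

# Bootstrap: resample stars with replacement, re-bin, look at the spread.
boot = np.array([np.bincount(rng.choice(stars, stars.size), minlength=4)
                 for _ in range(4000)])
boot_std = boot.std(axis=0)
# Bin 2 can never receive a star -> bootstrap reports exactly zero uncertainty.

# Posterior sampling: a Poisson likelihood with a flat prior on each bin's rate
# gives a Gamma(k + 1, 1) posterior, which is what an MCMC would sample from.
post = rng.gamma(observed + 1, 1.0, size=(4000, 4))
post_std = post.std(axis=0)
# The posterior assigns the empty bin a genuine, nonzero uncertainty (~1 count).
```

This is the abstract's point in miniature: resampling the data cannot create counts that were never observed, so the bootstrap is overconfident exactly where the star formation rate is low or zero.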

  9. ON THE ESTIMATION OF RANDOM UNCERTAINTIES OF STAR FORMATION HISTORIES

    International Nuclear Information System (INIS)

    Dolphin, Andrew E.

    2013-01-01

    The standard technique for measurement of random uncertainties of star formation histories (SFHs) is the bootstrap Monte Carlo, in which the color-magnitude diagram (CMD) is repeatedly resampled. The variation in SFHs measured from the resampled CMDs is assumed to represent the random uncertainty in the SFH measured from the original data. However, this technique systematically and significantly underestimates the uncertainties for times in which the measured star formation rate is low or zero, leading to overly (and incorrectly) high confidence in that measurement. This study proposes an alternative technique, the Markov Chain Monte Carlo (MCMC), which samples the probability distribution of the parameters used in the original solution to directly estimate confidence intervals. While the most commonly used MCMC algorithms are incapable of adequately sampling a probability distribution that can involve thousands of highly correlated dimensions, the Hybrid Monte Carlo algorithm is shown to be extremely effective and efficient for this particular task. Several implementation details, such as the handling of implicit priors created by parameterization of the SFH, are discussed in detail

  10. Uncertainty Model for Total Solar Irradiance Estimation on Australian Rooftops

    Science.gov (United States)

    Al-Saadi, Hassan; Zivanovic, Rastko; Al-Sarawi, Said

    2017-11-01

    The installation of solar panels on Australian rooftops has been on the rise for the last few years, especially in urban areas. This motivates academic researchers, distribution network operators and engineers to accurately address the level of uncertainty resulting from grid-connected solar panels. The main source of uncertainty is the intermittent nature of radiation; therefore, this paper presents a new model to estimate the total radiation incident on a tilted solar panel. The model is driven by the clearness index, which is factorized with a probability distribution, with special attention paid to Australia through the use of a best-fit correlation for the diffuse fraction. The validity of the model is assessed with four goodness-of-fit techniques. In addition, the quasi-Monte Carlo and sparse grid methods are used as sampling and uncertainty computation tools, respectively. High-resolution solar irradiance data for the city of Adelaide were used for this assessment, with the outcome indicating a satisfactory agreement between the actual data variation and the model.

  11. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    Science.gov (United States)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

    In addition to the numerous planning and executive challenges, underground excavation in urban areas is always followed by certain destructive effects, especially on the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the amount of surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted amounts with the actual data from instrumentation was used to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.

  12. Analysis of uncertainties in the estimates of nitrous oxide and methane emissions in the UK's greenhouse gas inventory for agriculture

    Science.gov (United States)

    Milne, Alice E.; Glendining, Margaret J.; Bellamy, Pat; Misselbrook, Tom; Gilhespy, Sarah; Rivas Casado, Monica; Hulin, Adele; van Oijen, Marcel; Whitmore, Andrew P.

    2014-01-01

    The UK's greenhouse gas inventory for agriculture uses a model based on the IPCC Tier 1 and Tier 2 methods to estimate the emissions of methane and nitrous oxide from agriculture. The inventory calculations are disaggregated at country level (England, Wales, Scotland and Northern Ireland). Until now, no detailed assessment of the uncertainties in the estimates of emissions had been made. We used Monte Carlo simulation to perform such an analysis. We collated information on the uncertainties of each of the model inputs. These uncertainties propagate through the model and result in uncertainties in the estimated emissions. Using a sensitivity analysis, we found that in England and Scotland the uncertainty in the emission factor for emissions from N inputs (EF1) affected uncertainty the most, whereas in Wales and Northern Ireland the emission factor for N leaching and runoff (EF5) had greater influence. We showed that if the uncertainty in any one of these emission factors is reduced by 50%, the uncertainty in emissions of nitrous oxide reduces by 10%. The uncertainty in the methane emission factors for enteric fermentation in cows and sheep most affected the uncertainty in methane emissions. When inventories are disaggregated (as that for the UK is), correlation between separate instances of each emission factor will affect the uncertainty in emissions. As more countries move towards inventory models with disaggregation, it is important that the IPCC give firm guidance on this topic.
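    The Monte Carlo propagation step can be sketched in a few lines. The numbers below are purely illustrative (not the UK inventory's values); the point is that sampling the uncertain inputs and pushing each draw through the emission model yields the output distribution directly:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical Tier-1-style calculation: direct N2O-N from fertiliser N inputs.
n_input = rng.normal(1.0e6, 0.05e6, n)     # kg N applied (assumed 5% uncertainty)
ef1 = rng.lognormal(np.log(0.01), 0.3, n)  # EF1, kg N2O-N per kg N (assumed spread)

# Propagate: each Monte Carlo draw gives one plausible emission total.
emission = n_input * ef1
mean, sd = emission.mean(), emission.std()
print(f"mean = {mean:.3e} kg N2O-N, CV = {sd / mean:.1%}")
```

    With these assumed spreads the emission factor dominates the output uncertainty, mirroring the sensitivity result reported for EF1 above.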

  13. Uncertainty and sensitivity studies supporting the interpretation of the results of TVO I/II PRA

    International Nuclear Information System (INIS)

    Holmberg, J.

    1992-01-01

    A comprehensive Level 1 probabilistic risk assessment (PRA) has been performed for the TVO I/II nuclear power units. As a part of the PRA project, uncertainties of risk models and methods were systematically studied in order to describe them and to demonstrate their impact on the results. The uncertainty study was divided into two phases: a qualitative and a quantitative study. The qualitative study contained identification of uncertainties and qualitative assessments of their importance. The PRA was introduced, and identified assumptions and uncertainties behind the models were documented. The most significant uncertainties were selected by importance measures or other judgements for further quantitative studies. The quantitative study included sensitivity studies and propagation of uncertainty ranges. In the sensitivity studies, uncertain assumptions or parameters were varied in order to illustrate the sensitivity of the models. The propagation of the uncertainty ranges demonstrated the impact of the statistical uncertainties of the parameter values. The Monte Carlo method was used as the propagation method. The most significant uncertainties were those involved in modelling human interactions, dependences and common cause failures (CCFs), loss of coolant accident (LOCA) frequencies and pressure suppression. The qualitative mapping out of the uncertainty factors turned out to be useful in planning the quantitative studies. It also served as an internal review of the assumptions made in the PRA. The sensitivity studies were perhaps the most advantageous part of the quantitative study because they allowed individual analyses of the significance of the uncertainty sources identified. The uncertainty study was found to be a reasonable way of systematically and critically assessing uncertainties in a risk analysis. 
The usefulness of this study depends on the decision maker (power company) since uncertainty studies are primarily carried out to support decision making when uncertainties are

  14. Uncertainties in Early Stage Capital Cost Estimation of Process Design – A case study on biorefinery design

    Directory of Open Access Journals (Sweden)

    Gurkan Sin

    2015-02-01

    Capital investment, next to product demand, sales and production costs, is one of the key metrics commonly used for project evaluation and feasibility assessment. Estimating the investment costs of a new product/process alternative during early stage design is a challenging task. This is especially important in biorefinery research, where available information on and experience with new technologies is limited. A systematic methodology for uncertainty analysis of cost data is proposed that employs (a) bootstrapping as a regression method when cost data are available and (b) the Monte Carlo technique as an error propagation method based on expert input when cost data are not available. Four well-known models for early stage cost estimation are reviewed and analyzed using the methodology. The significance of uncertainties in cost data for early stage process design is highlighted using the synthesis and design of a biorefinery as a case study. The impact of uncertainties in cost estimation on the identification of optimal processing paths is found to be profound. To tackle this challenge, a comprehensive techno-economic risk analysis framework is presented to enable robust decision making under uncertainty. One result using an order-of-magnitude estimate shows that the production of diethyl ether and 1,3-butadiene are the most promising, with economic risks due to uncertainties in cost estimation of 0.24 MM$/a and 4.6 MM$/a, respectively.
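    Option (a), bootstrapping a cost regression, can be sketched as follows. The capacity-cost pairs and the power-law form are illustrative assumptions, not the paper's data; the pattern is to refit the model on resampled data points and read parameter uncertainty off the bootstrap distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative capacity (kt/a) vs installed cost (MM$) pairs; not real plant data.
cap  = np.array([10, 20, 40, 60, 100, 150, 200], float)
cost = np.array([12, 20, 33, 44, 65, 88, 108], float)

# Fit a capacity-exponent power law cost = a * cap**b in log space.
X, Y = np.log(cap), np.log(cost)
b, loga = np.polyfit(X, Y, 1)

# Bootstrap the data pairs to get the sampling distribution of the exponent.
boots = []
idx = np.arange(cap.size)
for _ in range(2000):
    s = rng.choice(idx, idx.size, replace=True)
    boots.append(np.polyfit(X[s], Y[s], 1)[0])
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"exponent b = {b:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

    The width of the interval is the uncertainty that then feeds into the downstream techno-economic risk analysis.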

  15. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    Science.gov (United States)

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high, one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009), combined with some of these strategies, may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated in a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the issues of model selection uncertainty and model averaging transcend the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.

  16. Uncertainty in CH4 and N2O emission estimates from a managed fen meadow using EC measurements

    International Nuclear Information System (INIS)

    Kroon, P.S.; Hensen, A.; Van 't Veen, W.H.; Vermeulen, A.T.; Jonker, H.

    2009-02-01

    The overall uncertainty in annual flux estimates derived from chamber measurements may be as high as 50% due to the temporal and spatial variability in the fluxes. As even a large number of chamber plots still covers typically less than 1% of the total field area, the field-scale integrated emission necessarily remains a matter of speculation. High-frequency micrometeorological methods are a good option for obtaining integrated estimates on a hectare scale with continuous coverage in time. Instrumentation is now becoming available that meets the requirements for CH4 and N2O eddy covariance (EC) measurements. A system consisting of a quantum cascade laser (QCL) spectrometer and a sonic anemometer has recently been proven suitable for performing EC measurements. This study analyses the EC flux measurements of CH4 and N2O and their corrections, such as calibration, the Webb correction, and corrections for high- and low-frequency losses, and assesses the magnitude of the uncertainties associated with the precision of the measurement instruments, the measurement set-up and the methodology. The uncertainties of a single EC flux measurement and of daily, monthly and 3-monthly average EC fluxes are estimated. In addition, the cumulative emissions of C-CH4 and N-N2O and their uncertainties are determined over several fertilizing events at a dairy farm site in the Netherlands. These fertilizing events are selected from the continuous EC flux measurements made from August 2006 to September 2008. The EC flux uncertainties are compared with the overall uncertainty in annual flux estimates derived from chamber measurements. It is shown that EC flux measurements can decrease the overall uncertainty in annual flux estimates.
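    The benefit of continuous coverage follows from simple error statistics: if single-flux random errors are independent, averaging N values shrinks the random uncertainty by roughly 1/sqrt(N). A toy illustration with arbitrary numbers (not the site's data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model: half-hourly EC fluxes with purely random measurement error.
true_flux = 10.0                 # arbitrary flux units
sigma_single = 5.0               # assumed random error of one EC flux value
n_months = 1000                  # replicate months to estimate the spread
fluxes = rng.normal(true_flux, sigma_single, (n_months, 48 * 30))

daily   = fluxes[:, :48].mean(axis=1)   # one day of half-hourly values
monthly = fluxes.mean(axis=1)           # a full 30-day month

print(f"single: {sigma_single:.2f}, daily: {daily.std():.2f}, "
      f"monthly: {monthly.std():.2f}")
```

    Systematic components (calibration, Webb correction, frequency-loss corrections) do not average out this way, which is why the abstract treats them separately.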

  17. Uncertainty in CH4 and N2O emission estimates from a managed fen meadow using EC measurements

    Energy Technology Data Exchange (ETDEWEB)

    Kroon, P.S.; Hensen, A.; Van 't Veen, W.H.; Vermeulen, A.T. [ECN Biomass, Coal and Environment, Petten (Netherlands); Jonker, H. [Delft University of Technology, Delft (Netherlands)

    2009-02-15

    The overall uncertainty in annual flux estimates derived from chamber measurements may be as high as 50% due to the temporal and spatial variability in the fluxes. As even a large number of chamber plots still covers typically less than 1% of the total field area, the field-scale integrated emission necessarily remains a matter of speculation. High-frequency micrometeorological methods are a good option for obtaining integrated estimates on a hectare scale with continuous coverage in time. Instrumentation is now becoming available that meets the requirements for CH4 and N2O eddy covariance (EC) measurements. A system consisting of a quantum cascade laser (QCL) spectrometer and a sonic anemometer has recently been proven suitable for performing EC measurements. This study analyses the EC flux measurements of CH4 and N2O and their corrections, such as calibration, the Webb correction, and corrections for high- and low-frequency losses, and assesses the magnitude of the uncertainties associated with the precision of the measurement instruments, the measurement set-up and the methodology. The uncertainties of a single EC flux measurement and of daily, monthly and 3-monthly average EC fluxes are estimated. In addition, the cumulative emissions of C-CH4 and N-N2O and their uncertainties are determined over several fertilizing events at a dairy farm site in the Netherlands. These fertilizing events are selected from the continuous EC flux measurements made from August 2006 to September 2008. The EC flux uncertainties are compared with the overall uncertainty in annual flux estimates derived from chamber measurements. It is shown that EC flux measurements can decrease the overall uncertainty in annual flux estimates.

  18. A Carbon Monitoring System Approach to US Coastal Wetland Carbon Fluxes: Progress Towards a Tier II Accounting Method with Uncertainty Quantification

    Science.gov (United States)

    Windham-Myers, L.; Holmquist, J. R.; Bergamaschi, B. A.; Byrd, K. B.; Callaway, J.; Crooks, S.; Drexler, J. Z.; Feagin, R. A.; Ferner, M. C.; Gonneea, M. E.; Kroeger, K. D.; Megonigal, P.; Morris, J. T.; Schile, L. M.; Simard, M.; Sutton-Grier, A.; Takekawa, J.; Troxler, T.; Weller, D.; Woo, I.

    2015-12-01

    Despite their high rates of long-term carbon (C) sequestration when compared to upland ecosystems, coastal C accounting is only recently receiving the attention of policy makers and carbon markets. Assessing accuracy and uncertainty in net C flux estimates requires both direct and derived measurements based on both short- and long-term dynamics in key drivers, particularly soil accretion rates and soil organic content. We are testing the ability of remote sensing products and national-scale datasets to estimate biomass and soil stocks and fluxes over a wide range of spatial and temporal scales. For example, the 2013 Wetlands Supplement to the 2006 IPCC GHG national inventory reporting guidelines requests information on development of Tier I-III reporting, which express increasing levels of detail. We report progress toward development of a Carbon Monitoring System for "blue carbon" that may be useful for IPCC reporting guidelines at Tier II levels. Our project uses a current dataset of publicly available and contributed field-based measurements to validate models of changing soil C stocks, across a broad range of U.S. tidal wetland types and land-use conversions. Additionally, development of biomass algorithms for both radar and spectral datasets will be tested and used to determine the "price of precision" of different satellite products. We discuss progress in calculating Tier II estimates, focusing on the variation introduced by the different input datasets. These include the USFWS National Wetlands Inventory, NOAA Coastal Change Analysis Program, and combinations to calculate tidal wetland area. We also assess the use of different attributes and depths from the USDA-SSURGO database to map soil C density. Finally, we examine the relative benefit of radar, spectral and hybrid approaches to biomass mapping in tidal marshes and mangroves. 
While the US currently plans to report GHG emissions at a Tier I level, we argue that a Tier II analysis is possible due to national

  19. Estimating radar reflectivity - snowfall rate relationships and their uncertainties over Antarctica by combining disdrometer and radar observations

    Science.gov (United States)

    Souverijns, Niels; Gossart, Alexandra; Lhermitte, Stef; Gorodetskaya, Irina; Kneifel, Stefan; Maahn, Maximilian; Bliven, Francis; van Lipzig, Nicole

    2017-04-01

    The Antarctic Ice Sheet (AIS) is the largest ice body on Earth, holding a volume equivalent to 58.3 m of global mean sea level rise. Precipitation is the dominant source term in the surface mass balance of the AIS. However, this quantity is not well constrained in either models or observations. Direct observations over the AIS are also not coherent, as they are sparse in space and time and acquisition techniques differ. As a result, precipitation observations remain mostly limited to continent-wide averages based on satellite radar observations. Snowfall rate (SR) at high temporal resolution can be derived from the ground-based radar effective reflectivity factor (Z) using information about snow particle size and shape. Here we present reflectivity-snowfall rate relations (Z = a·SR^b) for the East Antarctic escarpment region using the measurements at the Princess Elisabeth (PE) station, together with an overview of their uncertainties. A novel technique is developed by combining an optical disdrometer (NASA's Precipitation Imaging Package; PIP) and a vertically pointing 24 GHz FMCW micro rain radar (Metek's MRR) in order to reduce the uncertainty in SR estimates. PIP is used to obtain information about snow particle characteristics and to get an estimate of Z, SR and the Z-SR relation. For PE, located 173 km inland, the relation equals Z = 18·SR^1.1. The prefactor (a) of the relation is sensitive to the median diameter of the particles. Larger particles, found closer to the coast, lead to an increase in the value of the prefactor; more inland locations, where smaller snow particles are found, obtain lower values for the prefactor. The exponent (b) of the Z-SR relation is insensitive to the median diameter of the snow particles. This dependence of the prefactor on particle size needs to be taken into account when converting radar reflectivities to snowfall rates over Antarctica. The uncertainty on the Z-SR relations is quantified using a bootstrapping approach.
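    The power-law fit plus bootstrap can be sketched as follows, using synthetic data generated around the quoted PE relation (not the actual PIP/MRR measurements):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic disdrometer-derived pairs scattered around Z = 18 * SR**1.1.
sr = rng.uniform(0.05, 2.0, 300)                     # snowfall rate, mm/h
z  = 18.0 * sr**1.1 * rng.lognormal(0.0, 0.25, 300)  # reflectivity, mm^6 m^-3

# Z = a * SR**b is linear in log space: log Z = log a + b * log SR.
logsr, logz = np.log(sr), np.log(z)
b, loga = np.polyfit(logsr, logz, 1)
a = np.exp(loga)

# Bootstrap the pairs to quantify uncertainty on prefactor and exponent.
pref, expo = [], []
for _ in range(1000):
    s = rng.integers(0, sr.size, sr.size)
    bb, la = np.polyfit(logsr[s], logz[s], 1)
    pref.append(np.exp(la))
    expo.append(bb)
print(f"a = {a:.1f} (+/- {np.std(pref):.1f}), b = {b:.2f} (+/- {np.std(expo):.2f})")
```

    The bootstrap spread on the prefactor is what carries the particle-size sensitivity noted above into the SR uncertainty budget.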

  20. Optimal estimation retrieval of aerosol microphysical properties from SAGE II satellite observations in the volcanically unperturbed lower stratosphere

    Directory of Open Access Journals (Sweden)

    T. Deshler

    2010-05-01

    distributions naturally differ from the correct bimodal values, the associated surface area (A) and volume densities (V) are, nevertheless, fairly accurately retrieved, except at values larger than 1.0 μm2 cm−3 (A) and 0.05 μm3 cm−3 (V), where they tend to underestimate the true bimodal values. Due to the limited information content in the SAGE II spectral extinction measurements this kind of forward model error cannot be avoided here. Nevertheless, the retrieved uncertainties are a good estimate of the true errors in the retrieved integrated properties, except where the surface area density exceeds the 1.0 μm2 cm−3 threshold. When applied to near-global SAGE II satellite extinction measured in 1999 the retrieved OE surface area and volume densities are observed to be larger by, respectively, 20–50% and 10–40% compared to those estimates obtained by the SAGE II operational retrieval algorithm. An examination of the OE algorithm biases with in situ data indicates that the new OE aerosol property estimates tend to be more realistic than previous estimates obtained from remotely sensed data through other retrieval techniques. Based on the results of this study we therefore suggest that the new Optimal Estimation retrieval algorithm is able to contribute to an advancement in aerosol research by considerably improving current estimates of aerosol properties in the lower stratosphere under low aerosol loading conditions.

  1. Optimal estimation retrieval of aerosol microphysical properties from SAGE II satellite observations in the volcanically unperturbed lower stratosphere

    Science.gov (United States)

    Wurl, D.; Grainger, R. G.; McDonald, A. J.; Deshler, T.

    2010-05-01

    differ from the correct bimodal values, the associated surface area (A) and volume densities (V) are, nevertheless, fairly accurately retrieved, except at values larger than 1.0 μm2 cm-3 (A) and 0.05 μm3 cm-3 (V), where they tend to underestimate the true bimodal values. Due to the limited information content in the SAGE II spectral extinction measurements this kind of forward model error cannot be avoided here. Nevertheless, the retrieved uncertainties are a good estimate of the true errors in the retrieved integrated properties, except where the surface area density exceeds the 1.0 μm2 cm-3 threshold. When applied to near-global SAGE II satellite extinction measured in 1999 the retrieved OE surface area and volume densities are observed to be larger by, respectively, 20-50% and 10-40% compared to those estimates obtained by the SAGE II operational retrieval algorithm. An examination of the OE algorithm biases with in situ data indicates that the new OE aerosol property estimates tend to be more realistic than previous estimates obtained from remotely sensed data through other retrieval techniques. Based on the results of this study we therefore suggest that the new Optimal Estimation retrieval algorithm is able to contribute to an advancement in aerosol research by considerably improving current estimates of aerosol properties in the lower stratosphere under low aerosol loading conditions.

  2. Information Theory for Correlation Analysis and Estimation of Uncertainty Reduction in Maps and Models

    Directory of Open Access Journals (Sweden)

    J. Florian Wellmann

    2013-04-01

    The quantification and analysis of uncertainties is important in all cases where maps and models of uncertain properties are the basis for further decisions. Once these uncertainties are identified, the logical next step is to determine how they can be reduced. Information theory provides a framework for the analysis of spatial uncertainties when different subregions are considered as random variables. In the work presented here, joint entropy, conditional entropy, and mutual information are applied for a detailed analysis of spatial uncertainty correlations. The aim is to determine (i) which areas in a spatial analysis share information, and (ii) where, and by how much, additional information would reduce uncertainties. As an illustration, a typical geological example is evaluated: the case of a subsurface layer with uncertain depth, shape and thickness. Mutual information and multivariate conditional entropies are determined based on multiple simulated model realisations. Even for this simple case, the measures not only provide a clear picture of uncertainties and their correlations but also give detailed insights into the potential reduction of uncertainties at each position, given additional information at a different location. The methods are directly applicable to other types of spatial uncertainty evaluations, especially where multiple realisations of a model simulation are analysed. In summary, the application of information theoretic measures opens up the path to a better understanding of spatial uncertainties, and their relationship to information and prior knowledge, for cases where uncertain property distributions are spatially analysed and visualised in maps and models.
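    The three measures used here reduce to a few lines of NumPy. A minimal sketch with an illustrative 2x2 joint distribution of two discretized subregions (the paper's distributions come from model realisations, not this toy table):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Illustrative joint distribution of two binary spatial variables X, Y.
pxy = np.array([[0.30, 0.05],
                [0.05, 0.60]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals

H_x, H_y, H_xy = entropy(px), entropy(py), entropy(pxy.ravel())
I = H_x + H_y - H_xy            # mutual information: shared information
H_x_given_y = H_xy - H_y        # conditional entropy: uncertainty in X left
                                # after observing Y
print(f"I(X;Y) = {I:.3f} bits, H(X|Y) = {H_x_given_y:.3f} bits")
```

    The drop from H(X) to H(X|Y) is exactly the "potential reduction of uncertainty at one position given additional information at a different location" that the abstract describes.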

  3. The Effect of Uncertainty in Exposure Estimation on the Exposure-Response Relation between 1,3-Butadiene and Leukemia

    Directory of Open Access Journals (Sweden)

    George Maldonado

    2009-09-01

    In a follow-up study of mortality among North American synthetic rubber industry workers, cumulative exposure to 1,3-butadiene was positively associated with leukemia. Problems with historical exposure estimation, however, may have distorted the association. To evaluate the impact of potential inaccuracies in exposure estimation, we conducted uncertainty analyses of the relation between cumulative exposure to butadiene and leukemia. We created 1,000 sets of butadiene estimates using job-exposure matrices consisting of exposure values that corresponded to randomly selected percentiles of the approximate probability distribution of plant-, work area/job group-, and year-specific butadiene ppm. We then analyzed the relation between cumulative exposure to butadiene and leukemia for each of the 1,000 sets of butadiene estimates. In the uncertainty analysis, the point estimate of the RR for the first non-zero exposure category (>0–<37.5 ppm-years) was most likely to be about 1.5. The rate ratio for the second exposure category (37.5–<184.7 ppm-years) was most likely to range from 1.5 to 1.8. The RR for the third exposure category (184.7–<425.0 ppm-years) was most likely between 2.1 and 3.0. The RR for the highest exposure category (425.0+ ppm-years) was likely to be between 2.9 and 3.7. This range of RR point estimates can best be interpreted as a probability distribution that describes our uncertainty in RR point estimates due to uncertainty in exposure estimation. After considering the complete probability distributions of butadiene exposure estimates, the exposure-response association of butadiene and leukemia was maintained. This exercise is a unique example of how uncertainty analyses can be used to investigate and support an observed measure of effect when occupational exposure estimates are employed in the absence of direct exposure measurements.
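    The resampling step can be sketched as follows. The job names, geometric means and spreads below are invented for illustration; the structure mirrors the approach of drawing each job-exposure matrix cell from its assumed distribution and recomputing cumulative exposure:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical job-exposure cells: geometric-mean ppm and log-scale spread.
jobs  = {"maintenance": (2.0, 0.6), "reaction area": (5.0, 0.8)}
years = {"maintenance": 10, "reaction area": 5}   # assumed years in each job

# 1,000 alternative exposure matrices: draw each cell from its (assumed
# lognormal) distribution, then recompute one worker's cumulative ppm-years.
cum = np.array([sum(rng.lognormal(np.log(gm), sd) * years[j]
                    for j, (gm, sd) in jobs.items())
                for _ in range(1000)])
lo, med, hi = np.percentile(cum, [2.5, 50, 97.5])
print(f"cumulative ppm-years: median {med:.0f} (95% interval {lo:.0f}-{hi:.0f})")
```

    Refitting the rate-ratio model once per draw turns this spread in exposure into the probability distribution over RR point estimates described above.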

  4. Uncertainty in action-value estimation affects both action choice and learning rate of the choice behaviors of rats.

    Science.gov (United States)

    Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu

    2012-04-01

    The estimation of reward outcomes for action candidates is essential for decision making. In this study, we examined whether and how the uncertainty in reward outcome estimation affects the action choice and learning rate. We designed a choice task in which rats selected either the left-poking or right-poking hole and stochastically received a reward of a food pellet. The reward probabilities of the left and right holes were chosen from six settings (high, 100% vs. 66%; mid, 66% vs. 33%; low, 33% vs. 0% for the left vs. right holes, and the opposites) and changed every 20-549 trials. We used Bayesian Q-learning models to estimate the time course of the probability distribution of action values and tested whether they explain the behaviors of rats better than standard Q-learning models that estimate only the mean of action values. Model comparison by cross-validation revealed that a Bayesian Q-learning model with an asymmetric update for reward and non-reward outcomes fit the choice time course of the rats best. In the action-choice equation of the Bayesian Q-learning model, the estimated coefficient for the variance of action value was positive, meaning that rats were uncertainty seeking. Further analysis of the Bayesian Q-learning model suggested that the uncertainty facilitated the effective learning rate. These results suggest that the rats consider uncertainty in action-value estimation and that they have an uncertainty-seeking action policy and uncertainty-dependent modulation of the effective learning rate. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
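    The two mechanisms the abstract names, an uncertainty-seeking policy and an uncertainty-dependent learning rate, can be illustrated with a deliberately simplified Bayesian Q-learner (a Gaussian belief per action with a Kalman-style update; this is a sketch of the idea, not the paper's exact model or parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

mu  = np.zeros(2)            # action-value means (left, right)
var = np.ones(2)             # action-value variances (belief uncertainty)
p_reward = [0.66, 0.33]      # one of the task's probability settings
phi, beta = 0.5, 3.0         # assumed uncertainty bonus and inverse temperature
obs_noise = 0.25             # assumed outcome-observation noise

choices = []
for t in range(2000):
    # Uncertainty-seeking policy: higher variance raises an action's utility.
    util = mu + phi * np.sqrt(var)
    w = np.exp(beta * util)
    a = rng.choice(2, p=w / w.sum())
    r = float(rng.random() < p_reward[a])
    # Kalman-style update: larger variance -> larger effective learning rate.
    k = var[a] / (var[a] + obs_noise)
    mu[a] += k * (r - mu[a])
    var[a] = (1 - k) * var[a] + 0.001   # small diffusion keeps learning alive
    choices.append(a)

print("P(choose left) late:", np.mean(np.array(choices[-500:]) == 0))
```

    The Kalman gain k plays the role of the effective learning rate: it is large while the belief is uncertain and shrinks as evidence accumulates, which is the uncertainty-dependent modulation the study reports.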

  5. Regional inversion of CO2 ecosystem fluxes from atmospheric measurements. Reliability of the uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l'Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell'Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)

    2013-07-01

    The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where ~50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38%. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than

  6. Estimating Uncertainties of Ship Course and Speed in Early Navigations using ICOADS3.0

    Science.gov (United States)

    Chan, D.; Huybers, P. J.

    2017-12-01

    Information on ship position and its uncertainty is potentially important for mapping out climatologies and changes in SSTs. Using the 2-hourly ship reports from the International Comprehensive Ocean Atmosphere Dataset 3.0 (ICOADS 3.0), we estimate the uncertainties of ship course, ship speed, and latitude/longitude corrections during 1870-1900. After reviewing the techniques used in early navigation, we build a forward navigation model that uses the dead-reckoning technique, celestial latitude corrections, and chronometer longitude corrections. The modeled ship tracks exhibit jumps in longitude and latitude when a position correction is applied. These jumps are also seen in the ICOADS 3.0 observations. In this model, the position error at the end of each day grows following a 2D random walk; the latitude/longitude errors are reset when a latitude/longitude correction is applied. We fit the variance of the magnitude of the latitude/longitude corrections in the observations against model outputs, and estimate that the standard deviation of uncertainty is 5.5 degrees for ship course, 32% for ship speed, 22 km for latitude corrections, and 27 km for longitude corrections. These estimates are informative priors for Bayesian methods that quantify the position errors of individual tracks.
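    The forward model's error structure, random-walk growth punctuated by resets at position fixes, can be sketched as follows. The daily step size, fix schedule and residual fix error below are assumptions for illustration, not the fitted values:

```python
import numpy as np

rng = np.random.default_rng(11)

# Dead-reckoning error growth: each day the position error takes one 2-D
# random-walk step; a celestial latitude fix resets the latitudinal error.
n_ships, n_days = 5000, 10
step_km = 20.0                      # assumed daily dead-reckoning error scale
err = np.zeros((n_ships, 2))        # (lat, lon) error in km

for day in range(1, n_days + 1):
    err += rng.normal(0.0, step_km, (n_ships, 2))
    if day == 5:                    # latitude correction taken on day 5
        err[:, 0] = rng.normal(0.0, 22.0, n_ships)   # residual fix error

rms = np.sqrt((err**2).mean(axis=0))
print(f"day-{n_days} RMS error: lat {rms[0]:.0f} km, lon {rms[1]:.0f} km")
```

    The longitude error, never reset in this sketch, keeps growing as sqrt(days), while the latitude error restarts from the fix uncertainty; fitting the observed jump-size variance against such simulations is what constrains the correction uncertainties.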

  7. Deterministic sensitivity and uncertainty methodology for best estimate system codes applied in nuclear technology

    International Nuclear Information System (INIS)

    Petruzzi, A.; D'Auria, F.; Cacuci, D.G.

    2009-01-01

    Nuclear Power Plant (NPP) technology has been developed based on the traditional defense in depth philosophy supported by deterministic and overly conservative methods for safety analysis. In the 1970s [1], conservative hypotheses were introduced for safety analyses to address existing uncertainties. Since then, intensive thermal-hydraulic experimental research has resulted in a considerable increase in knowledge and consequently in the development of best-estimate codes able to provide more realistic information about the physical behaviour and to identify the most relevant safety issues, allowing the evaluation of the existing actual margins between the results of the calculations and the acceptance criteria. However, the best-estimate calculation results from complex thermal-hydraulic system codes (like Relap5, Cathare, Athlet, Trace, etc.) are affected by unavoidable approximations that are unpredictable without the use of computational tools that account for the various sources of uncertainty. Therefore the use of best-estimate (BE) codes within the reactor technology, either for design or safety purposes, implies understanding and accepting the limitations and the deficiencies of those codes. Taking into consideration the above framework, a comprehensive approach for utilizing quantified uncertainties arising from Integral Test Facilities (ITFs, [2]) and Separate Effect Test Facilities (SETFs, [3]) in the process of calibrating complex computer models for the application to NPP transient scenarios has been developed. The methodology proposed is capable of accommodating multiple SETFs and ITFs to learn as much as possible about uncertain parameters, allowing for the improvement of the computer model predictions based on the available experimental evidence. The proposed methodology constitutes a major step forward with respect to the generally used expert judgment and statistical methods as it permits a) to establish the uncertainties of any parameter

  8. Estimation of Nonlinear Functions of State Vector for Linear Systems with Time-Delays and Uncertainties

    Directory of Open Access Journals (Sweden)

    Il Young Song

    2015-01-01

    Full Text Available This paper focuses on estimation of a nonlinear function of the state vector (NFS) in discrete-time linear systems with time-delays and model uncertainties. The NFS represents a multivariate nonlinear function of state variables, which can indicate useful information of a target system for control. The optimal nonlinear estimator of an NFS (in the mean square sense) represents a function of the receding horizon estimate and its error covariance. The proposed receding horizon filter represents the standard Kalman filter with time-delays and special initial horizon conditions described by the Lyapunov-like equations. In the general case, to calculate an optimal estimator of an NFS, we propose using the unscented transformation. An important class of polynomial NFS is considered in detail. In the case of a polynomial NFS, an optimal estimator has a closed-form computational procedure. The subsequent application of the proposed receding horizon filter and nonlinear estimator to a linear stochastic system with time-delays and uncertainties demonstrates their effectiveness.
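
A minimal sketch of the unscented transformation mentioned above, used to estimate the mean of a nonlinear function of a Gaussian state. This is the standard symmetric 2n+1 sigma-point set, not the paper's specific estimator; `kappa` is a tuning parameter.

```python
import numpy as np

def unscented_estimate(mean, cov, f, kappa=0.0):
    """Estimate E[f(x)] for x ~ N(mean, cov) via the unscented transform,
    using the standard symmetric set of 2n+1 sigma points."""
    mean = np.asarray(mean, dtype=float)
    n = mean.size
    # Columns of L are the sigma-point offsets: L @ L.T == (n + kappa) * cov.
    L = np.linalg.cholesky((n + kappa) * np.asarray(cov, dtype=float))
    pts = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
    weights = [kappa / (n + kappa)] + [1.0 / (2.0 * (n + kappa))] * (2 * n)
    return sum(w * np.asarray(f(p), dtype=float) for w, p in zip(weights, pts))
```

For a polynomial NFS of degree two, this recovers the exact mean, since the sigma points match the first two moments of the state distribution.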

  9. On the influence of uncertainties in estimating risk aversion and working interest

    International Nuclear Information System (INIS)

    MacKay, J.A.; Lerche, I.

    1996-01-01

    The influence of uncertainties in costs, value, success probability, risk tolerance and mandated working interest is evaluated for its impact on assessing probable ranges of uncertainty on risk-adjusted value, RAV, using different models. The relative importance of different factors in contributing to the uncertainty in RAV is analyzed, as is the influence of different probability distributions for the intrinsic variables entering the RAV model formulae. Numerical illustrations indicate how the RAV probabilities depend not only on the model functions (Cozzolino, hyperbolic tangent) used to provide RAV estimates, but also on the intrinsic shapes of the probability distributions from which input parameter values are drawn for Monte Carlo simulations. In addition, a mandated range of working interest can be addressed as an extra variable contributing to the probabilistic range of RAV; while negative RAV values for high-cost projects can be used to assess the probable buy-out amount one should be prepared to pay depending on corporate risk philosophy. Also, the procedures illustrate how the relative contributions of scientific factors influence the uncertainty of reserve assessments, allowing one to determine where to concentrate effort to improve the ranges of uncertainty. (Author)
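
A Monte Carlo RAV calculation of the kind described can be sketched as follows, assuming the standard exponential-utility form of Cozzolino's formula for a two-outcome venture. The input distributions and parameter values are purely illustrative, not taken from the paper.

```python
import math
import random

def cozzolino_rav(value, cost, p_success, risk_tolerance):
    """Cozzolino's exponential risk-aversion RAV: the certainty equivalent
    of a venture paying (value - cost) on success and -cost on failure."""
    g = 1.0 / risk_tolerance  # risk-aversion coefficient
    return -risk_tolerance * math.log(
        p_success * math.exp(-g * (value - cost))
        + (1.0 - p_success) * math.exp(g * cost))

def rav_distribution(n, seed=0):
    """Monte Carlo over illustrative (assumed) input distributions,
    producing a probabilistic range of RAV."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        value = rng.uniform(80.0, 120.0)   # assumed ranges, e.g. $M
        cost = rng.uniform(15.0, 25.0)
        p = rng.uniform(0.2, 0.4)
        out.append(cozzolino_rav(value, cost, p, risk_tolerance=50.0))
    return out
```

Risk aversion pushes the RAV below the expected value; for high-cost, low-probability cases the RAV goes negative, the buy-out situation discussed above.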

  10. SU-G-BRA-09: Estimation of Motion Tracking Uncertainty for Real-Time Adaptive Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yan, H [Capital Medical University, Beijing, Beijing (China); Chen, Z [Yale New Haven Hospital, New Haven, CT (United States); Nath, R; Liu, W [Yale University School of Medicine, New Haven, CT (United States)

    2016-06-15

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of being out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding a 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the
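
One of the three estimators named in the record, logistic-regression classification of threshold-exceeding tracking errors, can be sketched with plain gradient descent on synthetic features. The features, labels, and threshold rule here are invented for illustration.

```python
import math
import random

def train_logistic(X, y, lr=1.0, epochs=1500):
    """Plain batch gradient-descent logistic regression (weights + bias),
    for classifying whether a tracking error exceeds a threshold."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            for j in range(n_feat):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / len(X) for wj, gwj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict(w, b, xi):
    """True if the model assigns > 0.5 probability of threshold exceedance."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5
```

In the record's setting, the feature vector would be quantities such as previous tracking error and tumor motion speed, and the label would mark errors above 2.5 mm.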

  11. SU-G-BRA-09: Estimation of Motion Tracking Uncertainty for Real-Time Adaptive Imaging

    International Nuclear Information System (INIS)

    Yan, H; Chen, Z; Nath, R; Liu, W

    2016-01-01

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of being out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding a 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the

  12. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    Science.gov (United States)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades, three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors, and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error, and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding is a development based on the breeding of growing modes technique
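
The breeding-of-growing-modes cycle underlying the self-breeding approach can be sketched on a toy chaotic system: integrate a control and a perturbed run, then periodically rescale the perturbation to a fixed amplitude and record the growth factor. A forward-Euler Lorenz-63 system stands in for the mesoscale model here; amplitudes and cycle lengths are arbitrary.

```python
import math

def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (toy 'model')."""
    x, y, z = state
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def breed(n_cycles, steps_per_cycle=50, amp=1e-3):
    """Breeding cycle: run control and perturbed trajectories, rescale the
    perturbation to amplitude `amp` each cycle, return growth factors."""
    ctrl = (1.0, 1.0, 20.0)
    pert = (1.0 + amp, 1.0, 20.0)
    growth = []
    for _ in range(n_cycles):
        for _ in range(steps_per_cycle):
            ctrl = lorenz_step(ctrl)
            pert = lorenz_step(pert)
        d = [p - c for p, c in zip(pert, ctrl)]
        norm = math.sqrt(sum(di * di for di in d))
        growth.append(norm / amp)  # amplification over this cycle
        # Rescale the bred perturbation back to the prescribed amplitude.
        pert = tuple(c + amp * di / norm for c, di in zip(ctrl, d))
    return growth
```

After a spin-up of a few cycles, the rescaled perturbation aligns with the fast-growing error directions, which is the structure the self-breeding method seeks for the limited-area domain.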

  13. Estimation of the thermal diffusion coefficient in fusion plasmas taking frequency measurement uncertainties into account

    International Nuclear Information System (INIS)

    Van Berkel, M; Hogeweij, G M D; Van den Brand, H; De Baar, M R; Zwart, H J; Vandersteen, G

    2014-01-01

    In this paper, the estimation of the thermal diffusivity from perturbative experiments in fusion plasmas is discussed. The measurements used to estimate the thermal diffusivity suffer from stochastic noise. Accurate estimation of the thermal diffusivity should take this into account. It will be shown that formulas found in the literature often result in a thermal diffusivity that has a bias (a difference between the estimated value and the actual value that remains even if more measurements are added) or have an unnecessarily large uncertainty. This will be shown by modeling a plasma using only diffusion as heat transport mechanism and measurement noise based on ASDEX Upgrade measurements. The Fourier coefficients of a temperature perturbation will exhibit noise from the circular complex normal distribution (CCND). Based on Fourier coefficients distributed according to a CCND, it is shown that the resulting probability density function of the thermal diffusivity is an inverse non-central chi-squared distribution. The thermal diffusivity that is found by sampling this distribution will always be biased, and averaging of multiple estimated diffusivities will not necessarily improve the estimation. Confidence bounds are constructed to illustrate the uncertainty in the diffusivity using several formulas that are equivalent in the noiseless case. Finally, a different method of averaging, that reduces the uncertainty significantly, is suggested. The methodology is also extended to the case where damping is included, and it is explained how to include the cylindrical geometry. (paper)

  14. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    Science.gov (United States)

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui

    2016-01-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.

  15. Lidar-derived estimate and uncertainty of carbon sink in successional phases of woody encroachment

    Science.gov (United States)

    Sankey, Temuulen; Shrestha, Rupesh; Sankey, Joel B.; Hardgree, Stuart; Strand, Eva

    2013-01-01

    Woody encroachment is a globally occurring phenomenon that contributes to the global carbon sink. The magnitude of this contribution needs to be estimated at regional and local scales to address uncertainties present in the global- and continental-scale estimates, and guide regional policy and management in balancing restoration activities, including removal of woody plants, with greenhouse gas mitigation goals. The objective of this study was to estimate carbon stored in various successional phases of woody encroachment. Using lidar measurements of individual trees, we present high-resolution estimates of aboveground carbon storage in juniper woodlands. Segmentation analysis of lidar point cloud data identified a total of 60,628 juniper tree crowns across four watersheds. Tree heights, canopy cover, and density derived from lidar were strongly correlated with field measurements of 2613 juniper stems measured in 85 plots (30 × 30 m). Aboveground total biomass of individual trees was estimated using a regression model with lidar-derived height and crown area as predictors (Adj. R2 = 0.76, p 2. Uncertainty in carbon storage estimates was examined with a Monte Carlo approach that addressed major error sources. Ranges predicted with uncertainty analysis in the mean, individual tree, aboveground woody C, and associated standard deviation were 0.35 – 143.6 kg and 0.5 – 1.25 kg, respectively. Later successional phases of woody encroachment had, on average, twice the aboveground carbon relative to earlier phases. Woody encroachment might be more successfully managed and balanced with carbon storage goals by identifying priority areas in earlier phases of encroachment where intensive treatments are most effective.
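
The two-step procedure in this record, regression of biomass on lidar-derived predictors, then Monte Carlo propagation of error, can be sketched with synthetic data. The coefficients, error magnitudes, and the 0.5 biomass-to-carbon fraction below are assumptions for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 'field plot' data (assumed): biomass grows with tree height
# and crown area, plus residual scatter.
height = rng.uniform(2.0, 8.0, 200)    # m
crown = rng.uniform(1.0, 12.0, 200)    # m^2
biomass = 4.0 * height + 2.5 * crown + rng.normal(0.0, 3.0, 200)  # kg

# Fit a linear model: biomass ~ intercept + height + crown area.
A = np.column_stack([np.ones_like(height), height, crown])
coef, *_ = np.linalg.lstsq(A, biomass, rcond=None)

def carbon_stock(heights, crowns, coef, carbon_fraction=0.5):
    """Predicted total aboveground carbon; the 0.5 biomass-to-carbon
    conversion is a common assumption, not the paper's calibration."""
    b = coef[0] + coef[1] * heights + coef[2] * crowns
    return carbon_fraction * b.sum()

# Monte Carlo: perturb lidar heights/crowns by assumed measurement error
# and collect the spread of the total-carbon estimate.
est = carbon_stock(height, crown, coef)
draws = [carbon_stock(height + rng.normal(0, 0.3, 200),
                      crown + rng.normal(0, 0.5, 200), coef)
         for _ in range(500)]
sd = float(np.std(draws))
```

The standard deviation of the draws plays the role of the Monte Carlo uncertainty range reported in the record.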

  16. Estimating U.S. Methane Emissions from the Natural Gas Supply Chain. Approaches, Uncertainties, Current Estimates, and Future Studies

    Energy Technology Data Exchange (ETDEWEB)

    Heath, Garvin [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Warner, Ethan [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Steinberg, Daniel [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Brandt, Adam [Stanford Univ., CA (United States)

    2015-08-01

    A growing number of studies have raised questions regarding uncertainties in our understanding of methane (CH4) emissions from fugitives and venting along the natural gas (NG) supply chain. In particular, a number of measurement studies have suggested that actual levels of CH4 emissions may be higher than estimated by EPA's U.S. GHG Emission Inventory. We reviewed the literature to identify these studies.

  17. Estimation of high-pT Jet Energy Scale Uncertainty from single hadron response with the ATLAS detector

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00534683; The ATLAS collaboration

    2016-01-01

    The jet energy scale (JES) uncertainty is estimated using different methods in different pT ranges. In situ techniques exploiting the pT balance between a jet and a reference object (e.g. Z or gamma) are used at lower pT, but at very high pT (> 2.5 TeV) there are insufficient statistics for in situ techniques. The JES uncertainty at high pT is important in several searches for new phenomena, e.g. the dijet resonance and angular searches. In the highest pT range, the JES uncertainty is estimated using the calorimeter response to single hadrons. In this method, jets are treated as a superposition of energy depositions of single particles. An uncertainty is applied to each energy deposition belonging to the particles within the jet, and propagated to the final jet energy scale. This poster presents the JES uncertainty found with this method at sqrt(s) = 8 TeV and its developments.
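
The propagation step, treating the jet as a superposition of single-particle energy deposits, each carrying a response uncertainty, can be sketched as a coherent one-sided shift. This is a simplification for illustration; the actual ATLAS procedure is more elaborate.

```python
def jet_energy_scale_shift(particle_energies, frac_uncertainties):
    """Shift each particle's deposited energy by its fractional
    calorimeter-response uncertainty and re-sum, returning the relative
    jet energy scale shift for a coherent 1-sigma variation."""
    nominal = sum(particle_energies)
    shifted = sum(e * (1.0 + u)
                  for e, u in zip(particle_energies, frac_uncertainties))
    return (shifted - nominal) / nominal
```

Because the shift is energy-weighted, soft particles with large single-hadron response uncertainties contribute less to the jet-level uncertainty than the hard core of the jet.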

  18. Sensitivity of process design to uncertainties in property estimates applied to extractive distillation

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Hukkerikar, Amol; Sin, Gürkan

    thermodynamic and thermo-physical models is critical to obtain a feasible and operable process design and many guidelines pertaining to this can be found in the literature. But even if appropriate models have been chosen, the user needs to keep in mind that these models contain uncertainties which may propagate...... through the calculation steps to such an extent that the final design might not be feasible or lead to poor performance. Therefore it is necessary to evaluate the sensitivity of process design to the uncertainties in property estimates obtained from thermo-physical property models. Uncertainty...... of the methodology is illustrated using a case study of extractive distillation in which acetone is separated from methanol using water as a solvent. Among others, the vapour pressure of acetone and water was found to be the most critical and even small uncertainties from -0.25 % to +0.75 % in vapour pressure data...

  19. Methodology for uncertainty estimation of Hanford tank chemical and radionuclide inventories and concentrations

    International Nuclear Information System (INIS)

    Chen, G.; Ferryman, T.A.; Remund, K.M.

    1998-02-01

    The exact physical and chemical nature of 55 million gallons of toxic waste held in 177 underground waste tanks at the Hanford Site is not known with sufficient detail to support the safety, retrieval, and immobilization missions presented to Hanford. The Hanford Best Basis team has made point estimates of the inventories in each tank. The purpose of this study is to estimate probability distributions for each of the 71 analytes and 177 tanks that the Hanford Best Basis team has made point estimates for. This will enable uncertainty intervals to be calculated for the Best Basis inventories and should facilitate the safety, retrieval, and immobilization missions. Section 2 of this document describes the overall approach used to estimate tank inventory uncertainties. Three major components are considered in this approach: chemical concentration, density, and waste volume. Section 2 also describes the two different methods used to evaluate the tank wastes in terms of sludges and in terms of supernatant or saltcakes. Sections 3 and 4 describe in detail the methodology to assess the probability distributions for each of the three components, as well as the data sources for implementation. The conclusions are given in Section 5
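
The three-component structure described above (chemical concentration, density, waste volume) suggests a simple Monte Carlo sketch of an inventory distribution. The normal distributions and parameter values below are illustrative, not Hanford data.

```python
import random

def inventory_draws(n, conc, conc_sd, dens, dens_sd, vol, vol_sd, seed=0):
    """Inventory = concentration x density x volume, with each component
    sampled independently (normal, truncated at zero, for illustration)."""
    rng = random.Random(seed)
    return [max(0.0, rng.gauss(conc, conc_sd))
            * max(0.0, rng.gauss(dens, dens_sd))
            * max(0.0, rng.gauss(vol, vol_sd))
            for _ in range(n)]
```

Percentiles of the resulting draws give the uncertainty intervals for the point-estimate inventories that the record refers to.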

  20. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations.

    Science.gov (United States)

    Simon, Aaron B; Dubowitz, David J; Blockley, Nicholas P; Buxton, Richard B

    2016-04-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2' as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2', we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2'-based estimate of the metabolic response to CO2 of 1.4%, and R2'- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2'-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations

    Science.gov (United States)

    Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.

    2016-01-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354

  2. Implicit Treatment of Technical Specification and Thermal Hydraulic Parameter Uncertainties in Gaussian Process Model to Estimate Safety Margin

    Directory of Open Access Journals (Sweden)

    Douglas A. Fynan

    2016-06-01

    Full Text Available The Gaussian process model (GPM) is a flexible surrogate model that can be used for nonparametric regression for multivariate problems. A unique feature of the GPM is that a prediction variance is automatically provided with the regression function. In this paper, we estimate the safety margin of a nuclear power plant by performing regression on the output of best-estimate simulations of a large-break loss-of-coolant accident with sampling of safety system configuration, sequence timing, technical specifications, and thermal hydraulic parameter uncertainties. The key aspect of our approach is that the GPM regression is only performed on the dominant input variables, the safety injection flow rate and the delay time for AC-powered pumps to start, representing sequence timing uncertainty, providing a predictive model for the peak clad temperature during a reflood phase. Other uncertainties are interpreted as contributors to the measurement noise of the code output and are implicitly treated in the GPM in the noise variance term, providing local uncertainty bounds for the peak clad temperature. We discuss the applicability of the foregoing method to reduce the use of conservative assumptions in best estimate plus uncertainty (BEPU) and Level 1 probabilistic safety assessment (PSA) success criteria definitions while dealing with a large number of uncertainties.
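
A minimal GP regression with an explicit noise-variance term, the mechanism by which the non-dominant uncertainties are absorbed in the approach above, might look like the following. The RBF kernel and fixed hyperparameters are illustrative; in practice they would be fitted.

```python
import numpy as np

def gp_predict(X, y, Xstar, length=1.0, signal=1.0, noise=0.1):
    """Minimal 1D GP regression with an RBF kernel. The `noise` variance
    term absorbs unmodeled input uncertainty; returns the predictive
    mean and variance at the points Xstar."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return signal ** 2 * np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + noise ** 2 * np.eye(len(X))   # noisy-training covariance
    Ks = k(Xstar, X)
    Kss = k(Xstar, Xstar)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov) + noise ** 2      # include noise in prediction
```

Far from the training inputs the predictive variance reverts to the prior plus noise level, which is what supplies the local uncertainty bounds on the regression surface.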

  3. Best estimate analysis of LOFT L2-5 with CATHARE: uncertainty and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    JOUCLA, Jerome; PROBST, Pierre [Institute for Radiological Protection and Nuclear Safety, Fontenay-aux-Roses (France); FOUET, Fabrice [APTUS, Versailles (France)

    2008-07-01

    The revision of 10 CFR 50.46 in 1988 made possible the use of best-estimate codes. They may be used in safety demonstration and licensing, provided that uncertainties are added to the relevant output parameters before comparing them with the acceptance criteria. In the safety analysis of the large break loss of coolant accident, it was agreed that the 95th percentile estimated with a high degree of confidence should be lower than the acceptance criteria. It appeared necessary to IRSN, the technical support of the French Safety Authority, to get more insight into these strategies, which are being developed not only in thermal-hydraulics but in other fields such as neutronics. To estimate the 95th percentile with a high confidence level, we propose to use rank statistics or bootstrap. Toward the objective of assessing uncertainty, it is useful to determine and to classify the main input parameters. We suggest approximating the code by a surrogate model, the Kriging model, which will be used to make a sensitivity analysis with the SOBOL methodology. This paper presents the application of two new methodologies of how to make the uncertainty and sensitivity analysis on the maximum peak cladding temperature of the LOFT L2-5 test with the CATHARE code. (authors)
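
The rank-statistics route to a 95th percentile with 95% confidence reduces, at first order and one-sided, to Wilks' formula for the required number of code runs; a bootstrap of the percentile is the alternative the record mentions. The sample data below are invented.

```python
import math
import random

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest N such that the largest of N independent runs bounds the
    `coverage` quantile with probability `confidence`:
    1 - coverage**N >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

def bootstrap_p95(sample, n_boot=2000, seed=0):
    """Bootstrap distribution of the sample 95th percentile."""
    rng = random.Random(seed)
    n = len(sample)
    stats = []
    for _ in range(n_boot):
        resample = sorted(rng.choice(sample) for _ in range(n))
        stats.append(resample[int(0.95 * (n - 1))])
    return stats
```

The classic result `wilks_sample_size() == 59` is why 59 code runs are commonly quoted for a 95/95 one-sided tolerance bound on peak cladding temperature.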

  4. Uncertainties of estimating average radon and radon decay product concentrations in occupied houses

    International Nuclear Information System (INIS)

    Ronca-Battista, M.; Magno, P.; Windham, S.

    1986-01-01

    Radon and radon decay product measurements made in up to 68 Butte, Montana homes over a period of 18 months were used to estimate the uncertainty in estimating long-term average radon and radon decay product concentrations from a short-term measurement. This analysis was performed in support of the development of radon and radon decay product measurement protocols by the Environmental Protection Agency (EPA). The results of six measurement methods were analyzed: continuous radon and working level monitors, radon progeny integrating sampling units, alpha-track detectors, and grab radon and radon decay product techniques. Uncertainties were found to decrease with increasing sampling time and to be smaller when measurements were conducted during the winter months. In general, radon measurements had a smaller uncertainty than radon decay product measurements. As a result of this analysis, the EPA measurements protocols specify that all measurements be made under closed-house (winter) conditions, and that sampling times of at least a 24 hour period be used when the measurement will be the basis for a decision about remedial action or long-term health risks. 13 references, 3 tables

  5. Considering sampling strategy and cross-section complexity for estimating the uncertainty of discharge measurements using the velocity-area method

    Science.gov (United States)

    Despax, Aurélien; Perret, Christian; Garçon, Rémy; Hauet, Alexandre; Belleville, Arnaud; Le Coz, Jérôme; Favre, Anne-Catherine

    2016-02-01

    Streamflow time series provide baseline data for many hydrological investigations. Errors in the data mainly occur through uncertainty in gauging (measurement uncertainty) and uncertainty in the determination of the stage-discharge relationship based on gaugings (rating curve uncertainty). As the velocity-area method is the measurement technique typically used for gaugings, it is fundamental to estimate its level of uncertainty. Different methods are available in the literature (ISO 748, Q+, IVE), all with their own limitations and drawbacks. Among the terms forming the combined relative uncertainty in measured discharge, the component relating to the limited number of verticals often contributes a large part of the relative uncertainty. It should therefore be estimated carefully. In the ISO 748 standard, the proposed values of this uncertainty component depend only on the number of verticals, without considering their distribution with respect to the depth and velocity cross-sectional profiles. The Q+ method is sensitive to a user-defined parameter, while it is questionable whether the IVE method is applicable to stream-gaugings performed with a limited number of verticals. To address the limitations of existing methods, this paper presents a new methodology, called FLow Analog UnceRtainty Estimation (FLAURE), to estimate the uncertainty component relating to the limited number of verticals. High-resolution reference gaugings (with 31 or more verticals) are used to assess this component through a statistical analysis. Instead of subsampling the verticals of these reference stream-gaugings purely at random, a subsampling method is developed that mimics the behavior of a hydrometric technician. A sampling quality index (SQI) is suggested and appears to be a more explanatory variable than the number of verticals. This index takes into account the spacing between verticals and the variation of unit flow between two verticals. To compute the
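    For context, the velocity-area computation whose uncertainty these methods address sums panel discharges of depth × velocity × width across the verticals. A sketch of the mid-section variant (function and variable names are hypothetical, not part of FLAURE):

```python
def midsection_discharge(positions, depths, velocities):
    """Mid-section velocity-area method: each vertical represents a
    panel extending halfway to its neighbouring verticals."""
    n = len(positions)
    total = 0.0
    for i in range(n):
        left = positions[0] if i == 0 else 0.5 * (positions[i - 1] + positions[i])
        right = positions[-1] if i == n - 1 else 0.5 * (positions[i] + positions[i + 1])
        total += depths[i] * velocities[i] * (right - left)
    return total
```

    With verticals at 0, 1, 2, 3 and 4 m, uniform depth 2 m and velocity 0.5 m/s, the function returns a discharge of 4 m³/s; removing verticals from such a section mirrors the subsampling experiment described above.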

  6. Cost Implications of Uncertainty in CO{sub 2} Storage Resource Estimates: A Review

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Steven T., E-mail: sanderson@usgs.gov [National Center, U.S. Geological Survey (United States)

    2017-04-15

    Carbon capture from stationary sources and geologic storage of carbon dioxide (CO{sub 2}) is an important option to include in strategies to mitigate greenhouse gas emissions. However, the potential costs of commercial-scale CO{sub 2} storage are not well constrained, stemming from the inherent uncertainty in storage resource estimates coupled with a lack of detailed estimates of the infrastructure needed to access those resources. Storage resource estimates are highly dependent on storage efficiency values or storage coefficients, which are calculated based on ranges of uncertain geological and physical reservoir parameters. If dynamic factors (such as variability in storage efficiencies, pressure interference, and acceptable injection rates over time), reservoir pressure limitations, boundaries on migration of CO{sub 2}, consideration of closed or semi-closed saline reservoir systems, and other possible constraints on the technically accessible CO{sub 2} storage resource (TASR) are accounted for, it is likely that only a fraction of the TASR could be available without incurring significant additional costs. Although storage resource estimates typically assume that any issues with pressure buildup due to CO{sub 2} injection will be mitigated by reservoir pressure management, estimates of the costs of CO{sub 2} storage generally do not include the costs of active pressure management. Production of saline waters (brines) could be essential to increasing the dynamic storage capacity of most reservoirs, but including the costs of this critical method of reservoir pressure management could increase current estimates of the costs of CO{sub 2} storage by two times, or more. Even without considering the implications for reservoir pressure management, geologic uncertainty can significantly impact CO{sub 2} storage capacities and costs, and contribute to uncertainty in carbon capture and storage (CCS) systems. Given the current state of available information and the

  7. Best-estimate analysis and decision making under uncertainty

    International Nuclear Information System (INIS)

    Orechwa, Y.

    2004-01-01

    In many engineering analyses of system safety the traditional reliance on conservative evaluation model calculations is being replaced with so-called best-estimate analysis. These best-estimate analyses differentiate themselves from the traditional conservative analyses through two ingredients, namely realistic models and an account of the residual uncertainty associated with the model calculations. Best-estimate analysis, in the context of this paper, refers to the numerical evaluation of system properties of interest in situations where direct confirmatory measurements are not feasible. A decision with regard to the safety of the system is then made based on the computed numerical values of the system properties of interest. These situations generally arise in the design of systems that require computed and generally nontrivial extrapolations from the available data. In the case of nuclear reactors, examples are criticality of spent fuel pools, neutronic parameters of new advanced designs where insufficient material is available for mockup critical experiments, and the large break loss of coolant accident (LOCA). In this paper, the LOCA case is used to discuss best-estimate analysis and decision making. Central to decision making is information. Thus, of interest is the source, quantity and quality of the information obtained in a best-estimate analysis, and used to define the acceptance criteria and to formulate a decision rule. This in effect expands the problem from the calculation of a conservative margin to a predefined acceptance criterion, to the formulation of a consistent decision rule and the computation of a test statistic for application of the decision rule. The latter view is a necessary condition for developing risk-informed decision rules, and, thus, the relation between design basis analysis criteria and probabilistic risk assessment criteria is key. The discussion is in the context of making a decision under uncertainty for a reactor

  8. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  9. Plurality of Type A evaluations of uncertainty

    Science.gov (United States)

    Possolo, Antonio; Pintar, Adam L.

    2017-10-01

    The evaluations of measurement uncertainty involving the application of statistical methods to measurement data (Type A evaluations as specified in the Guide to the Expression of Uncertainty in Measurement, GUM) comprise the following three main steps: (i) developing a statistical model that captures the pattern of dispersion or variability in the experimental data, and that relates the data either to the measurand directly or to some intermediate quantity (input quantity) that the measurand depends on; (ii) selecting a procedure for data reduction that is consistent with this model and that is fit for the purpose that the results are intended to serve; (iii) producing estimates of the model parameters, or predictions based on the fitted model, and evaluations of uncertainty that qualify either those estimates or these predictions, and that are suitable for use in subsequent uncertainty propagation exercises. We illustrate these steps in uncertainty evaluations related to the measurement of the mass fraction of vanadium in a bituminous coal reference material, including the assessment of the homogeneity of the material, and to the calibration and measurement of the amount-of-substance fraction of a hydrochlorofluorocarbon in air, and of the age of a meteorite. Our goal is to expose the plurality of choices that can reasonably be made when taking each of the three steps outlined above, and to show that different choices typically lead to different estimates of the quantities of interest, and to different evaluations of the associated uncertainty. In all the examples, the several alternatives considered represent choices that comparably competent statisticians might make, but who differ in the assumptions that they are prepared to rely on, and in their selection of approach to statistical inference. They represent also alternative treatments that the same statistician might give to the same data when the results are intended for different purposes.
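    As a concrete anchor for step (iii), the most familiar Type A evaluation under the Gaussian model reduces to the sample mean as the estimate and the standard uncertainty of the mean, s/√n; a minimal sketch:

```python
import statistics

def type_a(readings):
    """Classical Type A evaluation: the sample mean as the estimate
    and s / sqrt(n) as its standard uncertainty."""
    n = len(readings)
    return statistics.fmean(readings), statistics.stdev(readings) / n ** 0.5

# Five hypothetical repeated readings of the same quantity.
estimate, u = type_a([10.1, 9.9, 10.0, 10.2, 9.8])
```

    The plurality discussed in the paper begins exactly here: a different dispersion model (e.g. heavier-tailed errors) or a different reduction procedure would yield a different estimate and a different uncertainty from the same five numbers.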

  10. Lidar-derived estimate and uncertainty of carbon sink in successional phases of woody encroachment

    Science.gov (United States)

    Sankey, Temuulen; Shrestha, Rupesh; Sankey, Joel B.; Hardegree, Stuart; Strand, Eva

    2013-07-01

    Woody encroachment is a globally occurring phenomenon that contributes to the global carbon sink. The magnitude of this contribution needs to be estimated at regional and local scales to address uncertainties present in the global- and continental-scale estimates, and to guide regional policy and management in balancing restoration activities, including removal of woody plants, with greenhouse gas mitigation goals. The objective of this study was to estimate carbon stored in various successional phases of woody encroachment. Using lidar measurements of individual trees, we present high-resolution estimates of aboveground carbon storage in juniper woodlands. Segmentation analysis of lidar point cloud data identified a total of 60,628 juniper tree crowns across four watersheds. Tree heights, canopy cover, and density derived from lidar were strongly correlated with field measurements of 2613 juniper stems measured in 85 plots (30 × 30 m). Aboveground total biomass of individual trees was estimated using a regression model with lidar-derived height and crown area as predictors (Adj. R2 = 0.76, p < 0.001, RMSE = 0.58 kg). The predicted mean aboveground woody carbon storage for the study area was 677 g/m2. Uncertainty in carbon storage estimates was examined with a Monte Carlo approach that addressed major error sources. The uncertainty analysis predicted ranges of 0.35 - 143.6 kg for individual-tree aboveground woody carbon and of 0.5 - 1.25 kg for the associated standard deviation. Later successional phases of woody encroachment had, on average, twice the aboveground carbon relative to earlier phases. Woody encroachment might be more successfully managed and balanced with carbon storage goals by identifying priority areas in earlier phases of encroachment where intensive treatments are most effective.
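    The Monte Carlo treatment described here can be sketched generically: perturb the model inputs with their assumed uncertainties, re-evaluate the regression, and summarize the spread of predictions. The allometric coefficients and input uncertainties below are hypothetical, not those fitted in the study:

```python
import random

def biomass(height, crown_area, a=0.8, b=1.2):
    """Hypothetical linear allometric model (the study's fitted
    coefficients are not reproduced here)."""
    return a * height + b * crown_area

def mc_carbon(height, crown_area, u_h, u_ca, n=20000, seed=1):
    """Monte Carlo propagation: perturb the inputs with Gaussian
    uncertainty and summarize the spread of model predictions."""
    rng = random.Random(seed)
    draws = [biomass(rng.gauss(height, u_h), rng.gauss(crown_area, u_ca))
             for _ in range(n)]
    mean = sum(draws) / n
    sd = (sum((x - mean) ** 2 for x in draws) / (n - 1)) ** 0.5
    return mean, sd

mean_c, sd_c = mc_carbon(6.0, 4.0, 0.5, 1.0)
```

    Because the toy model is linear, the sampled standard deviation can be checked against the analytic value sqrt((a·u_h)² + (b·u_ca)²); for a nonlinear allometry, the Monte Carlo spread is what replaces that closed form.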

  11. A Bayesian analysis of sensible heat flux estimation: Quantifying uncertainty in meteorological forcing to improve model prediction

    KAUST Repository

    Ershadi, Ali

    2013-05-01

    The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model. The Bayesian approach allows for an explicit quantification of the uncertainties in input variables: a source of error generally ignored in surface heat flux estimation. An application using field measurements from the Soil Moisture Experiment 2002 is presented. The spatial variability of selected input meteorological variables in a multitower site is used to formulate the prior estimates for the sampling uncertainties, and the likelihood function is formulated assuming Gaussian errors in the SEBS model. Land surface temperature, air temperature, and wind speed were estimated by sampling their posterior distribution using a Markov chain Monte Carlo algorithm. Results verify that Bayesian-inferred air temperature and wind speed were generally consistent with those observed at the towers, suggesting that local observations of these variables were spatially representative. Uncertainties in the land surface temperature appear to have the strongest effect on the estimated sensible heat flux, with Bayesian-inferred values differing by up to ±5°C from the observed data. These differences suggest that the footprint of the in situ measured land surface temperature is not representative of the larger-scale variability. As such, these measurements should be used with caution in the calculation of surface heat fluxes and highlight the importance of capturing the spatial variability in the land surface temperature: particularly, for remote sensing retrieval algorithms that use this variable for flux estimation.
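    A minimal version of the sampling machinery used here is a random-walk Metropolis algorithm; the toy one-dimensional posterior below (a Gaussian prior on air temperature updated by a single observation, all numbers hypothetical) is purely illustrative and is not the SEBS model:

```python
import math
import random

def metropolis(logpost, x0, step, n, seed=0):
    """Random-walk Metropolis sampler: draws n (correlated) samples
    from the distribution proportional to exp(logpost(x))."""
    rng = random.Random(seed)
    x = x0
    lp = logpost(x)
    samples = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy posterior: prior N(20, 2^2) on air temperature combined with one
# observation of 21.0 degC having measurement sd 1.0.
def log_posterior(t):
    return -0.5 * ((t - 20.0) / 2.0) ** 2 - 0.5 * ((t - 21.0) / 1.0) ** 2

draws = metropolis(log_posterior, 20.0, 1.0, 40000, seed=3)[2000:]
```

    The conjugate answer for this toy case is a posterior mean of 20.8 °C with standard deviation ≈0.89 °C, which the chain reproduces after burn-in.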

  12. Uncertainty analysis guide

    International Nuclear Information System (INIS)

    Andres, T.H.

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
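    Two of the propagation routes listed here, series approximations and Monte Carlo methods, can be contrasted on a toy model f = x·y, for which the first-order series result has a closed form that sampling should reproduce (illustrative sketch, not taken from the AECL guide):

```python
import math
import random

def series_u(x, y, ux, uy):
    """First-order (series) propagation for f = x*y:
    u_f**2 = (y*ux)**2 + (x*uy)**2."""
    return math.hypot(y * ux, x * uy)

def monte_carlo_u(x, y, ux, uy, n=50000, seed=7):
    """Monte Carlo propagation for the same model: sample the inputs,
    evaluate f, and report the sample standard deviation."""
    rng = random.Random(seed)
    vals = [rng.gauss(x, ux) * rng.gauss(y, uy) for _ in range(n)]
    m = sum(vals) / n
    return (sum((v - m) ** 2 for v in vals) / (n - 1)) ** 0.5
```

    For small relative input uncertainties the two agree closely; Monte Carlo becomes the safer choice when the model is strongly nonlinear over the input spread.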

  13. Uncertainty analysis guide

    Energy Technology Data Exchange (ETDEWEB)

    Andres, T.H

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)

  14. Comparison of two different methods for the uncertainty estimation of circle diameter measurements using an optical coordinate measuring machine

    DEFF Research Database (Denmark)

    Morace, Renata Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2005-01-01

    This paper deals with the uncertainty estimation of measurements performed on optical coordinate measuring machines (CMMs). Two different methods were used to assess the uncertainty of circle diameter measurements using an optical CMM: the sensitivity analysis developing an uncertainty budget...

  15. Metamodel for Efficient Estimation of Capacity-Fade Uncertainty in Li-Ion Batteries for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jaewook Lee

    2015-06-01

    Full Text Available This paper presents an efficient method for estimating capacity-fade uncertainty in lithium-ion batteries (LIBs) in order to integrate it into the battery-management system (BMS) of electric vehicles, which requires simple and inexpensive computation for successful application. The study uses the pseudo-two-dimensional (P2D) electrochemical model, which simulates the battery state by solving a system of coupled nonlinear partial differential equations (PDEs). The model parameters that are responsible for electrode degradation are identified and estimated, based on battery data obtained from the charge cycles. The Bayesian approach, with parameters estimated as probability distributions, is employed to account for uncertainties arising in the model and battery data. The Markov chain Monte Carlo (MCMC) technique is used to draw samples from the distributions. The complex computations that solve a PDE system for each sample are avoided by employing a polynomial-based metamodel. As a result, the computational cost is reduced from 5.5 h to a few seconds, enabling the integration of the method into the vehicle BMS. Using this approach, the conservative bound of capacity fade can be determined for the vehicle in service, which represents the safety margin reflecting the uncertainty.
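    The metamodel idea, replacing an expensive solver with a cheap fitted polynomial, can be illustrated with a toy one-dimensional quadratic surrogate (this stands in for, and is far simpler than, the P2D battery model):

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x**2 via the 3x3
    normal equations, solved by Gauss-Jordan elimination."""
    powers = [sum(x ** k for x in xs) for k in range(5)]
    A = [[powers[i + j] for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for i in range(3):
        p = A[i][i]
        A[i] = [a / p for a in A[i]]
        b[i] /= p
        for r in range(3):
            if r != i:
                f = A[r][i]
                A[r] = [a - f * ai for a, ai in zip(A[r], A[i])]
                b[r] -= f * b[i]
    return b

# Pretend these five points came from the expensive simulator; the
# fitted surrogate is then evaluated at negligible cost.
c0, c1, c2 = fit_quadratic([0, 1, 2, 3, 4], [1.0, 3.5, 7.0, 11.5, 17.0])

def surrogate(x):
    return c0 + c1 * x + c2 * x * x
```

    Once fitted, `surrogate` can be evaluated millions of times inside an MCMC loop, which is the trade the paper exploits to bring 5.5 h of PDE solves down to seconds.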

  16. Importance of tree basic density in biomass estimation and associated uncertainties

    DEFF Research Database (Denmark)

    Njana, Marco Andrew; Meilby, Henrik; Eid, Tron

    2016-01-01

    Key message Aboveground and belowground tree basic densities varied between and within the three mangrove species. If appropriately determined and applied, basic density may be useful in estimation of tree biomass. Predictive accuracy of the common (i.e. multi-species) models including aboveground...... of sustainable forest management, conservation and enhancement of carbon stocks (REDD+) initiatives offer an opportunity for sustainable management of forests including mangroves. In carbon accounting for REDD+, it is required that carbon estimates prepared for monitoring reporting and verification schemes...... and examine uncertainties in estimation of tree biomass using indirect methods. Methods This study focused on three dominant mangrove species (Avicennia marina (Forssk.) Vierh, Sonneratia alba J. Smith and Rhizophora mucronata Lam.) in Tanzania. A total of 120 trees were destructively sampled for aboveground...

  17. Quantifying Uncertainty in Estimation of Potential Recharge in Tropical and Temperate Catchments using a Crop Model and Microwave Remote Sensing

    Science.gov (United States)

    Krishnan Kutty, S.; Sekhar, M.; Ruiz, L.; Tomer, S. K.; Bandyopadhyay, S.; Buis, S.; Guerif, M.; Gascuel-odoux, C.

    2012-12-01

    Groundwater recharge in a semi-arid region is generally low, but can exhibit high spatial variability depending on the soil type and plant cover. The potential recharge (the drainage flux just beneath the root zone) is found to be sensitive to water holding capacity and rooting depth (Rushton, 2003). Simple water balance approaches for recharge estimation often fail to consider the effect of plant cover, growth phases and rooting depth. Hence a crop model based approach might be better suited to assess the sensitivity of recharge for various crop-soil combinations in agricultural catchments. Martinez et al. (2009), using a root zone modelling approach to estimate groundwater recharge, stressed that future studies should focus on quantifying the uncertainty in recharge estimates due to uncertainty in soil water parameters such as soil layers, field capacity, rooting depth etc. Uncertainty in the parameters may arise from the uncertainties in the variables retrieved from satellite (surface soil moisture and leaf area index). Hence a good estimate of the parameters as well as their uncertainty is essential for a reliable estimate of the potential recharge. In this study we focus on assessing the sensitivity of the potential recharge to crop and soil types by using the generic crop model STICS. The effect of uncertainty in the soil parameters on the estimates of recharge and its uncertainty is investigated. The multi-layer soil water parameters and their uncertainty are estimated by inversion of the STICS model using the GLUE approach. Surface soil moisture and LAI, either retrieved from microwave remote sensing data or measured in field plots (Sreelash et al., 2012), were found to provide good estimates of the soil water properties, and therefore both data sets were used in this study to estimate the parameters and the potential recharge for a combination of soil-crop systems. These investigations were made in two field experimental catchments. 
The first one is in the tropical semi
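    The GLUE inversion mentioned above can be sketched in a few lines: sample parameter sets from prior ranges, score each against observations with an informal likelihood measure, and retain the best-scoring fraction as the behavioural ensemble. The toy linear "crop model" below is purely illustrative and is not STICS:

```python
import random

def glue(simulate, obs, priors, n=5000, keep=0.1, seed=4):
    """Minimal GLUE: sample parameter sets from uniform priors, score
    each with an informal likelihood (here 1/SSE), and keep the
    best-scoring fraction as the behavioural ensemble."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n):
        theta = [rng.uniform(lo, hi) for lo, hi in priors]
        sse = sum((s - o) ** 2 for s, o in zip(simulate(theta), obs))
        scored.append((1.0 / (sse + 1e-12), theta))
    scored.sort(reverse=True)
    return scored[: int(n * keep)]

# Toy linear 'crop model': drainage proportional to one parameter.
def simulate(theta):
    return [theta[0] * x for x in (1.0, 2.0, 3.0)]

behavioural = glue(simulate, [2.0, 4.0, 6.0], [(0.0, 4.0)])
```

    The spread of the behavioural parameter values is exactly the parameter uncertainty that is then propagated to the recharge estimate.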

  18. Inferring uncertainty from interval estimates: Effects of alpha level and numeracy

    Directory of Open Access Journals (Sweden)

    Luke F. Rinne

    2013-05-01

    Full Text Available Interval estimates are commonly used to descriptively communicate the degree of uncertainty in numerical values. Conventionally, low alpha levels (e.g., .05) ensure a high probability of capturing the target value between interval endpoints. Here, we test whether alpha levels and individual differences in numeracy influence distributional inferences. In the reported experiment, participants received prediction intervals for fictitious towns' annual rainfall totals (assuming approximately normal distributions). Then, participants estimated probabilities that future totals would be captured within varying margins about the mean, indicating the approximate shapes of their inferred probability distributions. Results showed that low alpha levels (vs. moderate levels, e.g., .25) more frequently led to inferences of over-dispersed approximately normal distributions or approximately uniform distributions, reducing estimate accuracy. Highly numerate participants made more accurate estimates overall, but were more prone to inferring approximately uniform distributions. These findings have important implications for presenting interval estimates to various audiences.

  19. Comparing welfare estimates across stated preference and uncertainty elicitation formats for air quality improvements in Nairobi, Kenya

    NARCIS (Netherlands)

    Ndambiri, H.; Brouwer, R.; Mungatana, E.

    2016-01-01

    The effect of preference uncertainty on estimated willingness to pay (WTP) is examined using identical payment cards and alternative uncertainty elicitation procedures in three split samples, focusing on air quality improvement in Nairobi. The effect of the stochastic payment card (SPC) and

  20. Uncertainty analysis in vulnerability estimations for elements at risk- a review of concepts and some examples on landslides

    Science.gov (United States)

    Ciurean, R. L.; Glade, T.

    2012-04-01

    Decision under uncertainty is a constant of everyday life and an important component of risk management and governance. Recently, experts have emphasized the importance of quantifying uncertainty in all phases of landslide risk analysis. Due to its multi-dimensional and dynamic nature, (physical) vulnerability is inherently complex, and the "degree of loss" estimates are imprecise and to some extent even subjective. Uncertainty analysis introduces quantitative modeling approaches that allow for a more explicitly objective output, improving the risk management process as well as enhancing communication between various stakeholders for better risk governance. This study presents a review of concepts for uncertainty analysis in vulnerability of elements at risk to landslides. Different semi-quantitative and quantitative methods are compared based on their feasibility in real-world situations, hazard dependency, process stage in vulnerability assessment (i.e. input data, model, output), and applicability within an integrated landslide hazard and risk framework. The resulting observations will help to identify current gaps and future needs in vulnerability assessment, including estimation of uncertainty propagation, transferability of the methods, and development of visualization tools, but they also address basic questions such as what uncertainty is and how it can be quantified or treated in a reliable and reproducible way.

  1. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    Science.gov (United States)

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. 
Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  2. Estimation of Peaking Factor Uncertainty due to Manufacturing Tolerance using Statistical Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyung Hoon; Park, Ho Jin; Lee, Chung Chan; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The purpose of this paper is to study the effect on output parameters in the lattice physics calculation of input uncertainties such as manufacturing deviations from nominal values for material composition and geometric dimensions. In a nuclear design and analysis, lattice physics calculations are usually employed to generate lattice parameters for the nodal core simulation and pin power reconstruction. These lattice parameters, which consist of homogenized few-group cross-sections, assembly discontinuity factors, and form-functions, can be affected by input uncertainties which arise from three different sources: 1) multi-group cross-section uncertainties, 2) the uncertainties associated with methods and modeling approximations utilized in lattice physics codes, and 3) fuel/assembly manufacturing uncertainties. In this paper, data provided by the light water reactor (LWR) uncertainty analysis in modeling (UAM) benchmark have been used as the manufacturing uncertainties. First, the effect of each input parameter has been investigated through sensitivity calculations at the fuel assembly level. Then, the uncertainty in the prediction of the peaking factor due to the most sensitive input parameter has been estimated using the statistical sampling method, often called the brute force method. For this analysis, the two-dimensional transport lattice code DeCART2D and its ENDF/B-VII.1 based 47-group library were used to perform the lattice physics calculation. Sensitivity calculations have been performed in order to study the influence of manufacturing tolerances on the lattice parameters. The manufacturing tolerance that has the largest influence on the k-inf is the fuel density; the second most sensitive parameter is the outer clad diameter.
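    The brute-force statistical sampling step can be sketched with a toy linear response standing in for the DeCART2D lattice calculation; the nominal density, tolerance half-width, and sensitivity coefficient below are all hypothetical:

```python
import random
import statistics

def peaking_factor(fuel_density):
    """Toy linear response standing in for the lattice calculation."""
    return 1.45 + 0.02 * (fuel_density - 10.4)

def tolerance_uncertainty(nominal=10.4, tol=0.05, n=10000, seed=2):
    """Brute-force sampling: draw the toleranced input uniformly over
    [nominal - tol, nominal + tol] and report the mean and standard
    deviation of the output as its uncertainty."""
    rng = random.Random(seed)
    out = [peaking_factor(rng.uniform(nominal - tol, nominal + tol))
           for _ in range(n)]
    return statistics.fmean(out), statistics.stdev(out)

pf_mean, pf_u = tolerance_uncertainty()
```

    For a uniform tolerance of half-width t, the input standard deviation is t/√3, so this linear toy case lets the sampled output uncertainty be checked analytically.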

  3. Estimated SAGE II ozone mixing ratios in early 1993 and comparisons with Stratospheric Photochemistry, Aerosols and Dynamic Expedition measurements

    Science.gov (United States)

    Yue, G. K.; Veiga, R. E.; Poole, L. R.; Zawodny, J. M.; Proffitt, M. H.

    1994-01-01

    An empirical time-series model for estimating ozone mixing ratios, based on Stratospheric Aerosol and Gas Experiment II (SAGE II) monthly mean ozone data for the period October 1984 through June 1991, has been developed. The modeling results for ozone mixing ratios in the 10- to 30-km region in the early months of 1993 are presented. In situ ozone profiles obtained by a dual-beam UV-absorption ozone photometer during the Stratospheric Photochemistry, Aerosols and Dynamics Expedition (SPADE) campaign, May 1-14, 1993, are compared with the model results. With the exception of two profiles at altitudes below 16 km, ozone mixing ratios derived by the model and measured by the ozone photometer are in relatively good agreement within their individual uncertainties. The identified discrepancies in the two profiles are discussed.

  4. Uncertainty of Volatility Estimates from Heston Greeks

    Directory of Open Access Journals (Sweden)

    Oliver Pfante

    2018-01-01

    Full Text Available Volatility is a widely recognized measure of market risk. As volatility is not observed, it has to be estimated from market prices, i.e., as the implied volatility from option prices. The volatility index VIX, which makes volatility a tradeable asset in its own right, is computed from near- and next-term put and call options on the S&P 500 with more than 23 days and less than 37 days to expiration and non-vanishing bid. In the present paper we quantify the information content of the constituents of the VIX about the volatility of the S&P 500 in terms of the Fisher information matrix. Assuming that observed option prices are centered on the theoretical price provided by Heston's model, perturbed by additive Gaussian noise, we relate their Fisher information matrix to the Greeks in the Heston model. We find that the prices of options contained in the VIX basket allow for reliable estimates of the volatility of the S&P 500 with negligible uncertainty as long as volatility is large enough. Interestingly, if volatility drops below a critical value of roughly 3%, inferences from option prices become imprecise because Vega, the derivative of a European option w.r.t. volatility, and thereby the Fisher information, nearly vanishes.

  5. Estimating the Uncertainty of Tensile Strength Measurement for A Photocured Material Produced by Additive Manufacturing

    Directory of Open Access Journals (Sweden)

    Adamczak Stanisław

    2014-08-01

    Full Text Available The aim of this study was to estimate the measurement uncertainty for a material produced by additive manufacturing. The material investigated was FullCure 720 photocured resin, which was used to fabricate tensile specimens with a Connex 350 3D printer based on PolyJet technology. The tensile strength of the specimens, established through static tensile testing, was used to determine the measurement uncertainty. There is a need for extensive research into the performance of model materials obtained via 3D printing, as they have not been studied as thoroughly as metal alloys or plastics, the most common structural materials. In this analysis, the measurement uncertainty was estimated using a larger number of samples than usual, i.e., thirty instead of the typical ten. The results can be very useful to engineers who design models and finished products using this material. The investigations also show how wide the scatter of results is.
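    The Type A evaluation underlying such a tensile-test uncertainty estimate can be sketched as follows; the specimen values below are hypothetical, not the paper's data.

```python
import math

def type_a_uncertainty(samples, coverage_k=2.0):
    """Type A evaluation: standard uncertainty of the mean from repeated
    measurements, expanded with a coverage factor (k ~ 2 for ~95 %)."""
    n = len(samples)
    mean = sum(samples) / n
    # sample standard deviation (n - 1 in the denominator)
    s = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    u = s / math.sqrt(n)              # standard uncertainty of the mean
    return mean, u, coverage_k * u    # mean, u, expanded uncertainty U

# hypothetical tensile-strength readings in MPa (not the paper's data)
strengths = [60.1, 59.8, 60.5, 59.6, 60.2, 60.0, 59.9, 60.3, 60.4, 59.7]
mean, u, U = type_a_uncertainty(strengths)
print(f"mean = {mean:.2f} MPa, u = {u:.3f} MPa, U(k=2) = {U:.3f} MPa")
```

With more replicates (thirty instead of ten, as in the study), u shrinks roughly with 1/sqrt(n), which is one motivation for the larger sample size.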

  6. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In the practical cases studied, the estimates for the likelihood weights converge impractically slowly with the sample size compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to interpret intuitively than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivate the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
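    The convergence the abstract describes — averaging likelihoods over sampled systematic errors approaching the multivariate-Gaussian likelihood computed by matrix inversion — can be illustrated with a toy example. The data, error sizes and sample count below are invented for illustration (one fully correlated systematic error shared by three points):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy experiment: 3 correlated points, one fully correlated systematic error
y_obs = np.array([1.2, 0.9, 1.1])   # "measured" values (hypothetical)
y_mod = np.array([1.0, 1.0, 1.0])   # model prediction
sig_r = 0.1                          # random (uncorrelated) st. dev.
sig_s = 0.2                          # systematic (shared) st. dev.

# conventional likelihood: multivariate Gaussian, covariance inverted directly
cov = sig_r**2 * np.eye(3) + sig_s**2 * np.ones((3, 3))
d = y_obs - y_mod
L_exact = np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt(
    (2.0 * np.pi) ** 3 * np.linalg.det(cov))

# inversion-free estimate: sample the shared systematic shift and average
# the diagonal Gaussian likelihoods conditional on each sampled shift
shifts = rng.normal(0.0, sig_s, size=200_000)
cond = np.exp(-0.5 * ((d[None, :] - shifts[:, None]) / sig_r) ** 2) / (
    np.sqrt(2.0 * np.pi) * sig_r)
L_mc = cond.prod(axis=1).mean()

print(L_exact, L_mc)  # the two estimates agree as the sample size grows
```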

  7. Group-Contribution based Property Estimation and Uncertainty analysis for Flammability-related Properties

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens

    2016-01-01

    regression and outlier treatment have been applied to achieve high accuracy. Furthermore, linear error propagation based on the covariance matrix of the estimated parameters was performed. Therefore, every estimated property value of the flammability-related properties is reported together with its corresponding 95%-confidence interval of the prediction. Compared to existing models, the developed ones have a higher accuracy, are simple to apply and provide uncertainty information on the calculated prediction. The average relative error and correlation coefficient are 11.5% and 0.99 for LFL, 15.9% and 0.91 for UFL, 2...

  8. Uncertainty estimation of predictions of peptides' chromatographic retention times in shotgun proteomics.

    Science.gov (United States)

    Maboudi Afkham, Heydar; Qiu, Xuanbin; The, Matthew; Käll, Lukas

    2017-02-15

    Liquid chromatography is frequently used as a means to reduce the complexity of peptide mixtures in shotgun proteomics. For such systems, the time when a peptide is released from a chromatography column and registered in the mass spectrometer is referred to as the peptide's retention time. Using heuristics or machine learning techniques, previous studies have demonstrated that it is possible to predict the retention time of a peptide from its amino acid sequence. In this paper, we apply Gaussian Process Regression to the feature representation of a previously described predictor, Elude. Using this framework, we demonstrate that it is possible to estimate the uncertainty of the prediction made by the model. Here we show how this uncertainty relates to the actual error of the prediction. In our experiments, we observe a strong correlation between the estimated uncertainty provided by Gaussian Process Regression and the actual prediction error. This relation provides us with new means for assessment of the predictions. We demonstrate how a subset of the peptides can be selected with lower prediction error compared to the whole set. We also demonstrate how such predicted standard deviations can be used for designing adaptive windowing strategies. Contact: lukas.kall@scilifelab.se. Our software and the data used in our experiments are publicly available and can be downloaded from https://github.com/statisticalbiotechnology/GPTime. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
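    A minimal sketch of how Gaussian Process Regression attaches a standard deviation to each prediction: a hand-rolled 1-D GP with an RBF kernel on invented toy data, not the Elude feature representation or the GPTime implementation.

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, sig_f=1.0, sig_n=0.1):
    """GP regression with an RBF kernel: returns the predictive mean and
    standard deviation (the per-prediction uncertainty estimate)."""
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return sig_f**2 * np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + sig_n**2 * np.eye(len(X))   # kernel + observation noise
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    # predictive variance: prior variance minus what the data explain
    var = sig_f**2 + sig_n**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(var)

# toy 1-D stand-in for a peptide-feature -> retention-time mapping
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.1, 2.9, 4.2])
mean, std = gp_predict(X, y, np.array([1.5, 10.0]))
# std is small near the training data and grows far from it, mirroring how
# the estimated uncertainty tracks the expected prediction error
print(mean, std)
```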

  9. Estimation of uncertainty in TLD calibration

    International Nuclear Information System (INIS)

    Hasabelrasoul, H. A.

    2013-07-01

    In this study, thermoluminescence dosimeters (TLDs) were used as individual monitoring devices to ensure quality assurance and quality control in individual monitoring. The uncertainty was evaluated in the calibration coefficients of two readers and in the radiation dose after irradiation in the SSDL laboratory. Fifty samples were selected for the study and placed in an oven at a temperature of 400 degrees for an hour to obtain a zero (background) signal, and the zero count was taken with reader (1) and reader (2). The samples were then irradiated in the SSDL with caesium-137 to a dose of 5 mGy and annealed again in the oven at 100 degrees for 10 minutes; 10 chips were used for calibration and their counts read out with reader one and reader two. The reader calibration factor (RCF) was found to be 1.47 and 1.11 for the two readers, respectively, with uncertainties in the RCF of 0.430629 and 0.431973. The radiation dose was then measured for the fifty samples irradiated to a dose of 5 mGy and read with reader 1 and reader 2; the uncertainty found for each reader was 0.490446 and 0.587602, respectively. (Author)

  10. Uncertainties in neural network model based on carbon dioxide concentration for occupancy estimation

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Azimil Gani; Rahman, Haolia; Kim, Jung-Kyung; Han, Hwataik [Kookmin University, Seoul (Korea, Republic of)

    2017-05-15

    Demand control ventilation is employed to save energy by adjusting the airflow rate according to the ventilation load of a building. This paper investigates a method for occupancy estimation by using a dynamic neural network model based on carbon dioxide concentration in an occupied zone. The method can be applied to most commercial and residential buildings where human effluents are to be ventilated. An indoor simulation program, CONTAMW, is used to generate indoor CO{sub 2} data corresponding to various occupancy schedules and airflow patterns to train neural network models. Coefficients of variation are obtained depending on the complexities of the physical parameters as well as the system parameters of the neural networks, such as the numbers of hidden neurons and tapped delay lines. We intend to identify the uncertainties caused by the model parameters themselves, by excluding uncertainties in input data inherent in measurement. Our results show that estimation accuracy is highly influenced by the frequency of occupancy variation but not significantly influenced by fluctuation in the airflow rate. Furthermore, we discuss the applicability and validity of the present method based on passive environmental conditions for estimating occupancy in a room from the viewpoint of demand control ventilation applications.

  11. Assessment of uncertainties in soil erosion and sediment yield estimates at ungauged basins: an application to the Garra River basin, India

    Science.gov (United States)

    Swarnkar, Somil; Malini, Anshu; Tripathi, Shivam; Sinha, Rajiv

    2018-04-01

    High soil erosion and excessive sediment load are serious problems in several Himalayan river basins. To apply mitigation procedures, precise estimation of soil erosion and sediment yield with associated uncertainties are needed. Here, the revised universal soil loss equation (RUSLE) and the sediment delivery ratio (SDR) equations are used to estimate the spatial pattern of soil erosion (SE) and sediment yield (SY) in the Garra River basin, a small Himalayan tributary of the River Ganga. A methodology is proposed for quantifying and propagating uncertainties in SE, SDR and SY estimates. Expressions for uncertainty propagation are derived by first-order uncertainty analysis, making the method viable even for large river basins. The methodology is applied to investigate the relative importance of different RUSLE factors in estimating the magnitude and uncertainties in SE over two distinct morphoclimatic regimes of the Garra River basin, namely the upper mountainous region and the lower alluvial plains. Our results suggest that average SE in the basin is very high (23 ± 4.7 t ha-1 yr-1) with higher values in the upper mountainous region (92 ± 15.2 t ha-1 yr-1) compared to the lower alluvial plains (19.3 ± 4 t ha-1 yr-1). Furthermore, the topographic steepness (LS) and crop practice (CP) factors exhibit higher uncertainties than other RUSLE factors. The annual average SY is estimated at two locations in the basin - Nanak Sagar Dam (NSD) for the period 1962-2008 and Husepur gauging station (HGS) for 1987-2002. The SY at NSD and HGS are estimated to be 6.9 ± 1.2 × 105 t yr-1 and 6.7 ± 1.4 × 106 t yr-1, respectively, and the estimated 90 % interval contains the observed values of 6.4 × 105 t yr-1 and 7.2 × 106 t yr-1, respectively. The study demonstrated the usefulness of the proposed methodology for quantifying uncertainty in SE and SY estimates at ungauged basins.
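    The first-order uncertainty propagation used for RUSLE-type estimates can be sketched for a product of independent factors, where relative variances add in quadrature; the factor means and uncertainties below are hypothetical, not the Garra basin values.

```python
import math

def product_uncertainty(factors):
    """First-order (delta-method) uncertainty of a product of independent
    factors, e.g. RUSLE soil erosion A = R*K*LS*C*P: relative variances add."""
    value = 1.0
    rel_var = 0.0
    for mean, u in factors:
        value *= mean
        rel_var += (u / mean) ** 2
    return value, value * math.sqrt(rel_var)

# hypothetical factor means and standard uncertainties (not the paper's values)
factors = [(1200.0, 120.0),  # R: rainfall erosivity
           (0.03, 0.006),    # K: soil erodibility
           (4.0, 1.0),       # LS: topographic steepness
           (0.2, 0.05),      # C: crop practice
           (0.9, 0.05)]      # P: support practice
A, uA = product_uncertainty(factors)
print(f"A = {A:.1f} +/- {uA:.1f} t/ha/yr")
```

In this sketch the LS and C terms carry the largest relative uncertainties and therefore dominate the combined uncertainty, the same qualitative behaviour the abstract reports for the LS and CP factors.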

  12. Uncertainty Estimation of Shear-wave Velocity Structure from Bayesian Inversion of Microtremor Array Dispersion Data

    Science.gov (United States)

    Dosso, S. E.; Molnar, S.; Cassidy, J.

    2010-12-01

    Bayesian inversion of microtremor array dispersion data is applied, with evaluation of data errors and model parameterization, to produce the most-probable shear-wave velocity (VS) profile together with quantitative uncertainty estimates. Generally, the most important property characterizing earthquake site response is the subsurface VS structure. The microtremor array method determines phase velocity dispersion of Rayleigh surface waves from multi-instrument recordings of urban noise. Inversion of dispersion curves for VS structure is a non-unique and nonlinear problem such that meaningful evaluation of confidence intervals is required. Quantitative uncertainty estimation requires not only a nonlinear inversion approach that samples models proportional to their probability, but also rigorous estimation of the data error statistics and an appropriate model parameterization. A Bayesian formulation represents the solution of the inverse problem in terms of the posterior probability density (PPD) of the geophysical model parameters. Markov-chain Monte Carlo methods are used with an efficient implementation of Metropolis-Hastings sampling to provide an unbiased sample from the PPD to compute parameter uncertainties and inter-relationships. Nonparametric estimation of a data error covariance matrix from residual analysis is applied with rigorous a posteriori statistical tests to validate the covariance estimate and the assumption of a Gaussian error distribution. The most appropriate model parameterization is determined using the Bayesian information criterion (BIC), which provides the simplest model consistent with the resolving power of the data. Parameter uncertainties are found to be under-estimated when data error correlations are neglected and when compressional-wave velocity and/or density (nuisance) parameters are fixed in the inversion. Bayesian inversion of microtremor array data is applied at two sites in British Columbia, the area of highest seismic risk in
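    The Metropolis-Hastings step described above — sampling models in proportion to their probability and reading uncertainties off the sample — can be sketched on a toy one-parameter problem. The "dispersion data", noise level and proposal width are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "dispersion data": noisy observations of a single velocity parameter
obs = np.array([3.1, 2.9, 3.2, 3.0, 2.8])
sigma = 0.2  # assumed data error standard deviation

def log_post(v):
    # flat prior on (0, 10); Gaussian likelihood
    if not 0.0 < v < 10.0:
        return -np.inf
    return -0.5 * np.sum((obs - v) ** 2) / sigma**2

# Metropolis-Hastings: accept proposals with probability min(1, p'/p)
samples, v = [], 5.0
for _ in range(20_000):
    prop = v + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(v):
        v = prop
    samples.append(v)
samples = np.array(samples[2_000:])  # discard burn-in

lo, hi = np.percentile(samples, [2.5, 97.5])  # 95 % credible interval
print(f"mean = {samples.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```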

  13. Validation and uncertainty estimation of fast neutron activation analysis method for Cu, Fe, Al, Si elements in sediment samples

    International Nuclear Information System (INIS)

    Sunardi; Samin Prihatin

    2010-01-01

    Validation and uncertainty estimation of the fast neutron activation analysis (FNAA) method for Cu, Fe, Al and Si elements in sediment samples has been conducted. The aim of the research is to confirm whether the FNAA method still conforms to the ISO/IEC 17025:2005 standard. The research covered verification, performance, validation of FNAA and uncertainty estimation. The SRM 8704 standard and sediment samples were weighed to certain weights, irradiated with 14 MeV fast neutrons and then counted using gamma spectrometry. The results of the method validation for the Cu, Fe, Al and Si elements showed that the accuracy was in the range of 95.89-98.68%, while the precision was in the range of 1.13-2.29%. The uncertainty estimates for Cu, Fe, Al and Si were 2.67, 1.46, 1.71 and 1.20%, respectively. From these data, it can be concluded that the FNAA method is still reliable and valid for analysis of element contents in samples, because the accuracy is above 95% and the precision is below 5%, while the uncertainties are relatively small and suitable for the 95% level of confidence, where the maximum acceptable uncertainty is 5%. (author)
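    The accuracy (as recovery against a certified value) and precision (as relative standard deviation) figures reported in such validations can be computed as below; the replicate values and certified value are hypothetical.

```python
def validation_metrics(measured, certified):
    """Accuracy as recovery (%) against a certified value and precision
    as relative standard deviation (%) of repeated measurements."""
    n = len(measured)
    mean = sum(measured) / n
    s = (sum((x - mean) ** 2 for x in measured) / (n - 1)) ** 0.5
    accuracy = 100.0 * mean / certified   # recovery
    precision = 100.0 * s / mean          # relative standard deviation
    return accuracy, precision

# hypothetical Cu replicates (mg/kg) against a certified reference value
acc, prec = validation_metrics([98.0, 97.5, 99.0, 98.5, 97.8], 100.0)
print(f"accuracy = {acc:.2f} %, RSD = {prec:.2f} %")
```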

  14. Ideas underlying the Quantification of Margins and Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Pilch, Martin, E-mail: mpilch@sandia.gov [Department 1514, Sandia National Laboratories, Albuquerque, NM 87185-0828 (United States); Trucano, Timothy G. [Department 1411, Sandia National Laboratories, Albuquerque, NM 87185-0370 (United States); Helton, Jon C. [Department of Mathematics and Statistics, Arizona State University, Tempe, AZ 85287-1804 (United States)

    2011-09-15

    Key ideas underlying the application of Quantification of Margins and Uncertainties (QMU) to nuclear weapons stockpile lifecycle decisions are described. While QMU is a broad process and methodology for generating critical technical information to be used in U.S. nuclear weapon stockpile management, this paper emphasizes one component, which is information produced by computational modeling and simulation. In particular, the following topics are discussed: (i) the key principles of developing QMU information in the form of Best Estimate Plus Uncertainty, (ii) the need to separate aleatory and epistemic uncertainty in QMU, and (iii) the properties of risk-informed decision making (RIDM) that are best suited for effective application of QMU. The paper is written at a high level, but provides an extensive bibliography of useful papers for interested readers to deepen their understanding of the presented ideas.

  15. Uncertainties in Early-Stage Capital Cost Estimation of Process Design – A Case Study on Biorefinery Design

    International Nuclear Information System (INIS)

    Cheali, Peam; Gernaey, Krist V.; Sin, Gürkan

    2015-01-01

    Capital investment, next to the product demand, sales, and production costs, is one of the key metrics commonly used for project evaluation and feasibility assessment. Estimating the investment costs of a new product/process alternative during early-stage design is a challenging task, which is especially relevant in biorefinery research where information about new technologies and experience with new technologies is limited. A systematic methodology for uncertainty analysis of cost data is proposed that employs: (a) bootstrapping as a regression method when cost data are available; and, (b) the Monte Carlo technique as an error propagation method based on expert input when cost data are not available. Four well-known models for early-stage cost estimation are reviewed and analyzed using the methodology. The significance of uncertainties of cost data for early-stage process design is highlighted using the synthesis and design of a biorefinery as a case study. The impact of uncertainties in cost estimation on the identification of optimal processing paths is indeed found to be profound. To tackle this challenge, a comprehensive techno-economic risk analysis framework is presented to enable robust decision-making under uncertainties. One of the results using order-of-magnitude estimates shows that the production of diethyl ether and 1,3-butadiene are the most promising with the lowest economic risks (among the alternatives considered) of 0.24 MM$/a and 4.6 MM$/a, respectively.
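    The bootstrapping step of the proposed methodology can be sketched for a simple power-law cost-capacity model; the data points and the 2000-resample choice are illustrative assumptions, not the study's cost data.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical (capacity, installed cost) pairs for a process unit
capacity = np.array([10.0, 20.0, 50.0, 100.0, 200.0, 400.0])
cost = np.array([1.1, 1.9, 3.4, 5.2, 8.1, 13.0])

# power-law cost model cost = a * capacity^b, fitted in log-log space
X = np.vstack([np.ones_like(capacity), np.log(capacity)]).T
y = np.log(cost)

def fit(Xm, ym):
    return np.linalg.lstsq(Xm, ym, rcond=None)[0]

# bootstrap: refit on resampled data to get the spread of the exponent b
b_samples = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))
    b_samples.append(fit(X[idx], y[idx])[1])
b_samples = np.array(b_samples)

b_hat = fit(X, y)[1]
lo, hi = np.percentile(b_samples, [2.5, 97.5])
print(f"scaling exponent b = {b_hat:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```

The resulting interval on the scaling exponent is exactly the kind of cost-data uncertainty that the abstract argues should be propagated into the techno-economic comparison of processing paths.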

  16. Uncertainties in Early-Stage Capital Cost Estimation of Process Design – A Case Study on Biorefinery Design

    Energy Technology Data Exchange (ETDEWEB)

    Cheali, Peam; Gernaey, Krist V.; Sin, Gürkan, E-mail: gsi@kt.dtu.dk [Department of Chemical and Biochemical Engineering, Technical University of Denmark, Lyngby (Denmark)

    2015-02-06

    Capital investment, next to the product demand, sales, and production costs, is one of the key metrics commonly used for project evaluation and feasibility assessment. Estimating the investment costs of a new product/process alternative during early-stage design is a challenging task, which is especially relevant in biorefinery research where information about new technologies and experience with new technologies is limited. A systematic methodology for uncertainty analysis of cost data is proposed that employs: (a) bootstrapping as a regression method when cost data are available; and, (b) the Monte Carlo technique as an error propagation method based on expert input when cost data are not available. Four well-known models for early-stage cost estimation are reviewed and analyzed using the methodology. The significance of uncertainties of cost data for early-stage process design is highlighted using the synthesis and design of a biorefinery as a case study. The impact of uncertainties in cost estimation on the identification of optimal processing paths is indeed found to be profound. To tackle this challenge, a comprehensive techno-economic risk analysis framework is presented to enable robust decision-making under uncertainties. One of the results using order-of-magnitude estimates shows that the production of diethyl ether and 1,3-butadiene are the most promising with the lowest economic risks (among the alternatives considered) of 0.24 MM$/a and 4.6 MM$/a, respectively.

  17. GUM approach to uncertainty estimations for online 220Rn concentration measurements using Lucas scintillation cell

    International Nuclear Information System (INIS)

    Sathyabama, N.

    2014-01-01

    It is now widely recognized that, when all of the known or suspected components of errors have been evaluated and corrected, there still remains an uncertainty, that is, a doubt about how well the result of the measurement represents the value of the quantity being measured. Evaluation of measurement data - Guide to the expression of Uncertainty in Measurement (GUM) is a guidance document, the purpose of which is to promote full information on how uncertainty statements are arrived at and to provide a basis for the international comparison of measurement results. In this paper, uncertainty estimations following GUM guidelines have been made for the measured values of online thoron concentrations using Lucas scintillation cell to prove that the correction for disequilibrium between 220 Rn and 216 Po is significant in online 220 Rn measurements
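    The GUM law of propagation behind such an estimate can be sketched as follows; the uncertainty-budget entries are hypothetical, not the paper's values.

```python
import math

def gum_combined(components, k=2.0):
    """GUM law of propagation for uncorrelated inputs:
    u_c^2 = sum (c_i * u_i)^2, expanded uncertainty U = k * u_c."""
    u_c = math.sqrt(sum((c * u) ** 2 for c, u in components))
    return u_c, k * u_c

# hypothetical budget for a scintillation-cell activity measurement:
# (sensitivity coefficient, standard uncertainty) per input quantity
components = [(1.0, 0.02),   # counting statistics
              (1.0, 0.015),  # detection efficiency
              (1.0, 0.01)]   # volume / background correction
u_c, U = gum_combined(components)
print(f"u_c = {u_c:.4f}, U (k=2) = {U:.4f}")
```

A correction such as the 220Rn/216Po disequilibrium term discussed in the abstract would enter this budget as one more (sensitivity, uncertainty) pair.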

  18. Diversity Dynamics in Nymphalidae Butterflies: Effect of Phylogenetic Uncertainty on Diversification Rate Shift Estimates

    Science.gov (United States)

    Peña, Carlos; Espeland, Marianne

    2015-01-01

    The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution. PMID:25830910

  19. Diversity dynamics in Nymphalidae butterflies: effect of phylogenetic uncertainty on diversification rate shift estimates.

    Directory of Open Access Journals (Sweden)

    Carlos Peña

    Full Text Available The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.

  20. Quantifying Surface Energy Flux Estimation Uncertainty Using Land Surface Temperature Observations

    Science.gov (United States)

    French, A. N.; Hunsaker, D.; Thorp, K.; Bronson, K. F.

    2015-12-01

    Remote sensing with thermal infrared is widely recognized as a good way to estimate surface heat fluxes, map crop water use, and detect water-stressed vegetation. When combined with net radiation and soil heat flux data, observations of sensible heat fluxes derived from surface temperatures (LST) are indicative of instantaneous evapotranspiration (ET). There are, however, substantial reasons LST data may not provide the best way to estimate ET. For example, it is well known that observations and models of LST, air temperature, or estimates of transport resistances may be so inaccurate that physically based models nevertheless yield non-meaningful results. Furthermore, using visible and near-infrared remote sensing observations collected at the same time as LST often yields physically plausible results because they are constrained by less dynamic surface conditions such as green fractional cover. Although sensitivity studies exist that help identify likely sources of error and uncertainty, ET studies typically do not provide a way to assess the relative importance of modeling ET with and without LST inputs. To better quantify model benefits and degradations due to LST observational inaccuracies, a Bayesian uncertainty study was undertaken using data collected in remote sensing experiments at Maricopa, Arizona. Visible, near-infrared and thermal infrared data were obtained from an airborne platform. The prior probability distribution of ET estimates was modeled using fractional cover, local weather data and a Penman-Monteith model, while the likelihood of LST data was modeled from a two-source energy balance model. Thus the posterior probabilities of ET represented the value added by using LST data. Results from an ET study over cotton grown in 2014 and 2015 showed significantly reduced ET confidence intervals when LST data were incorporated.

  1. Estimation of uncertainty in tracer gas measurement of air change rates.

    Science.gov (United States)

    Iizuka, Atsushi; Okuizumi, Yumiko; Yanagisawa, Yukio

    2010-12-01

    Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is complicated by the fact that many buildings are not a single fully mixed zone, which means many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of the air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of air change rate can be avoided. The proposed estimation method will be useful in practical ventilation measurements.
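    For the single fully mixed zone case, the tracer-gas concentration-decay relation and a first-order uncertainty estimate can be sketched as follows; the concentrations, time span and 5 % error level are hypothetical.

```python
import math

def air_change_rate(c0, ct, hours):
    """Tracer-gas decay in a single fully mixed zone:
    C(t) = C0 * exp(-N t), so N = ln(C0 / Ct) / t (air changes per hour)."""
    return math.log(c0 / ct) / hours

def ach_uncertainty(c0, ct, hours, rel_u_conc):
    """First-order propagation assuming independent relative concentration
    uncertainties; the time measurement is treated as exact."""
    n = air_change_rate(c0, ct, hours)
    u = math.sqrt(2.0) * rel_u_conc / hours   # from d(ln C0 - ln Ct)
    return n, u

# hypothetical decay: 100 ppm -> 36.8 ppm over 2 h, 5 % concentration error
n, u = ach_uncertainty(100.0, 36.8, 2.0, 0.05)
print(f"N = {n:.3f} 1/h, u(N) = {u:.3f} 1/h")
```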

  2. Developing first time-series of land surface temperature from AATSR with uncertainty estimates

    Science.gov (United States)

    Ghent, Darren; Remedios, John

    2013-04-01

    Land surface temperature (LST) is the radiative skin temperature of the land, and is one of the key parameters in the physics of land-surface processes on regional and global scales. Earth Observation satellites provide the opportunity to obtain global coverage of LST approximately every 3 days or less. One such source of satellite-retrieved LST has been the Advanced Along-Track Scanning Radiometer (AATSR), with LST retrieval being implemented in the AATSR Instrument Processing Facility in March 2004. Here we present the first regional and global time-series of LST data from AATSR with estimates of uncertainty. Mean changes in temperature over the last decade will be discussed along with regional patterns. Although time-series across all three ATSR missions have previously been constructed (Kogler et al., 2012), the use of low-resolution auxiliary data in the retrieval algorithm and non-optimal cloud masking resulted in time-series artefacts. As such, considerable ESA-supported development has been carried out on the AATSR data to address these concerns. This includes the integration of high-resolution auxiliary data into the retrieval algorithm and subsequent generation of coefficients and tuning parameters, plus the development of an improved cloud mask based on the simulation of clear-sky conditions from radiance transfer modelling (Ghent et al., in prep.). Any inference on this LST record is, though, of limited value without the accompaniment of an uncertainty estimate; the Joint Committee for Guides in Metrology defines an uncertainty as "a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand", the measurand being the value of the particular quantity to be measured. Furthermore, pixel-level uncertainty fields are a mandatory requirement in the on-going preparation of the LST product for the upcoming Sea and Land Surface Temperature Radiometer (SLSTR) instrument on-board Sentinel-3

  3. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    Science.gov (United States)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how this relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when using prognostics to make critical decisions.
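    How a Kalman filter carries an explicit uncertainty (the error covariance) alongside its state estimate can be sketched with a scalar filter; the noise settings and sensor readings below are invented for illustration, not the article's application.

```python
# 1-D Kalman filter tracking a degrading health parameter; the filter's
# error covariance p is the uncertainty attached to the state estimate x
def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    x, p, history = x0, p0, []
    for z in measurements:
        p = p + q                      # predict: process noise inflates p
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the measurement
        p = (1.0 - k) * p              # posterior covariance
        history.append((x, p))
    return history

# hypothetical noisy readings of a slowly drifting component parameter
zs = [0.52, 0.48, 0.51, 0.49, 0.50, 0.53, 0.50]
hist = kalman_1d(zs)
# the estimate converges while its variance (uncertainty) shrinks,
# which is what downstream remaining-useful-life predictions inherit
print([f"x={x:.3f}, p={p:.4f}" for x, p in hist])
```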

  4. Estimation of correlated and uncorrelated uncertainties associated with performance tests of activity meters

    International Nuclear Information System (INIS)

    Sousa, C.H.S.; Teixeira, G.J.; Peixoto, J.G.P.

    2014-01-01

    Activity meters should undergo performance tests to verify their functionality, as per technical recommendations. This study estimated the correlated and uncorrelated expanded uncertainties associated with the results of tests conducted on three instruments: two detectors with ionization chambers and one with Geiger-Müller tubes. For this, we used a standard reference source certified by the National Institute of Technology and Standardization. The methodology of this research was based on the protocols listed in the technical document of the International Atomic Energy Agency. Later, two quantities presenting a real correlation were treated as correlated, improving the expanded uncertainty to 3.7%. (author)

  5. Estimation of environment-related properties of chemicals for design of sustainable processes: Development of group-contribution+ (GC+) models and uncertainty analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Kalakul, Sawitree; Sarup, Bent

    2012-01-01

    The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated...... property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality......, poly-functional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox is used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22...

  6. DS02 uncertainty analysis

    International Nuclear Information System (INIS)

    Kaul, Dean C.; Egbert, Stephen D.; Woolson, William A.

    2005-01-01

    In order to avoid the pitfalls that so discredited DS86 and its uncertainty estimates, and to provide DS02 uncertainties that are both defensible and credible, this report not only presents the ensemble uncertainties assembled from uncertainties in individual computational elements and radiation dose components but also describes how these relate to comparisons between observed and computed quantities at critical intervals in the computational process. These comparisons include those between observed and calculated radiation free-field components, where observations include thermal- and fast-neutron activation and gamma-ray thermoluminescence, which are relevant to the estimated systematic uncertainty for DS02. The comparisons also include those between calculated and observed survivor shielding, where the observations consist of biodosimetric measurements for individual survivors, which are relevant to the estimated random uncertainty for DS02. (J.P.N.)

  7. Uncertainty estimation with bias-correction for flow series based on rating curve

    Science.gov (United States)

    Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta

    2014-03-01

    Streamflow discharge constitutes one of the fundamental data sets required to perform water balance studies and develop hydrological models. A rating curve, designed from a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series of reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available even though it has a significant impact on hydrological modelling. In this paper, we identify the discrepancy of the hydrographers' rating curves used to derive the historical discharge data series and propose a modification by bias correction, which also takes the form of a power function like the traditional rating curve. To obtain the uncertainty estimation, we further propose a both-side Box-Cox transformation to bring the regression residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to the gauging stations in the Flinders and Gilbert rivers in north-west Queensland, Australia.
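    The bias-corrected rating-curve approach described in this record can be sketched as follows. This is a minimal illustration on synthetic stage–discharge data, not the authors' implementation: the power-law form Q = a·h^b, the log-space fit, and the both-side Box-Cox residual treatment follow the abstract, but all parameter values are illustrative.

    ```python
    import numpy as np

    def boxcox(x, lam):
        # Box-Cox transform; reduces to log when lambda == 0
        return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

    rng = np.random.default_rng(0)
    a, b, lam = 5.0, 1.6, 0.3
    stage = rng.uniform(0.5, 3.0, 200)
    # synthetic discharge with multiplicative (heteroscedastic) noise
    discharge = a * stage**b * rng.lognormal(0.0, 0.05, stage.size)

    # fit the power-law rating curve in log space
    slope, intercept = np.polyfit(np.log(stage), np.log(discharge), 1)
    b_hat, a_hat = slope, np.exp(intercept)
    pred = a_hat * stage**b_hat

    # residuals after a both-side Box-Cox transform should be near-normal
    resid = boxcox(discharge, lam) - boxcox(pred, lam)

    # ensemble of discharge series by resampling transformed residuals,
    # then inverting the Box-Cox transform: x = (lam*y + 1)^(1/lam)
    ens = np.array([
        (lam * (boxcox(pred, lam) + rng.choice(resid, stage.size)) + 1.0)
        ** (1.0 / lam)
        for _ in range(500)
    ])
    lower, upper = np.percentile(ens, [2.5, 97.5], axis=0)
    print(b_hat)  # close to the true exponent 1.6
    ```

    The percentile bands attach an uncertainty range to every point of the generated discharge series.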

  8. Balancing uncertainty of context in ERP project estimation: an approach and a case study

    NARCIS (Netherlands)

    Daneva, Maia

    2010-01-01

    The increasing demand for Enterprise Resource Planning (ERP) solutions as well as the high rates of troubled ERP implementations and outright cancellations calls for developing effort estimation practices to systematically deal with uncertainties in ERP projects. This paper describes an approach -

  9. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
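    The non-parametric side of the comparison above can be sketched directly: subsample a continuous rain series at regular satellite-like revisit intervals, and measure how the interval-averaged estimates spread around the full average. The synthetic gamma-distributed rain series and all numbers here are illustrative assumptions, not the radar data set used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # synthetic hourly domain-average rain rate over 30 days (mm/h),
    # gamma-distributed to mimic intermittent, skewed rainfall
    hours = 30 * 24
    rain = rng.gamma(shape=0.1, scale=2.0, size=hours)

    def sampling_error(series, interval_h):
        # average over regular visits for each possible sampling phase,
        # then report the spread relative to the full-series mean
        means = [series[start::interval_h].mean() for start in range(interval_h)]
        return np.std(means) / series.mean()

    errors = {dt: sampling_error(rain, dt) for dt in (1, 3, 6, 12)}
    print(errors)  # relative sampling uncertainty grows with revisit interval
    ```

    Continuous sampling (interval 1 h) has zero sampling-related error by construction; sparser revisits leave more of the rainfall variability unobserved.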

  10. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    Science.gov (United States)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere
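    The key methodological point above, that temporally correlated random error in emission estimates averages out far more slowly than independent error, can be illustrated with an AR(1) error model. All numbers below are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_years, sigma = 10, 0.5  # annual error SD in Pg C/yr (illustrative)

    def decadal_mean_sd(rho, n=n_years, s=sigma, reps=20000):
        # simulate AR(1) annual errors and return the SD of their decadal mean
        e = np.zeros((reps, n))
        e[:, 0] = rng.normal(0, s, reps)
        for t in range(1, n):
            e[:, t] = rho * e[:, t - 1] + rng.normal(0, s * np.sqrt(1 - rho**2), reps)
        return e.mean(axis=1).std()

    sd_correlated = decadal_mean_sd(rho=0.95)
    sd_independent = decadal_mean_sd(rho=0.0)
    print(sd_correlated, sd_independent)
    ```

    With strong year-to-year correlation the decadal-mean uncertainty stays close to the annual uncertainty, whereas independent errors shrink roughly as 1/sqrt(n); this is why correlated reporting errors matter for budget-closure confidence.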

  11. An estimation of reactor thermal power uncertainty using UFM-based feedwater flow rate in nuclear power plants

    International Nuclear Information System (INIS)

    Byung Ryul Jung; Ho Cheol Jang; Byung Jin Lee; Se Jin Baik; Woo Hyun Jang

    2005-01-01

    Most pressurized water reactors (PWRs) use venturi meters (VMs) to measure the feedwater (FW) flow rate to the steam generator in the calorimetric measurement, which is used in the reactor thermal power (RTP) estimation. However, measurement drifts have been experienced due to anomalies on the venturi meter (generally called venturi meter fouling). Fouling tends to increase the measured pressure drop across the meter, which results in an indication of increased feedwater flow rate. The reactor thermal power is consequently overestimated, and the actual reactor power must be reduced to remain within regulatory limits. To overcome this fouling problem, the ultrasonic flow meter (UFM) has recently been gaining attention for measuring the feedwater flow rate. This paper presents the applicability of a UFM-based feedwater flow rate in the estimation of reactor thermal power uncertainty. The FW and RTP uncertainties are compared in terms of sensitivities between the VM- and UFM-based feedwater flow rates. Data from typical Optimized Power Reactor 1000 (OPR1000) plants are used to estimate the uncertainty. (authors)
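    The sensitivity comparison described above rests on a simple structure: calorimetric power is roughly flow rate times enthalpy rise, so the relative flow uncertainty propagates almost directly into the RTP uncertainty. A hedged sketch with wholly illustrative numbers (the enthalpies, flow rate, and meter uncertainties below are assumptions, not OPR1000 data):

    ```python
    import math

    # illustrative calorimetric balance: Q = m_dot * (h_steam - h_fw)
    m_dot, h_steam, h_fw = 1800.0, 2770.0, 990.0   # kg/s, kJ/kg (assumed)
    q_mw = m_dot * (h_steam - h_fw) / 1000.0       # thermal power, MW

    # assumed relative uncertainties: fouling-prone venturi vs ultrasonic meter
    u_flow = {"venturi": 0.012, "ufm": 0.004}
    u_enthalpy_rise = 0.003

    for meter, u_m in u_flow.items():
        # first-order propagation for a product of independent factors
        u_q = math.sqrt(u_m**2 + u_enthalpy_rise**2)
        print(meter, q_mw, u_q * q_mw)  # power and its absolute uncertainty
    ```

    Because the flow term dominates the combined uncertainty, reducing the flow-meter uncertainty shrinks the RTP uncertainty almost one-for-one.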

  12. Metatarsalgia caused by synovitis and instability of the metatarsophalangeal joint of the second toe

    International Nuclear Information System (INIS)

    Gerstner G, Juan Bernardo

    2002-01-01

    Synovitis and instability of the metatarsophalangeal (MP) joint of the second toe are the most frequent causes of metatarsalgia localized in this joint of the foot, frequently misdiagnosed and poorly managed by the general orthopedist. The natural history encompasses stages as early as synovitis without alteration of the peri-articular structures, passing through frank instability, and ending with angular deformities and complete dislocation of the MP joint. A meticulous and directed history, a precise physical examination, and classification of the diagnosis are the keys to successful management of this pathology. Surgical correction of this condition should always be combined with correction of associated deformities such as hallux valgus and claw toes.

  13. Quantifying type I and type II errors in decision-making under uncertainty : The case of GM crops

    NARCIS (Netherlands)

    Ansink, Erik; Wesseler, Justus

    2009-01-01

    In a recent paper, Hennessy and Moschini (American Journal of Agricultural Economics 88(2): 308-323, 2006) analyse the interactions between scientific uncertainty and costly regulatory actions. We use their model to analyse the costs of making type I and type II errors, in the context of the

  14. Quantifying type I and type II errors in decision-making under uncertainty: the case of GM crops

    NARCIS (Netherlands)

    Ansink, E.J.H.; Wesseler, J.H.H.

    2009-01-01

    In a recent paper, Hennessy and Moschini (American Journal of Agricultural Economics 88(2): 308-323, 2006) analyse the interactions between scientific uncertainty and costly regulatory actions. We use their model to analyse the costs of making type I and type II errors, in the context of the

  15. UNCERTAINTY ON RADIATION DOSES ESTIMATED BY BIOLOGICAL AND RETROSPECTIVE PHYSICAL METHODS.

    Science.gov (United States)

    Ainsbury, Elizabeth A; Samaga, Daniel; Della Monaca, Sara; Marrale, Maurizio; Bassinet, Celine; Burbidge, Christopher I; Correcher, Virgilio; Discher, Michael; Eakins, Jon; Fattibene, Paola; Güçlü, Inci; Higueras, Manuel; Lund, Eva; Maltar-Strmecki, Nadica; McKeever, Stephen; Rääf, Christopher L; Sholom, Sergey; Veronese, Ivan; Wieser, Albrecht; Woda, Clemens; Trompier, Francois

    2018-03-01

    Biological and physical retrospective dosimetry are recognised as key techniques to provide individual estimates of dose following unplanned exposures to ionising radiation. Whilst there has been a relatively large amount of recent development in the biological and physical procedures, development of statistical analysis techniques has failed to keep pace. The aim of this paper is to review the current state of the art in uncertainty analysis techniques across the 'EURADOS Working Group 10-Retrospective dosimetry' members, to give concrete examples of implementation of the techniques recommended in the international standards, and to further promote the use of Monte Carlo techniques to support characterisation of uncertainties. It is concluded that sufficient techniques are available and in use by most laboratories for acute, whole body exposures to highly penetrating radiation, but further work will be required to ensure that statistical analysis is always wholly sufficient for the more complex exposure scenarios.

  16. Forensic Entomology: Evaluating Uncertainty Associated With Postmortem Interval (PMI) Estimates With Ecological Models.

    Science.gov (United States)

    Faris, A M; Wang, H-H; Tarone, A M; Grant, W E

    2016-05-31

    Estimates of insect age can be informative in death investigations and, when certain assumptions are met, can be useful for estimating the postmortem interval (PMI). Currently, the accuracy and precision of PMI estimates are unknown, as error can arise from sources of variation such as measurement error, environmental variation, or genetic variation. Ecological models are abstract, mathematical representations of an ecological system that can make predictions about the dynamics of the real system. To quantify the variation associated with the pre-appearance interval (PAI), we developed an ecological model that simulates the colonization of vertebrate remains by Cochliomyia macellaria (Fabricius) (Diptera: Calliphoridae), a primary colonizer in the southern United States. The model is based on a development data set derived from a local population and represents the uncertainty in local temperature variability to address PMI estimates at local sites. After a PMI estimate is calculated for each individual, the model calculates the maximum, minimum, and mean PMI, as well as the range and standard deviation for stadia collected. The model framework presented here is one manner by which errors in PMI estimates can be addressed in court when no empirical data are available for the parameter of interest. We show that PAI is a potentially important source of error and that an ecological model is one way to evaluate its impact. Such models can be re-parameterized with any development data set, PAI function, temperature regime, assumption of interest, etc., to estimate PMI and quantify uncertainty that arises from specific prediction systems. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
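    The Monte Carlo logic of such a model, propagating temperature uncertainty through a development model into a PMI distribution, can be sketched as follows. The degree-day thresholds, base temperature, and temperature-error magnitude below are illustrative assumptions, not the paper's C. macellaria data set.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # assumed degree-day development model: the collected stadium requires
    # ADD_REQUIRED accumulated degree-days above T_BASE (illustrative values)
    ADD_REQUIRED, T_BASE = 250.0, 10.0

    def pmi_hours(temps_c):
        # hours until accumulated degree-days reach the requirement
        dd = np.cumsum(np.maximum(temps_c - T_BASE, 0.0) / 24.0)
        return np.searchsorted(dd, ADD_REQUIRED)

    # local temperature = mean diurnal cycle + random site-to-station offset
    hours = np.arange(24 * 40)
    estimates = []
    for _ in range(1000):
        offset = rng.normal(0.0, 1.5)  # assumed temperature error, deg C
        temps = 22.0 + 6.0 * np.sin(2 * np.pi * hours / 24.0) + offset
        estimates.append(pmi_hours(temps) / 24.0)  # days

    estimates = np.array(estimates)
    print(estimates.mean(), estimates.min(), estimates.max(), estimates.std())
    ```

    The resulting spread of PMI estimates is exactly the kind of maximum/minimum/mean/standard-deviation summary the abstract describes.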

  17. State Estimation for Sensor Monitoring System with Uncertainty and Disturbance

    Directory of Open Access Journals (Sweden)

    Jianhong Sun

    2014-10-01

    This paper considers the state estimation problem for a sensor monitoring system containing system uncertainty and nonlinear disturbance. In such a system, the state of each inner sensor node usually contains system uncertainty, and external noise often acts as a nonlinear term. In addition, information transmission in the system is time consuming. All of the above may destabilize the monitoring system, in which case sensor states could be wrongly sampled. Under these circumstances, a proper mathematical model is proposed, and by use of the Lipschitz condition the nonlinear term is transformed into a linear one. We further suppose that all sensor nodes are distributed, with no interference among them. By establishing a proper Lyapunov–Krasovskii functional, sufficient conditions are obtained by solving a linear matrix inequality to make the augmented error system stable, and the observer gains are also derived. Finally, an illustrative example shows that the observed values track the system states well, fully demonstrating the effectiveness of our result.

  18. Density meter algorithm and system for estimating sampling/mixing uncertainty

    International Nuclear Information System (INIS)

    Shine, E.P.

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses
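    Separating sampling/mixing uncertainty from measurement uncertainty, as the software described above does, is classically a one-way random-effects ANOVA over replicate measurements of multiple aliquots. A minimal sketch on synthetic density data (the layout, scatter magnitudes, and nominal density are illustrative assumptions, not SRP data):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_samples, n_reps = 12, 3
    sd_mix, sd_meas = 0.004, 0.001  # g/mL: true mixing and measurement scatter

    true_density = 1.350
    sample_means = true_density + rng.normal(0, sd_mix, n_samples)
    data = sample_means[:, None] + rng.normal(0, sd_meas, (n_samples, n_reps))

    # one-way ANOVA mean squares
    grand = data.mean()
    ms_between = n_reps * ((data.mean(axis=1) - grand) ** 2).sum() / (n_samples - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() \
        / (n_samples * (n_reps - 1))

    # variance components: within = measurement, excess between = sampling/mixing
    var_meas = ms_within
    var_mix = max((ms_between - ms_within) / n_reps, 0.0)
    print(var_mix ** 0.5, var_meas ** 0.5)  # estimates of sd_mix and sd_meas
    ```

    If the between-sample mean square is not significantly larger than the within-sample one, the non-uniformity (sampling/mixing) component is indistinguishable from zero.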

  20. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. 
This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
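    The propagation step described above can be illustrated on the standard symmetric stereo reconstruction, where the out-of-plane component is w = (u1 − u2) / (2 tan θ): planar uncertainties on u1 and u2 combine with the calibration-angle uncertainty through first-order partial derivatives. All numeric values below are illustrative assumptions, not the paper's data.

    ```python
    import math

    # out-of-plane velocity from two cameras at half-angle theta
    def w_of(u1, u2, theta):
        return (u1 - u2) / (2.0 * math.tan(theta))

    u1, u2, theta = 3.2, 1.8, math.radians(30.0)
    s_u, s_theta = 0.05, math.radians(0.5)  # assumed planar / angle uncertainties

    # first-order propagation with analytic partial derivatives
    dw_du = 1.0 / (2.0 * math.tan(theta))          # same magnitude for u1, u2
    dw_dtheta = -(u1 - u2) / (2.0 * math.sin(theta) ** 2)
    s_w = math.sqrt(2.0 * (dw_du * s_u) ** 2 + (dw_dtheta * s_theta) ** 2)
    print(w_of(u1, u2, theta), s_w)
    ```

    With zero disparity the angle term contributes little, matching the sensitivity finding quoted in the abstract; a larger u1 − u2 mismatch inflates the angle-uncertainty contribution.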

  1. On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo

    2016-02-08

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise from multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [2015. https://bitbucket.org/micardi/porescalemc.], which includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers
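    The multilevel Monte Carlo idea underlying the tool above can be sketched on a toy problem: write the fine-level expectation as the coarsest-level estimate plus a telescoping sum of level corrections, each estimated with coupled (shared-randomness) draws so that correction variances shrink and most samples can be spent on the cheap coarse level. The toy "solver" and all numbers are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def level_correction(level, n):
        # toy solver: discretization bias shrinks as 2^-level; shared draws
        # couple the fine and coarse levels so the correction has low variance
        x = rng.normal(0.0, 1.0, n)
        fine = np.sin(x) + 2.0 ** -level * x**2
        if level == 0:
            return fine.mean()
        coarse = np.sin(x) + 2.0 ** -(level - 1) * x**2
        return (fine - coarse).mean()

    # geometrically decreasing sample counts per level
    est = sum(level_correction(level, n)
              for level, n in enumerate([4000, 2000, 1000, 500, 250, 125]))
    print(est)  # approx E[sin(X)] + 2^-5 * E[X^2] = 0.03125 for X ~ N(0,1)
    ```

    In a real DRP setting the "levels" would be mesh resolutions or sample sizes, and the coupling would reuse the same random geometry realization on adjacent levels.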

  2. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    Science.gov (United States)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
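    The core move above, treating input-data errors as additional uncertain parameters inside an MCMC scheme, can be sketched on a toy linear model (not SRH-1D; the model, priors, and step sizes below are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # truth: observation y = k * (q_reported + input_bias) + noise,
    # where q_reported is an imperfectly known input (e.g., a flowrate)
    k_true, bias_true = 2.0, 0.5
    q = rng.uniform(1.0, 5.0, 50)
    y = k_true * (q + bias_true) + rng.normal(0, 0.2, q.size)

    def log_post(theta):
        k, bias = theta
        resid = y - k * (q + bias)
        # Gaussian likelihood (sd 0.2) + Gaussian prior on the input bias
        return -0.5 * np.sum((resid / 0.2) ** 2) - 0.5 * bias ** 2

    # random-walk Metropolis over (model parameter, input-error parameter)
    theta, lp = np.array([1.0, 0.0]), None
    lp = log_post(theta)
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, [0.02, 0.05])
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())

    chain = np.array(chain[5000:])
    print(chain.mean(axis=0))  # posterior means near (2.0, 0.5)
    ```

    The posterior now attributes part of the total uncertainty to the input bias rather than folding everything into the model parameter, which is the benefit the abstract reports.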

  3. Nonlinear parameter estimation in inviscid compressible flows in presence of uncertainties

    International Nuclear Information System (INIS)

    Jemcov, A.; Mathur, S.

    2004-01-01

    The focus of this paper is on the formulation and solution of inverse problems of parameter estimation using algorithmic differentiation. The inverse problem formulated here seeks to determine the input parameters that minimize a least squares functional with respect to certain target data. The formulation allows for uncertainty in the target data by considering the least squares functional in a stochastic basis described by the covariance of the target data. Furthermore, to allow for robust design, the formulation also accounts for uncertainties in the input parameters. This is achieved using the method of propagation of uncertainties using the directional derivatives of the output parameters with respect to unknown parameters. The required derivatives are calculated simultaneously with the solution using generic programming exploiting the template and operator overloading features of the C++ language. The methodology described here is general and applicable to any numerical solution procedure for any set of governing equations but for the purpose of this paper we consider a finite volume solution of the compressible Euler equations. In particular, we illustrate the method for the case of supersonic flow in a duct with a wedge. The parameter to be determined is the inlet Mach number and the target data is the axial component of velocity at the exit of the duct. (author)
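    The operator-overloading technique described above (C++ templates in the paper) is forward-mode algorithmic differentiation, which can be sketched in a few lines with a dual-number class. The objective and seed values below are illustrative, not the paper's Euler solver:

    ```python
    import math

    class Dual:
        """Dual number: value plus derivative, propagated by overloading."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __sub__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val - o.val, self.dot - o.dot)
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
        __rmul__ = __mul__

    def sin(x):
        # chain rule applied alongside the value computation
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

    # least-squares mismatch J(m) against target data, as in the inverse problem
    def objective(m, target=0.7):
        out = sin(m * m)      # stand-in for the flow-solver output
        r = out - target
        return r * r

    m = Dual(1.2, 1.0)        # seed derivative dm/dm = 1
    J = objective(m)
    print(J.val, J.dot)       # objective and dJ/dm in a single sweep
    ```

    The derivative emerges simultaneously with the solution, exactly the property the abstract exploits for propagating input-parameter uncertainties.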

  4. Assessment of uncertainties in soil erosion and sediment yield estimates at ungauged basins: an application to the Garra River basin, India

    Directory of Open Access Journals (Sweden)

    S. Swarnkar

    2018-04-01

    High soil erosion and excessive sediment load are serious problems in several Himalayan river basins. To apply mitigation procedures, precise estimation of soil erosion and sediment yield with associated uncertainties is needed. Here, the revised universal soil loss equation (RUSLE) and the sediment delivery ratio (SDR) equations are used to estimate the spatial pattern of soil erosion (SE) and sediment yield (SY) in the Garra River basin, a small Himalayan tributary of the River Ganga. A methodology is proposed for quantifying and propagating uncertainties in SE, SDR and SY estimates. Expressions for uncertainty propagation are derived by first-order uncertainty analysis, making the method viable even for large river basins. The methodology is applied to investigate the relative importance of different RUSLE factors in estimating the magnitude and uncertainties in SE over two distinct morphoclimatic regimes of the Garra River basin, namely the upper mountainous region and the lower alluvial plains. Our results suggest that average SE in the basin is very high (23 ± 4.7 t ha⁻¹ yr⁻¹), with higher values in the upper mountainous region (92 ± 15.2 t ha⁻¹ yr⁻¹) compared to the lower alluvial plains (19.3 ± 4 t ha⁻¹ yr⁻¹). Furthermore, the topographic steepness (LS) and crop practice (CP) factors exhibit higher uncertainties than other RUSLE factors. The annual average SY is estimated at two locations in the basin: Nanak Sagar Dam (NSD) for the period 1962-2008 and Husepur gauging station (HGS) for 1987-2002. The SY at NSD and HGS are estimated to be 6.9 ± 1.2 × 10⁵ t yr⁻¹ and 6.7 ± 1.4 × 10⁶ t yr⁻¹, respectively, and the estimated 90% interval contains the observed values of 6.4 × 10⁵ t yr⁻¹ and 7.2 × 10⁶ t yr⁻¹, respectively. The study demonstrated the usefulness of the proposed methodology for quantifying uncertainty in SE and
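    Because RUSLE is a product of factors (A = R · K · LS · C · P), first-order uncertainty analysis takes a particularly simple form: relative variances of the factors add. A sketch with wholly illustrative factor values and relative uncertainties (not the Garra basin values):

    ```python
    import math

    # illustrative RUSLE factor values and relative standard uncertainties
    factors = {
        "R":  (1200.0, 0.10),   # rainfall erosivity
        "K":  (0.032, 0.12),    # soil erodibility
        "LS": (4.5, 0.15),      # topographic steepness
        "C":  (0.20, 0.18),     # crop practice
        "P":  (1.0, 0.05),      # support practice
    }

    soil_erosion = math.prod(v for v, _ in factors.values())  # t/ha/yr
    # first-order propagation for a product: relative variances add
    rel_u = math.sqrt(sum(r**2 for _, r in factors.values()))
    print(soil_erosion, rel_u * soil_erosion)
    ```

    With these assumed inputs, the LS and C terms dominate the combined relative uncertainty, mirroring the abstract's finding that those two factors contribute most.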

  5. Handling uncertainty and networked structure in robot control

    CERN Document Server

    Tamás, Levente

    2015-01-01

    This book focuses on two challenges posed in robot control by the increasing adoption of robots in the everyday human environment: uncertainty and networked communication. Part I of the book describes learning control to address environmental uncertainty. Part II discusses state estimation, active sensing, and complex scenario perception to tackle sensing uncertainty. Part III completes the book with control of networked robots and multi-robot teams. Each chapter features in-depth technical coverage and case studies highlighting the applicability of the techniques, with real robots or in simulation. Platforms include mobile ground, aerial, and underwater robots, as well as humanoid robots and robot arms. Source code and experimental data are available at http://extras.springer.com. The text gathers contributions from academic and industry experts, and offers a valuable resource for researchers or graduate students in robot control and perception. It also benefits researchers in related areas, such as computer...

  6. Uncertainty in recharge estimation: impact on groundwater vulnerability assessments for the Pearl Harbor Basin, O'ahu, Hawai'i, U.S.A.

    Science.gov (United States)

    Giambelluca, Thomas W.; Loague, Keith; Green, Richard E.; Nullet, Michael A.

    1996-06-01

    In this paper, uncertainty in recharge estimates is investigated relative to its impact on assessments of groundwater contamination vulnerability using a relatively simple pesticide mobility index, attenuation factor (AF). We employ a combination of first-order uncertainty analysis (FOUA) and sensitivity analysis to investigate recharge uncertainties for agricultural land on the island of O'ahu, Hawai'i, that is currently, or has been in the past, under sugarcane or pineapple cultivation. Uncertainty in recharge due to recharge component uncertainties is 49% of the mean for sugarcane and 58% of the mean for pineapple. The components contributing the largest amounts of uncertainty to the recharge estimate are irrigation in the case of sugarcane and precipitation in the case of pineapple. For a suite of pesticides formerly or currently used in the region, the contribution to AF uncertainty of recharge uncertainty was compared with the contributions of other AF components: retardation factor (RF), a measure of the effects of sorption; soil-water content at field capacity (ΘFC); and pesticide half-life (t1/2). Depending upon the pesticide, the contribution of recharge to uncertainty ranks second or third among the four AF components tested. The natural temporal variability of recharge is another source of uncertainty in AF, because the index is calculated using the time-averaged recharge rate. Relative to the mean, recharge variability is 10%, 44%, and 176% for the annual, monthly, and daily time scales, respectively, under sugarcane, and 31%, 112%, and 344%, respectively, under pineapple. In general, uncertainty in AF associated with temporal variability in recharge at all time scales exceeds AF. 
For chemicals such as atrazine or diuron under sugarcane, and atrazine or bromacil under pineapple, the range of AF uncertainty due to temporal variability in recharge encompasses significantly higher levels of leaching potential at some locations than that indicated by the
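    The sensitivity of the attenuation factor to recharge can be sketched directly from its standard form (after Rao et al.): AF = exp(−ln 2 · t_r / t_1/2), with travel time t_r = d · RF · Θ_FC / q. All parameter values below are illustrative assumptions, not the study's O'ahu data.

    ```python
    import math

    def attenuation_factor(d_m, recharge_m_yr, theta_fc, rf, half_life_days):
        # fraction of applied pesticide surviving transport to depth d
        travel_time_days = d_m * rf * theta_fc / recharge_m_yr * 365.0
        return math.exp(-math.log(2.0) * travel_time_days / half_life_days)

    # AF response to recharge varying +/- 50% around an assumed mean of 1 m/yr
    for q in (0.5, 1.0, 1.5):
        print(q, attenuation_factor(5.0, q, 0.30, 4.0, 180.0))
    ```

    Because recharge sits in the exponent (through the travel time), even modest recharge uncertainty spans orders of magnitude in AF, which is why temporal recharge variability can dominate the index uncertainty as the abstract reports.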

  7. Taylor-series and Monte-Carlo-method uncertainty estimation of the width of a probability distribution based on varying bias and random error

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Uncertainties are typically assumed to be constant or a linear function of the measured value; however, this is generally not true. Particle image velocimetry (PIV) is one example of a measurement technique that has highly nonlinear, time-varying local uncertainties. Traditional uncertainty methods are not adequate for estimating the uncertainty of measurement statistics (mean and variance) in the presence of nonlinear, time-varying errors. Propagation of instantaneous uncertainty estimates into measured statistics is performed, allowing accurate uncertainty quantification of the time-mean and statistics of measurements such as PIV. It is shown that random errors will always elevate the measured variance, and thus turbulent statistics such as u'u'-bar. Within this paper, nonlinear, time-varying errors are propagated from instantaneous measurements into the measured mean and variance using the Taylor-series method. With these results and knowledge of the systematic and random uncertainty of each measurement, the uncertainty of the time-mean, the variance and the covariance can be found. Applicability of the Taylor-series uncertainty equations to time-varying systematic and random errors and asymmetric error distributions is demonstrated with Monte-Carlo simulations. The Taylor-series uncertainty estimates are always accurate for uncertainties on the mean quantity. The Taylor-series variance uncertainty is similar to the Monte-Carlo results for cases in which asymmetric random errors exist or the magnitude of the instantaneous variations in the random and systematic errors is near the ‘true’ variance. However, the Taylor-series method overpredicts the uncertainty in the variance when the instantaneous variations of the systematic errors are large or are of the same order of magnitude as the ‘true’ variance. (paper)
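
The claim that random errors always elevate the measured variance, by approximately the error variance that the Taylor-series method then removes, can be illustrated with a quick Monte Carlo sketch assuming Gaussian errors (the standard deviations are invented for illustration):

```python
import random
import statistics

random.seed(42)
N = 200_000
sigma_u = 2.0   # true fluctuation (e.g. turbulence) standard deviation
sigma_e = 1.0   # random measurement error standard deviation

# Instantaneous measurements: true fluctuating value plus random error.
samples = [random.gauss(0.0, sigma_u) + random.gauss(0.0, sigma_e)
           for _ in range(N)]

# Random error inflates the measured variance by ~sigma_e^2;
# subtracting the known error variance recovers the true variance.
measured_var = statistics.pvariance(samples)
corrected_var = measured_var - sigma_e ** 2
print(measured_var, corrected_var)
```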

  8. A probabilistic parametrization for geological uncertainty estimation using the ensemble Kalman filter (EnKF)

    NARCIS (Netherlands)

    Sebacher, B.; Hanea, R.G.; Heemink, A.

    2013-01-01

    In the past years, many applications of history-matching methods in general, and the ensemble Kalman filter in particular, have been proposed, especially in order to estimate fields that provide uncertainty in the stochastic process defined by the dynamical system of hydrocarbon recovery. Such fields can

  9. Fukushima Daiichi Unit 1 Uncertainty Analysis-Exploration of Core Melt Progression Uncertain Parameters-Volume II.

    Energy Technology Data Exchange (ETDEWEB)

    Denman, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) on the Fukushima Daiichi Unit 1 (1F1) accident progression with the MELCOR code. Volume I of the 1F1 UA discusses the physical modeling details and time history results of the UA. Volume II of the 1F1 UA discusses the statistical viewpoint. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC). The goal of this work was to perform a focused evaluation of uncertainty in core damage progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, fraction of intact fuel, vessel lower head failure), and in doing so assess the applicability of traditional sensitivity analysis techniques.

  10. Simplified propagation of standard uncertainties

    International Nuclear Information System (INIS)

    Shull, A.H.

    1997-01-01

    An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if a standard is adequate for its intended use or calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined, or is determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. The calculations are simplified by dividing the uncertainty components into subgroups of absolute and relative uncertainties. These methods also incorporate the International Organization for Standardization (ISO) concepts for combining systematic and random uncertainties as published in its Guide to the Expression of Uncertainty in Measurement. Details of the simplified methods and examples of their use are included in the paper.
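
For a model that is a pure product/quotient, this kind of shortcut reduces to a root-sum-of-squares of the relative uncertainties, with no partial derivatives needed. A sketch with hypothetical values for a gravimetrically prepared standard:

```python
import math

def rss(values):
    """Root-sum-of-squares combination of independent uncertainty components."""
    return math.sqrt(sum(v * v for v in values))

# Hypothetical prepared standard: concentration c = m * purity / V
m, u_m = 0.1000, 0.0002   # g, weighed mass and balance uncertainty
P, u_P = 0.999, 0.001     # mass-fraction purity and its uncertainty
V, u_V = 1.000, 0.002     # L, flask volume and its uncertainty

c = m * P / V             # g/L

# For a pure product/quotient model, relative standard uncertainties
# combine by RSS; the absolute uncertainty follows by rescaling.
u_rel = rss([u_m / m, u_P / P, u_V / V])
u_c = c * u_rel
print(c, u_c)
```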

  11. On the Reliability of Optimization Results for Trigeneration Systems in Buildings, in the Presence of Price Uncertainties and Erroneous Load Estimation

    Directory of Open Access Journals (Sweden)

    Antonio Piacentino

    2016-12-01

    Cogeneration and trigeneration plants are widely recognized as promising technologies for increasing energy efficiency in buildings. However, their overall potential is scarcely exploited, due to the difficulties in achieving economic viability and the risk of investment related to uncertainties in future energy loads and prices. Several stochastic optimization models have been proposed in the literature to account for uncertainties, but these instruments share a common reliance on user-defined probability functions for each stochastic parameter. Since such functions are hard to predict, this paper analyses the influence of erroneous estimation of the uncertain energy loads and prices on the optimal plant design and operation. With reference to a hotel building, a number of realistic scenarios is developed, exploring the most frequent errors occurring in the estimation of energy loads and prices. Then, profit-oriented optimizations are performed for the examined scenarios by means of a deterministic mixed integer linear programming algorithm. From a comparison of the achieved results, it emerges that: (i) the plant profitability is prevalently influenced by the average “spark spread” (i.e., the ratio between electricity and fuel prices) and, secondarily, by the shape of the daily price profiles; (ii) the “optimal sizes” of the main components are scarcely influenced by the daily load profiles, while they are more closely related to the average “power to heat” and “power to cooling” ratios of the building.

  12. Assessment of groundwater level estimation uncertainty using sequential Gaussian simulation and Bayesian bootstrapping

    Science.gov (United States)

    Varouchakis, Emmanouil; Hristopulos, Dionissios

    2015-04-01

    Space-time geostatistical approaches can improve the reliability of dynamic groundwater level models in areas with limited spatial and temporal data. Space-time residual Kriging (STRK) is a reliable method for spatiotemporal interpolation that can incorporate auxiliary information. The method usually leads to an underestimation of the prediction uncertainty. The uncertainty of spatiotemporal models is usually estimated by determining the space-time Kriging variance or by means of cross validation analysis. For de-trended data the former is not usually applied when complex spatiotemporal trend functions are assigned. A Bayesian approach based on the bootstrap idea and sequential Gaussian simulation are employed to determine the uncertainty of the spatiotemporal model (trend and covariance) parameters. These stochastic modelling approaches produce multiple realizations, rank the prediction results on the basis of specified criteria and capture the range of the uncertainty. The correlation of the spatiotemporal residuals is modeled using a non-separable space-time variogram based on the Spartan covariance family (Hristopulos and Elogne 2007, Varouchakis and Hristopulos 2013). We apply these simulation methods to investigate the uncertainty of groundwater level variations. The available dataset consists of bi-annual (dry and wet hydrological period) groundwater level measurements in 15 monitoring locations for the time period 1981 to 2010. The space-time trend function is approximated using a physical law that governs the groundwater flow in the aquifer in the presence of pumping. The main objective of this research is to compare the performance of two simulation methods for prediction uncertainty estimation. In addition, we investigate the performance of the Spartan spatiotemporal covariance function for spatiotemporal geostatistical analysis. Hristopulos, D.T. and Elogne, S.N. 2007. Analytic properties and covariance functions for a new class of generalized Gibbs

  13. Reducing uncertainty of Monte Carlo estimated fatigue damage in offshore wind turbines using FORM

    DEFF Research Database (Denmark)

    H. Horn, Jan-Tore; Jensen, Jørgen Juncher

    2016-01-01

    Uncertainties related to fatigue damage estimation of non-linear systems are highly dependent on the tail behaviour and extreme values of the stress range distribution. By using a combination of the First Order Reliability Method (FORM) and Monte Carlo simulations (MCS), the accuracy of the fatigue...

  14. IAEA CRP on HTGR Uncertainties in Modeling: Assessment of Phase I Lattice to Core Model Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Rouxelin, Pascal Nicolas [Idaho National Lab. (INL), Idaho Falls, ID (United States); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High temperature gas cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I-2c and the use of the cross section data in Exercise II-1a of the prismatic benchmark, which are defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best-estimate results obtained for Exercise I-2a (fresh single-fuel block), Exercise I-2b (depleted single-fuel block), and Exercise I-2c (super cell), in addition to the first results of an investigation into cross section generation effects for the super-cell problem. The two-dimensional deterministic code NEWT (New ESC-based Weighting Transport), included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package, was used for the cross section evaluation, and the results obtained were compared to those of the three-dimensional stochastic SCALE module KENO VI. The NEWT cross section libraries were generated for several permutations of the current benchmark super-cell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise

  15. Estimated Uncertainties in the Idaho National Laboratory Matched-Index-of-Refraction Lower Plenum Experiment

    International Nuclear Information System (INIS)

    Donald M. McEligot; Hugh M. McIlroy, Jr.; Ryan C. Johnson

    2007-01-01

    The purpose of the fluid dynamics experiments in the MIR (Matched-Index-of-Refraction) flow system at Idaho National Laboratory (INL) is to develop benchmark databases for the assessment of Computational Fluid Dynamics (CFD) solutions of the momentum equations, scalar mixing, and turbulence models for typical Very High Temperature Reactor (VHTR) plenum geometries in the limiting case of negligible buoyancy and constant fluid properties. The experiments use optical techniques, primarily particle image velocimetry (PIV), in the INL MIR flow system. The benefit of the MIR technique is that it permits optical measurements of flow characteristics in passages and around objects to be obtained without placing a disturbing transducer in the flow field and without distortion of the optical paths. The objective of the present report is to develop an understanding of the magnitudes of experimental uncertainties in the results to be obtained in such experiments. Unheated MIR experiments are first steps when the geometry is complicated; one does not want to use a computational technique that cannot even handle constant properties properly. This report addresses the general background, requirements for benchmark databases, estimation of experimental uncertainties in mean velocities and turbulence quantities, the MIR experiment, PIV uncertainties, positioning uncertainties, and other contributing measurement uncertainties

  16. Estimating Soil Organic Carbon stocks and uncertainties for the National inventory Report - a study case in Southern Belgium

    Science.gov (United States)

    Chartin, Caroline; Stevens, Antoine; Kruger, Inken; Esther, Goidts; Carnol, Monique; van Wesemael, Bas

    2016-04-01

    As many other countries, Belgium complies with Annex I of the United Nations Framework Convention on Climate Change (UNFCCC). Belgium thus reports its annual greenhouse gas emissions in its national inventory report (NIR), with a distinction between emissions/sequestration in cropland and grassland (EU decision 529/2013). The CO2 fluxes are based on changes in SOC stocks computed for each of these two types of land use. These stocks are specified for each of the agricultural regions, which correspond to areas with similar agricultural practices (rotations and/or livestock) and yield potentials. For Southern Belgium (Wallonia), consisting of ten agricultural regions, the Soil Monitoring Network (SMN) 'CARBOSOL' has been developed over the last decade to survey the state of agricultural soils by quantifying SOC stocks and their evolution in a reasonable number of locations, within the time and funds allocated. Unfortunately, the 592 points of the CARBOSOL network do not allow a representative and sound estimation of SOC stocks and their uncertainties for the 20 possible combinations of land use and agricultural region. Moreover, the SMN CARBOSOL is based on a legacy database following a convenience sampling strategy rather than a statistical scheme defined by design-based or model-based strategies. Here, we aim to both quantify SOC budgets (i.e., how much?) and spatialize SOC stocks (i.e., where?) at the regional scale (Southern Belgium) based on data from the SMN described above. To this end, we developed a computation procedure based on Digital Soil Mapping techniques and stochastic (Monte Carlo) simulations allowing the estimation of multiple (10,000) independent spatialized datasets. This procedure accounts for the uncertainties associated with the estimation of both (i) SOC stocks at the pixel scale and (ii) the model parameters.
Based on these 10,000 individual realizations of the spatial model, mean SOC stocks and confidence intervals can be then computed at

  17. A state-space modeling approach to estimating canopy conductance and associated uncertainties from sap flux density data.

    Science.gov (United States)

    Bell, David M; Ward, Eric J; Oishi, A Christopher; Oren, Ram; Flikkema, Paul G; Clark, James S

    2015-07-01

    Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as canopy conductance and transpiration. To address this need, we developed a hierarchical Bayesian State-Space Canopy Conductance (StaCC) model linking canopy conductance and transpiration to tree sap flux density from a 4-year experiment in the North Carolina Piedmont, USA. Our model builds on existing ecophysiological knowledge, but explicitly incorporates uncertainty in canopy conductance, internal tree hydraulics and observation error to improve estimation of canopy conductance responses to atmospheric drought (i.e., vapor pressure deficit), soil drought (i.e., soil moisture) and above canopy light. Our statistical framework not only predicted sap flux observations well, but it also allowed us to simultaneously gap-fill missing data as we made inference on canopy processes, marking a substantial advance over traditional methods. The predicted and observed sap flux data were highly correlated (mean sensor-level Pearson correlation coefficient = 0.88). Variations in canopy conductance and transpiration associated with environmental variation across days to years were many times greater than the variation associated with model uncertainties. Because some variables, such as vapor pressure deficit and soil moisture, were correlated at the scale of days to weeks, canopy conductance responses to individual environmental variables were difficult to interpret in isolation. Still, our results highlight the importance of accounting for uncertainty in models of ecophysiological and ecosystem function where the process of interest, canopy conductance in this case, is not observed directly. The StaCC modeling

  18. An approach for estimating measurement uncertainty in medical laboratories using data from long-term quality control and external quality assessment schemes.

    Science.gov (United States)

    Padoan, Andrea; Antonelli, Giorgia; Aita, Ada; Sciacovelli, Laura; Plebani, Mario

    2017-10-26

    The present study was prompted by the ISO 15189 requirements that medical laboratories should estimate measurement uncertainty (MU). The method used to estimate MU included: (a) identification of quantitative tests, (b) classification of tests in relation to their clinical purpose, and (c) identification of criteria to estimate the different MU components. Imprecision was estimated using long-term internal quality control (IQC) results of the year 2016, while external quality assessment scheme (EQAS) results obtained in the period 2015-2016 were used to estimate bias and bias uncertainty. A total of 263 measurement procedures (MPs) were analyzed. On the basis of test purpose, in 51 MPs imprecision alone was used to estimate MU; of the remaining MPs, the bias component was not estimable for 22 because the EQAS results did not provide reliable statistics. For a total of 28 MPs, two or more MU values were calculated on the basis of analyte concentration levels. Overall, results showed that the uncertainty of bias is a minor contributor to MU, the bias component itself being the most relevant contributor for all the studied sample matrices. The model chosen for MU estimation allowed us to derive a standardized approach for bias calculation, with respect to the fitness for purpose of test results. Measurement uncertainty estimation could readily be implemented in medical laboratories as a useful tool for monitoring the analytical quality of test results, since MU values are calculated using a combination of long-term IQC imprecision results and bias based on EQAS results.

  19. Estimation of balance uncertainty using Direct Monte Carlo Simulation (DSMC) on a CPU-GPU architecture

    CSIR Research Space (South Africa)

    Bidgood, Peter M

    2017-01-01

    The estimation of balance uncertainty using conventional statistical and error propagation methods has been found to be both approximate and laborious to the point of being untenable. Direct Simulation by Monte Carlo (DSMC) has been shown...

  20. Routine internal- and external-quality control data in clinical laboratories for estimating measurement and diagnostic uncertainty using GUM principles.

    Science.gov (United States)

    Magnusson, Bertil; Ossowicki, Haakan; Rienitz, Olaf; Theodorsson, Elvar

    2012-05-01

    Healthcare laboratories are increasingly joining into larger laboratory organizations encompassing several physical laboratories. This creates important new opportunities for re-defining the concept of a 'laboratory' to encompass all laboratories and measurement methods measuring the same measurand for a population of patients. In order to make measurement results comparable, bias should be minimized or eliminated and measurement uncertainty properly evaluated for all methods used for a particular patient population. The measurement as well as the diagnostic uncertainty can be evaluated from internal and external quality control results using GUM principles. In this paper the uncertainty evaluations are described in detail using only two main components, within-laboratory reproducibility and the uncertainty of the bias component, according to a Nordtest guideline. The evaluation is exemplified for the determination of creatinine in serum for a conglomerate of laboratories, expressed both in absolute units (μmol/L) and in relative terms (%). An expanded measurement uncertainty of 12 μmol/L was estimated for concentrations of creatinine below 120 μmol/L, and of 10% for concentrations above 120 μmol/L. The diagnostic uncertainty encompasses both measurement uncertainty and biological variation, and can be estimated for a single value and for a difference. The diagnostic uncertainty of the difference between two samples from the same patient was determined to be 14 μmol/L for concentrations of creatinine below 100 μmol/L and 14% for concentrations above 100 μmol/L.
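
The Nordtest-style evaluation from IQC and EQA data can be sketched as follows; the numbers are illustrative, not the creatinine results above:

```python
import math

# Nordtest TR 537 approach (sketch): combined standard uncertainty
# u_c = sqrt(u_Rw^2 + u_bias^2), expanded U = 2*u_c (~95% coverage).

u_Rw = 3.0   # % within-laboratory reproducibility from long-term IQC

# Bias component from EQA rounds: RMS of observed deviations from the
# assigned values, plus the uncertainty of the assigned values themselves.
biases = [1.5, -2.0, 2.5, 0.5]   # % deviations in four EQA rounds
rms_bias = math.sqrt(sum(b * b for b in biases) / len(biases))
u_Cref = 0.8                     # % uncertainty of EQA assigned values

u_bias = math.sqrt(rms_bias ** 2 + u_Cref ** 2)
u_combined = math.sqrt(u_Rw ** 2 + u_bias ** 2)
U = 2 * u_combined
print(round(u_bias, 2), round(u_combined, 2), round(U, 1))
```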

  1. Estimation of Uncertainty in Tracer Gas Measurement of Air Change Rates

    Directory of Open Access Journals (Sweden)

    Atsushi Iizuka

    2010-12-01

    Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is complicated by the fact that many buildings are not a single fully mixed zone, so many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of air change rates in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of
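
For the single fully mixed zone that the multi-zone analysis generalizes, the tracer-gas estimate reduces to the textbook steady-state relation; the sketch below uses that relation with invented numbers, not the paper's multi-zone method:

```python
import math

# Steady state in one fully mixed zone under constant dosing:
# C = q / (n * V)  =>  n = q / (C * V)
q = 0.50    # tracer emission rate, mg/h (assumed doser output)
V = 50.0    # zone volume, m^3
C = 0.02    # measured steady-state tracer concentration, mg/m^3

n = q / (C * V)   # air change rate, 1/h

# Relative uncertainties of the dose rate and the sampled
# concentration combine by root-sum-of-squares for this quotient.
u_q_rel, u_C_rel = 0.05, 0.10
u_n = n * math.sqrt(u_q_rel ** 2 + u_C_rel ** 2)
print(n, u_n)
```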

  2. Estimation of uncertainties from missing higher orders in perturbative calculations

    International Nuclear Information System (INIS)

    Bagnaschi, E.

    2015-05-01

    In this proceeding we present the results of our recent study (arXiv:1409.5036) of the statistical performance of two different approaches to the estimation of Missing Higher-Order Uncertainties (MHOUs) (arXiv:1307.1843) in perturbation theory: Scale Variation (SV) and the Bayesian model of Cacciari and Houdeau (CH) (arXiv:1105.5152), which we also extend to observables with initial-state hadrons. The behavior of the models is determined by analyzing, on a wide set of observables, how successful the MHOU intervals they produce are in predicting the next orders. We observe that the Bayesian model behaves consistently, producing intervals at 68% Degree of Belief (DoB) comparable with the scale-variation intervals for a rescaling factor r larger than 2 and closer to 4. Concerning SV, our analysis allows the derivation of a heuristic Confidence Level (CL) for the intervals. We find that assigning a CL of 68% to the intervals obtained with the conventional choice of varying the scales within a factor of two of the central scale could potentially lead to an underestimation of the uncertainties in the case of observables with initial-state hadrons.

  3. Evaluating uncertainty in 7Be-based soil erosion estimates: an experimental plot approach

    Science.gov (United States)

    Blake, Will; Taylor, Alex; Abdelli, Wahid; Gaspar, Leticia; Barri, Bashar Al; Ryken, Nick; Mabit, Lionel

    2014-05-01

    Soil erosion remains a major concern for the international community and there is a growing need to improve the sustainability of agriculture to support future food security. High resolution soil erosion data are a fundamental requirement for underpinning soil conservation and management strategies but representative data on soil erosion rates are difficult to achieve by conventional means without interfering with farming practice and hence compromising the representativeness of results. Fallout radionuclide (FRN) tracer technology offers a solution since FRN tracers are delivered to the soil surface by natural processes and, where irreversible binding can be demonstrated, redistributed in association with soil particles. While much work has demonstrated the potential of short-lived 7Be (half-life 53 days), particularly in quantification of short-term inter-rill erosion, less attention has focussed on sources of uncertainty in derived erosion measurements and sampling strategies to minimise these. This poster outlines and discusses potential sources of uncertainty in 7Be-based soil erosion estimates and the experimental design considerations taken to quantify these in the context of a plot-scale validation experiment. Traditionally, gamma counting statistics have been the main element of uncertainty propagated and reported but recent work has shown that other factors may be more important such as: (i) spatial variability in the relaxation mass depth that describes the shape of the 7Be depth distribution for an uneroded point; (ii) spatial variability in fallout (linked to rainfall patterns and shadowing) over both reference site and plot; (iii) particle size sorting effects; (iv) preferential mobility of fallout over active runoff contributing areas. To explore these aspects in more detail, a plot of 4 x 35 m was ploughed and tilled to create a bare, sloped soil surface at the beginning of winter 2013/2014 in southwest UK. 
The lower edge of the plot was bounded by

  4. The impact of a and b value uncertainty on loss estimation in the reinsurance industry

    Directory of Open Access Journals (Sweden)

    R. Streit

    2000-06-01

    In the reinsurance industry different probabilistic models are currently used for seismic risk analysis. A credible loss estimation for the insured values depends on the seismic hazard analysis and on the vulnerability functions of the given structures. Besides attenuation and local soil amplification, the earthquake occurrence model (often represented by the Gutenberg and Richter relation) is a key element of the analysis. However, earthquake catalogues are usually incomplete, the time of observation is too short and the data themselves contain errors. Therefore, a and b values can only be estimated with uncertainties. Knowledge of their variation provides a valuable input for earthquake risk analysis, because it allows the probability distribution of expected losses (expressed as Average Annual Loss (AAL)) to be modelled. The variations of a and b have a direct effect on the estimated exceedance probability and consequently on the calculated loss level. This effect is best illustrated by exceedance probability versus loss level and AAL versus magnitude graphs. The sensitivity of average annual losses to different a to b ratios and magnitudes is obvious. The estimation of the variation of a and b and the quantification of the sensitivity of calculated losses are fundamental for optimal earthquake risk management. Ignoring these uncertainties means that risk management decisions neglect possible variations in the earthquake loss estimates.
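
Propagating a- and b-value uncertainty into the annual exceedance rate can be sketched with a small Monte Carlo; all parameter values are illustrative, not calibrated to any catalogue:

```python
import random
import statistics

random.seed(1)
# Gutenberg-Richter: log10 N(>=M) = a - b*M, with uncertain a and b.
a_mean, a_sd = 4.0, 0.2    # illustrative recurrence parameters
b_mean, b_sd = 1.0, 0.05

m = 6.0
rates = []
for _ in range(20_000):
    a = random.gauss(a_mean, a_sd)
    b = random.gauss(b_mean, b_sd)
    rates.append(10 ** (a - b * m))   # annual rate of events with M >= 6

# The central (mean-parameter) rate understates the expected rate:
# the exponential form makes the distribution of rates right-skewed.
central = 10 ** (a_mean - b_mean * m)
print(central, statistics.mean(rates), statistics.stdev(rates))
```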

  5. Estimation of uncertainty of a reference material for proficiency testing for the determination of total mercury in fish in nature

    International Nuclear Information System (INIS)

    Santana, L V; Sarkis, J E S; Ulrich, J C; Hortellani, M A

    2015-01-01

    We provide uncertainty estimates for the homogeneity and stability studies of a reference material used in a proficiency test for the determination of total mercury in fresh fish muscle tissue. Stability was estimated by linear regression and homogeneity by ANOVA. The results indicate that the reference material is both homogeneous and chemically stable over the short term. The total mercury concentration of the muscle tissue, with expanded uncertainty, was 0.294 ± 0.089 μg g⁻¹
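
Homogeneity assessment by ANOVA, as used above, reduces to comparing between-bottle and within-bottle mean squares (the ISO Guide 35 style of evaluation); a sketch with invented duplicate measurements:

```python
import statistics

# Between-bottle homogeneity by one-way ANOVA (sketch).
# Hypothetical duplicate Hg measurements (ug/g) on three bottles:
bottles = [[0.28, 0.30], [0.32, 0.34], [0.29, 0.31]]

n_rep = len(bottles[0])
grand = statistics.mean([x for b in bottles for x in b])

ms_among = n_rep * sum((statistics.mean(b) - grand) ** 2
                       for b in bottles) / (len(bottles) - 1)
ms_within = sum(sum((x - statistics.mean(b)) ** 2 for x in b)
                for b in bottles) / (len(bottles) * (n_rep - 1))

# Between-bottle variance, clipped at zero when MS_among < MS_within;
# its square root is the homogeneity contribution to uncertainty.
s_bb2 = max((ms_among - ms_within) / n_rep, 0.0)
u_hom = s_bb2 ** 0.5
print(round(u_hom, 4))
```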

  6. AN OVERVIEW OF THE UNCERTAINTY ANALYSIS, SENSITIVITY ANALYSIS, AND PARAMETER ESTIMATION (UA/SA/PE) API AND HOW TO IMPLEMENT IT

    Science.gov (United States)

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...

  7. Uncertainty estimation and ensemble forecast with a chemistry-transport model - Application to air-quality modeling and simulation

    International Nuclear Information System (INIS)

    Mallet, Vivien

    2005-01-01

    The thesis deals with the evaluation of a chemistry-transport model, not primarily through classical comparisons to observations, but through the estimation of its a priori uncertainties due to input data, model formulation and numerical approximations. These three uncertainty sources are studied respectively on the basis of Monte Carlo simulations, multi-model simulations and numerical scheme inter-comparisons. A high uncertainty is found in output ozone concentrations. A solution for overcoming the limitations due to this uncertainty is ensemble forecasting. By combining several models (up to forty-eight) on the basis of past observations, the forecast can be significantly improved. This work has also led to the development of the innovative modelling system Polyphemus. (author) [fr
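
Combining models against past observations can be sketched as a least-squares weighting; the two forecast series below are invented for illustration, not Polyphemus output:

```python
# Least-squares ensemble combination (sketch): find weights w1, w2 that
# minimize sum (w1*f1 + w2*f2 - obs)^2 over past observations.
f1 = [40.0, 55.0, 62.0, 48.0, 70.0]   # model 1 ozone forecasts
f2 = [50.0, 60.0, 58.0, 52.0, 80.0]   # model 2 ozone forecasts
obs = [46.0, 58.0, 60.0, 50.0, 76.0]  # observations

# Normal equations for two weights, solved by hand (2x2 system).
s11 = sum(a * a for a in f1)
s22 = sum(b * b for b in f2)
s12 = sum(a * b for a, b in zip(f1, f2))
y1 = sum(a * o for a, o in zip(f1, obs))
y2 = sum(b * o for b, o in zip(f2, obs))
det = s11 * s22 - s12 * s12
w1 = (y1 * s22 - y2 * s12) / det
w2 = (s11 * y2 - s12 * y1) / det

# The combined forecast can never be worse (in-sample) than either
# member alone, since each member is a special case of the weighting.
combined = [w1 * a + w2 * b for a, b in zip(f1, f2)]
rmse = (sum((c - o) ** 2 for c, o in zip(combined, obs)) / len(obs)) ** 0.5
rmse1 = (sum((a - o) ** 2 for a, o in zip(f1, obs)) / len(obs)) ** 0.5
print(w1, w2, rmse, rmse1)
```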

  8. A NEW METHOD FOR PREDICTING SURVIVAL AND ESTIMATING UNCERTAINTY IN TRAUMA PATIENTS

    Directory of Open Access Journals (Sweden)

    V. G. Schetinin

    2017-01-01

    The Trauma and Injury Severity Score (TRISS) is the current “gold” standard for screening a patient’s condition for purposes of predicting survival probability. More than 40 years of TRISS practice have revealed a number of problems, particularly (1) unexplained fluctuation of predicted values caused by aggregation of screening tests, and (2) low accuracy of uncertainty interval estimates. We developed a new method, made available to practitioners as a web calculator, to reduce the negative effect of the factors given above. The method involves Bayesian methodology of statistical inference which, although computationally expensive, in theory provides the most accurate predictions. We implemented and tested this approach on a data set including 571,148 patients registered in the US National Trauma Data Bank (NTDB) with 1–20 injuries. These patients were distributed over the following categories: (1) 174,647 with 1 injury, (2) 381,137 with 2–10 injuries, and (3) 15,364 with 11–20 injuries. Survival rates in each category were 0.977, 0.953, and 0.831, respectively. The proposed method improved prediction accuracy by 0.04%, 0.36%, and 3.64% (p-value <0.05) in categories 1, 2, and 3, respectively. Hosmer-Lemeshow statistics showed a significant improvement in the calibration of the new model. The uncertainty 2σ intervals were reduced from 0.628 to 0.569 for patients of the second category and from 1.227 to 0.930 for patients of the third category, both with p-value <0.005. The new method shows a statistically significant improvement (p-value <0.05) in the accuracy of predicting survival and estimating uncertainty intervals. The largest improvement was achieved for patients with 11–20 injuries. The method is available to practitioners as a web calculator http://www.traumacalc.org.
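
For context, the baseline TRISS model the record compares against is a logistic regression on the Revised Trauma Score, Injury Severity Score and an age index. A sketch using commonly cited revised coefficients for blunt trauma (verify against current references before any real use):

```python
import math

# Commonly cited revised TRISS coefficients for blunt trauma (assumed).
B0, B_RTS, B_ISS, B_AGE = -0.4499, 0.8085, -0.0835, -1.7430

def triss_ps(rts, iss, age_index):
    """P(survival) via the TRISS logistic model.
    age_index is 0 for age < 55, 1 otherwise."""
    b = B0 + B_RTS * rts + B_ISS * iss + B_AGE * age_index
    return 1.0 / (1.0 + math.exp(-b))

# Mildly injured young adult: normal RTS (7.84), ISS 10, age < 55.
ps = triss_ps(rts=7.84, iss=10, age_index=0)
print(round(ps, 3))
```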

  9. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty.

    Science.gov (United States)

    Lash, Timothy L

    2007-11-26

    The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations. 
The latter approach is
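The draw-and-adjust loop described in the abstract can be sketched directly. The sketch assumes one bias source (an uncontrolled confounder whose confounder-outcome relative risk is drawn from a lognormal distribution); the distributions and numbers below are chosen for illustration and are not those of the published analysis.

```python
import math
import random

random.seed(1)
observed_hr = 2.6  # conventional point estimate (hazard ratio)

adjusted = []
for _ in range(20000):
    # Draw a bias parameter: relative risk linking an uncontrolled
    # confounder to the outcome (illustrative distribution).
    confounding_rr = math.exp(random.gauss(math.log(1.5), 0.4))
    # Adjust the observed estimate for the sampled bias, then add back
    # conventional random error on the log scale.
    ln_adj = math.log(observed_hr / confounding_rr) + random.gauss(0.0, 0.33)
    adjusted.append(math.exp(ln_adj))

adjusted.sort()
median = adjusted[len(adjusted) // 2]
lo = adjusted[int(0.025 * len(adjusted))]
hi = adjusted[int(0.975 * len(adjusted))]
print(f"median {median:.2f}, 95% simulation interval ({lo:.2f}, {hi:.2f})")
```

The percentiles of the frequency distribution of adjusted results give the simulation interval; because it carries both systematic and random error, it is wider than the conventional confidence interval, as reported in the study.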

  10. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty

    Directory of Open Access Journals (Sweden)

    Lash Timothy L

    2007-11-01

    Full Text Available Abstract Background The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. Methods For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. Results The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Conclusion Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a

  11. PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.

    Science.gov (United States)

    Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude

    2015-04-27

    Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the influence of pocket estimation methods on druggability predictions by comparing statistical models constructed from pockets estimated using different methods: proximity of either 4 or 5.5 Å to a cocrystallized ligand, or the DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors, combined with a selection of the most stable and efficient models across the different pocket estimation methods. PockDrug retains the best combinations of three pocket properties that impact druggability: geometry, hydrophobicity, and aromaticity. It achieves an average accuracy of 87.9% ± 4.7% on a test set and exhibits higher accuracy (∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.

  12. WE-B-19A-01: SRT II: Uncertainties in SRT

    International Nuclear Information System (INIS)

    Dieterich, S; Schlesinger, D; Geneser, S

    2014-01-01

    SRS delivery has undergone major technical changes in the last decade, transitioning from predominantly frame-based treatment delivery to image-guided, frameless SRS. It is important for medical physicists working in SRS to understand the magnitude and sources of uncertainty involved in delivering SRS treatments for a multitude of technologies (Gamma Knife, CyberKnife, linac-based SRS and protons). Sources of SRS planning and delivery uncertainty include dose calculation, dose fusion, and intra- and inter-fraction motion. Dose calculations for small fields are particularly difficult because of the lack of electronic equilibrium and the greater effect of inhomogeneities within and near the PTV. Going frameless introduces greater setup uncertainty and allows for potentially increased intra- and inter-fraction motion. The increased use of multiple imaging modalities to determine the tumor volume necessitates (deformable) image and contour fusion, and the uncertainties introduced in the image registration process further contribute to overall treatment planning uncertainty. Each of these uncertainties must be quantified and their impact on treatment delivery accuracy understood. If necessary, the uncertainties may then be accounted for during treatment planning, either through techniques that make the uncertainty explicit or by the appropriate addition of PTV margins. Further complicating matters, the statistics of 1-5 fraction SRS treatments differ from traditional margin recipes relying on Poisson statistics. In this session, we will discuss uncertainties introduced during each step of the SRS treatment planning and delivery process and present margin recipes to appropriately account for such uncertainties. Learning Objectives: To understand the major contributors to the total delivery uncertainty in SRS for Gamma Knife, CyberKnife, and linac-based SRS. Learn the various uncertainties introduced by image fusion, deformable image registration, and contouring

  13. Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach

    KAUST Repository

    Ballal, Tarig

    2014-01-01

    This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.

  14. Uncertainty in estimating and mitigating industrial related GHG emissions

    International Nuclear Information System (INIS)

    El-Fadel, M.; Zeinati, M.; Ghaddar, N.; Mezher, T.

    2001-01-01

    Global climate change has been one of the challenging environmental concerns facing policy makers in the past decade. The characterization of the wide range of greenhouse gas emissions sources and sinks as well as their behavior in the atmosphere remains an on-going activity in many countries. Lebanon, being a signatory to the Framework Convention on Climate Change, is required to submit and regularly update a national inventory of greenhouse gas emissions sources and removals. Accordingly, an inventory of greenhouse gases from various sectors was conducted following the guidelines set by the United Nations Intergovernmental Panel on Climate Change (IPCC). The inventory indicated that the industrial sector contributes about 29% to the total greenhouse gas emissions divided between industrial processes and energy requirements at 12 and 17%, respectively. This paper describes major mitigation scenarios to reduce emissions from this sector based on associated technical, economic, environmental, and social characteristics. Economic ranking of these scenarios was conducted and uncertainty in emission factors used in the estimation process was emphasized. For this purpose, theoretical and experimental emission factors were used as alternatives to default factors recommended by the IPCC and the significance of resulting deviations in emission estimation is presented. (author)

  15. Conversion factor and uncertainty estimation for quantification of towed gamma-ray detector measurements in Tohoku coastal waters

    International Nuclear Information System (INIS)

    Ohnishi, S.; Thornton, B.; Kamada, S.; Hirao, Y.; Ura, T.; Odano, N.

    2016-01-01

    Factors to convert the count rate of a NaI(Tl) scintillation detector to the concentration of radioactive cesium in marine sediments are estimated for a towed gamma-ray detector system. The response of the detector to a unit concentration of radioactive cesium is calculated by Monte Carlo radiation transport simulation, considering the vertical profile of radioactive material measured in core samples. The conversion factors are acquired by integrating the contribution of each layer and are normalized by the concentration in the surface sediment layer. At the same time, the uncertainty of the conversion factors is formulated and estimated. The combined standard uncertainty of the radioactive cesium concentration measured by the towed gamma-ray detector is around 25 percent. The values of uncertainty, often referred to as relative root mean square errors in other works, between sediment core sampling measurements and towed detector measurements were 16 percent in the investigation made near the Abukuma River mouth and 5.2 percent in Sendai Bay, respectively. Most of the uncertainty is due to interpolation of the conversion factors between core samples and uncertainty of the detector's burial depth. The results of the towed measurements agree well with laboratory-analysed sediment samples. Also, the concentrations of radioactive cesium at the intersections of the survey lines are consistent. The consistency with the sampling results and between different lines' transects demonstrates the availability and reproducibility of the towed gamma-ray detector system.
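The combined standard uncertainty quoted above is obtained by adding independent relative components in quadrature (the GUM root-sum-of-squares rule). The component values in this sketch are illustrative examples, not the budget of the cited survey.

```python
import math

def combined_relative_uncertainty(components):
    # Root-sum-of-squares combination of independent relative
    # standard uncertainties (GUM approach).
    return math.sqrt(sum(u * u for u in components))

# Illustrative component budget for a towed-detector measurement
# (values are examples only):
components = {
    "detector response (Monte Carlo)": 0.05,
    "conversion-factor interpolation": 0.18,
    "burial depth of the detector":    0.15,
    "counting statistics":             0.06,
}
u_c = combined_relative_uncertainty(components.values())
print(f"combined relative standard uncertainty: {u_c:.0%}")
```

With budgets of this shape, the interpolation and burial-depth terms dominate, consistent with the abstract's statement that most of the uncertainty comes from those two sources.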

  16. Conversion factor and uncertainty estimation for quantification of towed gamma-ray detector measurements in Tohoku coastal waters

    Energy Technology Data Exchange (ETDEWEB)

    Ohnishi, S., E-mail: ohnishi@nmri.go.jp [National Maritime Research Institute, 6-38-1, Shinkawa, Mitaka, Tokyo 181-0004 (Japan); Thornton, B. [Institute of Industrial Science, The University of Tokyo, 4-6-1, Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Kamada, S.; Hirao, Y.; Ura, T.; Odano, N. [National Maritime Research Institute, 6-38-1, Shinkawa, Mitaka, Tokyo 181-0004 (Japan)

    2016-05-21

    Factors to convert the count rate of a NaI(Tl) scintillation detector to the concentration of radioactive cesium in marine sediments are estimated for a towed gamma-ray detector system. The response of the detector to a unit concentration of radioactive cesium is calculated by Monte Carlo radiation transport simulation, considering the vertical profile of radioactive material measured in core samples. The conversion factors are acquired by integrating the contribution of each layer and are normalized by the concentration in the surface sediment layer. At the same time, the uncertainty of the conversion factors is formulated and estimated. The combined standard uncertainty of the radioactive cesium concentration measured by the towed gamma-ray detector is around 25 percent. The values of uncertainty, often referred to as relative root mean square errors in other works, between sediment core sampling measurements and towed detector measurements were 16 percent in the investigation made near the Abukuma River mouth and 5.2 percent in Sendai Bay, respectively. Most of the uncertainty is due to interpolation of the conversion factors between core samples and uncertainty of the detector's burial depth. The results of the towed measurements agree well with laboratory-analysed sediment samples. Also, the concentrations of radioactive cesium at the intersections of the survey lines are consistent. The consistency with the sampling results and between different lines' transects demonstrates the availability and reproducibility of the towed gamma-ray detector system.

  17. Estimation of environment-related properties of chemicals for design of sustainable processes: development of group-contribution+ (GC+) property models and uncertainty analysis.

    Science.gov (United States)

    Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul

    2012-11-26

    The aim of this work is to develop group-contribution+ (GC+) property models (combining the group-contribution (GC) method and the atom connectivity index (CI) method) to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of the estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine the parameters of the property models and an uncertainty analysis step to establish statistical information about the quality of the parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values for a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the USEtox database are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and the atom connectivity index method have been considered. In total, 22 environment-related properties have been modeled and analyzed: the fathead minnow 96-h LC50, Daphnia magna 48-h LC50, oral rat LD50, aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, and emissions (carcinogenic and noncarcinogenic) to urban air, continental rural air, continental fresh water, continental seawater, continental natural soil, and continental agricultural soil. 
The application

  18. Instrument uncertainty predictions

    International Nuclear Information System (INIS)

    Coutts, D.A.

    1991-07-01

    The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty

  19. Application Of Global Sensitivity Analysis And Uncertainty Quantification In Dynamic Modelling Of Micropollutants In Stormwater Runoff

    DEFF Research Database (Denmark)

    Vezzaro, Luca; Mikkelsen, Peter Steen

    2012-01-01

    of uncertainty in a conceptual lumped dynamic stormwater runoff quality model that is used in a study catchment to estimate (i) copper loads, (ii) compliance with dissolved Cu concentration limits on stormwater discharge and (iii) the fraction of Cu loads potentially intercepted by a planned treatment facility...

  20. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamic excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the param...

  1. Conclusions on measurement uncertainty in microbiology.

    Science.gov (United States)

    Forster, Lynne I

    2009-01-01

    Since its first issue in 1999, testing laboratories wishing to comply with all the requirements of ISO/IEC 17025 have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate, with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were ≥20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were <20, estimated uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.
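The replicate-based uncertainty estimates discussed above are typically computed as a reproducibility standard deviation of log10 colony counts from duplicate determinations, in the style of ISO/TS 19036. The formula below follows that general approach; the duplicate counts are invented for illustration.

```python
import math

def uncertainty_from_duplicates(pairs):
    # Reproducibility standard deviation from duplicate log10 colony
    # counts: s = sqrt(sum(d_i^2) / (2 * n)), where d_i is the
    # difference between the two log10 counts of duplicate i
    # (ISO/TS 19036-style estimate).
    diffs = [math.log10(a) - math.log10(b) for a, b in pairs]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

# Invented duplicate plate counts for one method / matrix combination:
duplicate_counts = [(52, 48), (110, 95), (33, 41), (207, 188), (76, 70)]
s_log = uncertainty_from_duplicates(duplicate_counts)
expanded = 2 * s_log  # k = 2 expanded uncertainty (~95% confidence)
print(f"u = {s_log:.3f} log10 units, U(k=2) = {expanded:.3f}")
```

For counts below about 20 colonies, this replicate-based estimate is dominated by counting statistics, which is why it converges on the Poisson-based confidence limits mentioned in the abstract.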

  2. The impact of rock and fluid uncertainties in the estimation of saturation and pressure from a 4D petro elastic inversion

    International Nuclear Information System (INIS)

    Pazetti, Bruno; Davolio, Alessandra; Schiozer, Denis J

    2015-01-01

    The integration of 4D seismic (4DS) attributes and reservoir simulation is used to reduce risks in the management of petroleum fields. One possible alternative is to work in the saturation and pressure domain. In this case, we use estimations of saturation and pressure changes from 4D seismic data as input in history matching processes to yield more reliable production predictions in simulation models. The estimation of dynamic changes from 4DS depends on knowledge of reservoir rock and fluid properties that are themselves uncertain in the estimation process. This paper presents a study of the impact of rock and fluid uncertainties on the estimation of saturation and pressure changes achieved through a 4D petro-elastic inversion; impact here means that the saturation and pressure estimates can be perturbed by the rock and fluid uncertainties. The motivation for this study comes from the necessity to estimate uncertainties in saturation and pressure variation so as to incorporate them in history matching procedures, avoiding the use of deterministic values from 4DS, which may not be reliable. The study is performed using a synthetic case with a known response, from which it is possible to show that the errors of estimated saturation and pressure depend on the magnitude of the rock and fluid uncertainties jointly with the reservoir dynamic changes. The main contribution of this paper is to show how uncertain reservoir properties can affect the reliability of pressure and saturation estimation from 4DS and how it depends on reservoir changes induced by production. This information can be used in future projects which use quantitative inversion to integrate reservoir simulation and 4D seismic data. (paper)

  3. Quantifying uncertainties in the estimation of safety parameters by using bootstrapped artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Piercesare [MOX, Department of Mathematics, Polytechnic of Milan (Italy); Zio, Enrico [Department of Energy, Polytechnic of Milan, Via Ponzio 34/3, 20133 Milano (Italy)], E-mail: enrico.zio@polimi.it; Di Maio, Francesco [Department of Energy, Polytechnic of Milan, Via Ponzio 34/3, 20133 Milano (Italy)

    2008-12-15

    For licensing purposes, safety cases of Nuclear Power Plants (NPPs) must be presented to the Regulatory Authority with the necessary confidence in the models used to describe the plant safety behavior. In principle, this requires the repetition of a large number of model runs to account for the uncertainties inherent in the model description of the true plant behavior. The present paper propounds the use of bootstrapped Artificial Neural Networks (ANNs) for performing the numerous model output calculations needed for estimating safety margins with appropriate confidence intervals. Account is given both to the uncertainties inherent in the plant model and to those introduced by the ANN regression models used for performing the repeated safety parameter evaluations. The proposed framework of analysis is first illustrated with reference to a simple analytical model and then to the estimation of the safety margin on the maximum fuel cladding temperature reached during a complete group distribution header blockage scenario in an RBMK-1500 nuclear reactor. The results are compared with those obtained by a traditional parametric approach.
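The bootstrap idea behind the method is simple: refit the regression model on resampled versions of the training data and take percentiles of the resulting predictions as a confidence interval. As a minimal sketch, a least-squares line stands in for the trained neural network, and the synthetic "plant" data are invented; the resampling-and-percentile machinery is the same in either case.

```python
import random
import statistics

random.seed(2)

# Synthetic data: a safety parameter vs. an operating condition.
xs = [i / 10 for i in range(30)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.3) for x in xs]

def fit_and_predict(x_data, y_data, x_new):
    # Least-squares line standing in for the trained regression model
    # (the paper uses ANNs; a line keeps the sketch self-contained).
    mx, my = statistics.fmean(x_data), statistics.fmean(y_data)
    slope = sum((x - mx) * (y - my) for x, y in zip(x_data, y_data)) \
            / sum((x - mx) ** 2 for x in x_data)
    return my + slope * (x_new - mx)

# Bootstrap: refit on resampled pairs to get a prediction distribution.
preds = []
n = len(xs)
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    preds.append(fit_and_predict([xs[i] for i in idx],
                                 [ys[i] for i in idx], 2.0))

preds.sort()
lo, hi = preds[25], preds[974]
print(f"prediction at x=2.0: 95% bootstrap interval ({lo:.2f}, {hi:.2f})")
```

Each bootstrap replicate retrains the surrogate, so the interval reflects both the noise in the data and the variability of the regression fit, which is the double accounting of uncertainty the abstract describes.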

  4. Estimation of parameter uncertainty for an activated sludge model using Bayesian inference: a comparison with the frequentist method.

    Science.gov (United States)

    Zonta, Zivko J; Flotats, Xavier; Magrí, Albert

    2014-08-01

    The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal value within a confidence region (i.e. frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches based on the consideration of the model parameters as probability distributions (i.e. Bayesian inference), may be of interest. The aim of this work is to apply (and compare) both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model, which considers intracellular storage and biomass growth, simultaneously. Practical identifiability was addressed exclusively considering respirometric profiles based on the oxygen uptake rate and with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was thus estimated according to both the Bayesian and frequentist inferential procedures. Results were compared in order to evidence the strengths and weaknesses of both approaches. Since it was demonstrated that Bayesian inference could be reduced to a frequentist approach under particular hypotheses, the former can be considered as a more generalist methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.

  5. A combination Kalman filter approach for State of Charge estimation of lithium-ion battery considering model uncertainty

    International Nuclear Information System (INIS)

    Li, Yanwen; Wang, Chao; Gong, Jinfeng

    2016-01-01

    An accurate battery State of Charge estimation plays an important role in battery electric vehicles. This paper makes two contributions to the existing literature. (1) A recursive least squares method with a fuzzy adaptive forgetting factor has been presented to update the model parameters close to the real value more quickly. (2) The statistical information of the innovation sequence obeying a chi-square distribution has been introduced to identify model uncertainty, and a novel combination algorithm of a strong tracking unscented Kalman filter and an adaptive unscented Kalman filter has been developed to estimate SOC (State of Charge). Experimental results indicate that the novel algorithm has a good performance in estimating the battery SOC against initial SOC errors and voltage sensor drift. A comparison with unscented Kalman filter-based algorithms and adaptive unscented Kalman filter-based algorithms shows that the proposed SOC estimation method has better accuracy, robustness and convergence behavior. - Highlights: • Recursive least squares method with fuzzy adaptive forgetting factor is presented. • The innovation obeying chi-square distribution is used to identify uncertainty. • A combination Kalman filter approach for State of Charge estimation is presented. • The performance of the proposed method is verified by comparison results.
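The chi-square innovation test mentioned above can be illustrated on a scalar filter: the normalized innovation squared is compared against a chi-square threshold, and when it is exceeded the filter inflates its predicted covariance (a simple analogue of the strong-tracking / adaptive switching in the paper). Everything below, including the noise values and the one-state SOC model, is an illustrative toy, not the paper's algorithm.

```python
import random

random.seed(3)

q, r = 1e-4, 0.04     # process / measurement noise variances (toy values)
chi2_95 = 3.84        # 95% threshold for chi-square with 1 dof
x_est, p = 0.9, 1.0   # initial SOC estimate and variance
true_soc = 0.8        # the filter starts with an initial SOC error

for step in range(50):
    z = true_soc + random.gauss(0.0, 0.2)  # noisy voltage-derived SOC
    p_pred = p + q                          # predict
    innovation = z - x_est
    s = p_pred + r                          # innovation variance
    if innovation * innovation / s > chi2_95:
        p_pred *= 10.0   # inflate covariance on detected model mismatch
        s = p_pred + r
    k = p_pred / s                          # Kalman gain
    x_est += k * innovation                 # update
    p = (1.0 - k) * p_pred

print(f"final estimate {x_est:.3f} (true {true_soc})")
```

The large initial variance lets the filter pull away from the wrong initial SOC within a few steps, and the inflation branch is what keeps the gain from collapsing when the model stops matching the data.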

  6. Predictive Uncertainty Estimation in Water Demand Forecasting Using the Model Conditional Processor

    Directory of Open Access Journals (Sweden)

    Amos O. Anele

    2018-04-01

    Full Text Available In a previous paper, a number of potential models for short-term water demand (STWD) prediction were analysed to find the ones with the best fit. The results obtained in Anele et al. (2017) showed that hybrid models may be considered the accurate and appropriate forecasting models for STWD prediction. However, such a best single-valued forecast does not guarantee reliable and robust decisions, which can be properly obtained via model uncertainty processors (MUPs). MUPs provide an estimate of the full predictive densities and not only the single-valued expected prediction. Amongst other MUPs, the purpose of this paper is to use the multi-variate version of the model conditional processor (MCP), proposed by Todini (2008), to demonstrate how the estimation of the predictive probability conditional on a number of relatively good predictive models may improve our knowledge, thus reducing the predictive uncertainty (PU) when forecasting into the unknown future. Through the MCP approach, the probability distribution of the future water demand can be assessed depending on the forecast provided by one or more deterministic forecasting models. Based on an average weekly data set of 168 h, the probability density of the future demand is built conditional on three models’ predictions, namely the autoregressive-moving average (ARMA), feed-forward back propagation neural network (FFBP-NN) and hybrid model (i.e., combined forecast from ARMA and FFBP-NN). The results obtained show that MCP may be effectively used for real-time STWD prediction since it brings out the PU connected to its forecast, and such information could help water utilities estimate the risk connected to a decision.

  7. Estimation of a quantity of interest in uncertainty analysis: Some help from Bayesian decision theory

    International Nuclear Information System (INIS)

    Pasanisi, Alberto; Keller, Merlin; Parent, Eric

    2012-01-01

    In the context of risk analysis under uncertainty, we focus here on the problem of estimating a so-called quantity of interest of an uncertainty analysis problem, i.e. a given feature of the probability distribution function (pdf) of the output of a deterministic model with uncertain inputs. We stay here in a fully probabilistic setting. A common problem is how to account for the epistemic uncertainty tainting the parameters of the probability distribution of the inputs. In standard practice, this uncertainty is often neglected (the plug-in approach). When a specific uncertainty assessment is made on the basis of the available information (expertise and/or data), a common solution consists in marginalizing the joint distribution of both observable inputs and parameters of the probabilistic model (i.e. computing the predictive pdf of the inputs), then propagating it through the deterministic model. We reinterpret this approach in the light of Bayesian decision theory, and show that this practice leads the analyst to adopt implicitly a specific loss function which may be inappropriate for the problem under investigation, and suboptimal from a decisional perspective. These concepts are illustrated on a simple numerical example concerning a case of flood risk assessment.
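The contrast between the plug-in and predictive approaches described above is easy to demonstrate numerically: fix the input-distribution parameter at its point estimate in one run, and marginalize over its epistemic uncertainty in the other, then compare the same quantity of interest (here an exceedance probability). The model and all numbers are a toy illustration.

```python
import random
import statistics

random.seed(4)

def model(x):
    # Trivial deterministic model with one uncertain input.
    return 2.0 * x

# Input X ~ Normal(mu, 1), with epistemic uncertainty on mu itself:
# mu ~ Normal(5, 0.5), reflecting limited data (toy assumption).
threshold = 14.0
n = 200_000

# Plug-in approach: fix mu at its point estimate, then propagate.
plug_in = statistics.fmean(
    model(random.gauss(5.0, 1.0)) > threshold for _ in range(n))

# Predictive approach: draw mu each time (marginalize), then propagate.
predictive = statistics.fmean(
    model(random.gauss(random.gauss(5.0, 0.5), 1.0)) > threshold
    for _ in range(n))

print(f"P(output > {threshold}): plug-in {plug_in:.4f}, "
      f"predictive {predictive:.4f}")
```

Because marginalizing widens the input distribution, the predictive exceedance probability comes out larger than the plug-in one; which of the two is the "right" answer is exactly the loss-function question the paper raises.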

  8. A New Form of Nondestructive Strength-Estimating Statistical Models Accounting for Uncertainty of Model and Aging Effect of Concrete

    International Nuclear Information System (INIS)

    Hong, Kee Jeung; Kim, Jee Sang

    2009-01-01

    As concrete ages, the surrounding environment is expected to have a growing influence on the concrete. Since not all impacts of the environment can be considered in the strength-estimating model of a nondestructive concrete test, increasing concrete age leads to growing uncertainty in the strength-estimating model, and the variance of the model error therefore increases. It is necessary to include those impacts in the probability model of concrete strength obtained from nondestructive tests so as to build a more accurate reliability model for structural performance evaluation. This paper reviews and categorizes the existing strength-estimating statistical models for nondestructive concrete tests, and suggests a new form of strength-estimating statistical model that properly reflects the model uncertainty due to aging of the concrete. This new form of statistical model will lay the foundation for more accurate structural performance evaluation.

  9. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study.

    Science.gov (United States)

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-08-23

    The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation against the health effect uncertainties. The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only a minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. When estimating life-expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful consideration.
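
    The rank-order correlation sensitivity analysis used here can be sketched as follows; the toy impact model and its parameter distributions are invented for illustration, not the study's actual life-table model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Toy impact model: years of life lost ~ coefficient * exposure (illustrative).
beta = rng.normal(0.6, 0.2, n)       # exposure-response coefficient (dominant)
exposure = rng.normal(8.0, 1.0, n)   # population exposure, ug/m3
lag = rng.uniform(0, 1, n)           # lag weight (minor influence)
yoll = beta * exposure * (0.9 + 0.1 * lag)

def rank_corr(a, b):
    # Spearman rank-order correlation via Pearson correlation of ranks.
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

sens = {name: rank_corr(x, yoll)
        for name, x in [("beta", beta), ("exposure", exposure), ("lag", lag)]}
```

    The ranking of the correlations identifies which uncertain input dominates the output spread, mirroring how the study singled out the cardiopulmonary exposure-response coefficient.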

  10. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study

    Directory of Open Access Journals (Sweden)

    Jantunen Matti J

    2007-08-01

    Full Text Available Abstract Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation against the health effect uncertainties. Results The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only a minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. Conclusion When estimating life-expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful consideration.

  11. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders

    1990-01-01

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However, the uncertainty of the parameters … by a simulation study of a lightly damped single-degree-of-freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore, it is shown that the model errors may also contribute significantly to the uncertainty.
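
    The idea of extracting modal parameters from an identified auto-regressive model can be sketched as follows, using a plain least-squares AR(2) fit as a simplified stand-in for full ARMA identification (all system values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# True modal parameters of a lightly damped SDOF system (illustrative values).
fn, zeta, dt = 2.0, 0.01, 0.05           # natural freq [Hz], damping ratio, sample interval [s]
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)

# Equivalent AR(2) coefficients of the sampled random response.
a1 = 2 * np.exp(-zeta * wn * dt) * np.cos(wd * dt)
a2 = -np.exp(-2 * zeta * wn * dt)

# Simulate N response samples driven by white noise.
N = 20_000
x = np.zeros(N)
e = rng.normal(0, 1, N)
for t in range(2, N):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]

# Least-squares AR(2) fit (a simple stand-in for full ARMA identification).
X = np.column_stack([x[1:-1], x[:-2]])
coef, *_ = np.linalg.lstsq(X, x[2:], rcond=None)

# Modal parameters recovered from the estimated AR pole.
z = np.roots([1, -coef[0], -coef[1]])[0]   # one of the complex-conjugate poles
wn_hat = abs(np.log(z)) / dt
fn_hat = wn_hat / (2 * np.pi)
zeta_hat = -np.log(z).real / (wn_hat * dt)
```

    Repeating the fit over many simulated records (varying N and dt) is one way to study the statistical errors of the identified eigenfrequency and damping ratio.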

  12. Estimation and Uncertainty Analysis of Flammability Properties of Chemicals using Group-Contribution Property Models

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    Process safety studies and assessments rely on accurate property data. Flammability data like the lower and upper flammability limits (LFL and UFL) play an important role in quantifying the risk of fire and explosion. If experimental values are not available for the safety analysis due to cost … or time constraints, property prediction models like group contribution (GC) models can estimate flammability data. The estimation needs to be accurate, reliable and as time-efficient as possible. However, GC property prediction methods frequently lack rigorous uncertainty analysis. Hence … In this study, the MG-GC factors are estimated using a systematic data and model evaluation methodology in the following way: 1) Data: experimental flammability data is taken from the AIChE DIPPR 801 Database. 2) Initialization and sequential parameter estimation: an approximation using linear algebra provides …

  13. Measurement Uncertainty

    Science.gov (United States)

    Koch, Michael

    Measurement uncertainty is one of the key issues in quality assurance. It became increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded. Estimating measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and are briefly described in this chapter. In addition, the different possibilities for taking the uncertainty into account in compliance assessment are explained.
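
    As a minimal sketch of one such strategy, the GUM law of propagation of uncertainty combines standard uncertainties via sensitivity coefficients; the measurand c = m/V and all numerical values below are invented for illustration:

```python
import numpy as np

# GUM-style combination of independent standard uncertainties for an
# illustrative measurand c = m / V (a concentration from mass and volume).
m, u_m = 10.00, 0.02      # mass [mg] and its standard uncertainty
V, u_V = 100.0, 0.5       # volume [mL] and its standard uncertainty

c = m / V

# Sensitivity coefficients (partial derivatives of the model c = m/V).
dc_dm = 1 / V
dc_dV = -m / V**2

# Combined standard uncertainty (root sum of squares) and expanded (k = 2).
u_c = np.sqrt((dc_dm * u_m)**2 + (dc_dV * u_V)**2)
U = 2 * u_c
```

    The expanded uncertainty U (coverage factor k = 2, roughly 95% coverage) is what would typically be compared against a specification limit in compliance assessment.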

  14. Carbon dioxide and methane measurements from the Los Angeles Megacity Carbon Project - Part 1: calibration, urban enhancements, and uncertainty estimates

    Science.gov (United States)

    Verhulst, Kristal R.; Karion, Anna; Kim, Jooil; Salameh, Peter K.; Keeling, Ralph F.; Newman, Sally; Miller, John; Sloop, Christopher; Pongetti, Thomas; Rao, Preeti; Wong, Clare; Hopkins, Francesca M.; Yadav, Vineet; Weiss, Ray F.; Duren, Riley M.; Miller, Charles E.

    2017-07-01

    We report continuous surface observations of carbon dioxide (CO2) and methane (CH4) from the Los Angeles (LA) Megacity Carbon Project during 2015. We devised a calibration strategy, methods for selection of background air masses, calculation of urban enhancements, and a detailed algorithm for estimating uncertainties in urban-scale CO2 and CH4 measurements. These methods are essential for understanding carbon fluxes from the LA megacity and other complex urban environments globally. We estimate background mole fractions entering LA using observations from four extra-urban sites including two marine sites located south of LA in La Jolla (LJO) and offshore on San Clemente Island (SCI), one continental site located in Victorville (VIC), in the high desert northeast of LA, and one continental/mid-troposphere site located on Mount Wilson (MWO) in the San Gabriel Mountains. We find that a local marine background can be established to within ˜ 1 ppm CO2 and ˜ 10 ppb CH4 using these local measurement sites. Overall, atmospheric carbon dioxide and methane levels are highly variable across Los Angeles. Urban and suburban sites show moderate to large CO2 and CH4 enhancements relative to a marine background estimate. The USC (University of Southern California) site near downtown LA exhibits median hourly enhancements of ˜ 20 ppm CO2 and ˜ 150 ppb CH4 during 2015 as well as ˜ 15 ppm CO2 and ˜ 80 ppb CH4 during mid-afternoon hours (12:00-16:00 LT, local time), which is the typical period of focus for flux inversions. The estimated measurement uncertainty is typically better than 0.1 ppm CO2 and 1 ppb CH4 based on the repeated standard gas measurements from the LA sites during the last 2 years, similar to Andrews et al. (2014). The largest component of the measurement uncertainty is due to the single-point calibration method; however, the uncertainty in the background mole fraction is much larger than the measurement uncertainty. The background uncertainty for the marine

  15. Impact of Hydrogeological Uncertainty on Estimation of Environmental Risks Posed by Hydrocarbon Transportation Networks

    Science.gov (United States)

    Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.

    2017-11-01

    Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of the temporal evolution of two key risk indices, the volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the corresponding remediation costs.

  16. Estimation of uncertainty bounds for individual particle image velocimetry measurements from cross-correlation peak ratio

    International Nuclear Information System (INIS)

    Charonko, John J; Vlachos, Pavlos P

    2013-01-01

    Numerous studies have established firmly that particle image velocimetry (PIV) is a robust method for non-invasive, quantitative measurements of fluid velocity, and that when carefully conducted, typical measurements can accurately detect displacements in digital images with a resolution well below a single pixel (in some cases well below a hundredth of a pixel). However, to date, these estimates have only been able to provide guidance on the expected error for an average measurement under specific image quality and flow conditions. This paper demonstrates a new method for estimating the uncertainty bounds to within a given confidence interval for a specific, individual measurement. Here, cross-correlation peak ratio, the ratio of primary to secondary peak height, is shown to correlate strongly with the range of observed error values for a given measurement, regardless of flow condition or image quality. This relationship is significantly stronger for phase-only generalized cross-correlation PIV processing, while the standard correlation approach showed weaker performance. Using an analytical model of the relationship derived from synthetic data sets, the uncertainty bounds at a 95% confidence interval are then computed for several artificial and experimental flow fields, and the resulting errors are shown to match closely to the predicted uncertainties. While this method stops short of being able to predict the true error for a given measurement, knowledge of the uncertainty level for a PIV experiment should provide great benefits when applying the results of PIV analysis to engineering design studies and computational fluid dynamics validation efforts. Moreover, this approach is exceptionally simple to implement and requires negligible additional computational cost. (paper)
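
    The peak-ratio metric central to this method is easy to compute from a correlation plane; the synthetic plane below (two Gaussian peaks plus noise) is a stand-in for a real PIV interrogation result:

```python
import numpy as np

# Synthetic correlation plane: a dominant displacement peak plus a weaker
# spurious peak and a noise floor (illustrative of a PIV interrogation window).
rng = np.random.default_rng(3)
y, x = np.mgrid[0:64, 0:64]
plane = (1.0 * np.exp(-((x - 40)**2 + (y - 30)**2) / 8.0)    # primary peak
         + 0.3 * np.exp(-((x - 12)**2 + (y - 50)**2) / 8.0)  # secondary peak
         + 0.02 * rng.random((64, 64)))                      # noise floor

def peak_ratio(corr, exclusion=3):
    """Primary-to-secondary correlation peak height ratio (PPR)."""
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    primary = corr[i, j]
    masked = corr.copy()
    masked[max(0, i - exclusion):i + exclusion + 1,
           max(0, j - exclusion):j + exclusion + 1] = -np.inf  # hide primary region
    secondary = masked.max()
    return primary / secondary

ppr = peak_ratio(plane)
```

    In the paper's framework, an empirical model maps a ratio like this to an uncertainty bound for that individual vector; the mapping itself must be calibrated on synthetic data.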

  17. Bootstrap and Order Statistics for Quantifying Thermal-Hydraulic Code Uncertainties in the Estimation of Safety Margins

    Directory of Open Access Journals (Sweden)

    Enrico Zio

    2008-01-01

    Full Text Available In the present work, the uncertainties affecting the safety margins estimated from thermal-hydraulic code calculations are captured quantitatively by resorting to the order statistics and the bootstrap technique. The proposed framework of analysis is applied to the estimation of the safety margin, with its confidence interval, of the maximum fuel cladding temperature reached during a complete group distribution blockage scenario in a RBMK-1500 nuclear reactor.
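
    The order-statistics side of this framework can be sketched with Wilks' first-order formula for the classic 95/95 one-sided tolerance limit; the temperature samples and the 1200 K limit below are synthetic placeholders, not RBMK-1500 results:

```python
import numpy as np

def wilks_sample_size(coverage=0.95, confidence=0.95):
    # Smallest N such that the sample maximum bounds the `coverage` quantile
    # with the requested confidence: 1 - coverage**N >= confidence.
    n = 1
    while 1 - coverage**n < confidence:
        n += 1
    return n

n95 = wilks_sample_size()          # classic first-order 95/95 result

# Order-statistics bound on the peak cladding temperature from n95 code runs
# (synthetic outputs here; a real study would use thermal-hydraulic code runs).
rng = np.random.default_rng(4)
pct = rng.normal(1000.0, 25.0, n95)      # peak cladding temperature samples [K]
upper_bound = np.max(pct)                # 95/95 one-sided upper tolerance limit
margin = 1200.0 - upper_bound            # safety margin to an assumed limit
```

    Bootstrapping the n95 code outputs would then yield a confidence interval on the margin itself, as the paper proposes.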

  18. Uncertainty in Forest Net Present Value Estimations

    Directory of Open Access Journals (Sweden)

    Ilona Pietilä

    2010-09-01

    Full Text Available Uncertainty related to inventory data, growth models and timber price fluctuation was investigated in the assessment of forest property net present value (NPV). The degree of uncertainty associated with inventory data was obtained from previous area-based airborne laser scanning (ALS) inventory studies. The study was performed, applying Monte Carlo simulation, using stand-level growth and yield projection models and three alternative rates of interest (3, 4 and 5%). Timber price fluctuation was portrayed with geometric mean-reverting (GMR) price models. The analysis was conducted for four alternative forest properties having varying compartment structures: (A) a property having an even development class distribution, (B) sapling stands, (C) young thinning stands, and (D) mature stands. Simulations resulted in predicted yield value (predicted NPV) distributions at both stand and property levels. Our results showed that ALS inventory errors were the most prominent source of uncertainty, leading to a 5.1–7.5% relative deviation of property-level NPV when an interest rate of 3% was applied. Interestingly, ALS inventory led to significant biases at the property level, ranging from 8.9% to 14.1% (3% interest rate). ALS inventory-based bias was most significant in mature stand properties. Errors related to the growth predictions led to a relative standard deviation in NPV varying from 1.5% to 4.1%. Growth model-related uncertainty was most significant in sapling stand properties. Timber price fluctuation caused relative standard deviations ranging from 3.4% to 6.4% (3% interest rate). The combined relative variation caused by inventory errors, growth model errors and timber price fluctuation varied, depending on the property type and applied rates of interest, from 6.4% to 12.6%. By applying the methodology described here, one may take into account the effects of various uncertainty factors in the prediction of forest yield value and to supply the
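
    The GMR timber price component can be sketched as an Euler simulation of a mean-reverting process on log-prices; all parameter values are invented for illustration and are not those calibrated in the study:

```python
import numpy as np

rng = np.random.default_rng(5)

# Geometric mean-reverting (GMR) timber price model, simulated on log-prices:
# d ln P = kappa * (ln Pbar - ln P) dt + sigma dW   (assumed parameter values).
kappa, p_bar, sigma, dt = 0.3, 50.0, 0.15, 1.0  # reversion speed, mean price, volatility, step [yr]
years, n_paths = 30, 2000

log_p = np.full(n_paths, np.log(40.0))          # start below the long-run mean
for _ in range(years):
    drift = kappa * (np.log(p_bar) - log_p) * dt
    log_p += drift + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

final_prices = np.exp(log_p)
mean_final = final_prices.mean()
```

    Feeding such simulated price paths into the NPV calculation, together with perturbed inventory and growth inputs, is the essence of the Monte Carlo design described above.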

  19. Estimating the uncertainty in thermochemical calculations for oxygen-hydrogen combustors

    Science.gov (United States)

    Sims, Joseph David

    The thermochemistry program CEA2 was combined with the statistical thermodynamics program PAC99 in a Monte Carlo simulation to determine the uncertainty in several CEA2 output variables due to uncertainty in thermodynamic reference values for the reactant and combustion species. In all, six typical performance parameters were examined, along with the required intermediate calculations (five gas properties and eight stoichiometric coefficients), for three hydrogen-oxygen combustors: a main combustor, an oxidizer preburner and a fuel preburner. The three combustors were analyzed in two different modes: design mode, where, for the first time, the uncertainty in thermodynamic reference values---taken from the literature---was considered (inputs to CEA2 were specified and so had no uncertainty); and data reduction mode, where inputs to CEA2 did have uncertainty. The inputs to CEA2 were contrived experimental measurements that were intended to represent the typical combustor testing facility. In design mode, uncertainties in the performance parameters were on the order of 0.1% for the main combustor, on the order of 0.05% for the oxidizer preburner and on the order of 0.01% for the fuel preburner. Thermodynamic reference values for H2O were the dominant sources of uncertainty, as was the assigned enthalpy for liquid oxygen. In data reduction mode, uncertainties in performance parameters increased significantly as a result of the uncertainties in experimental measurements compared to uncertainties in thermodynamic reference values. Main combustor and fuel preburner theoretical performance values had uncertainties of about 0.5%, while the oxidizer preburner had nearly 2%. Associated experimentally-determined performance values for all three combustors were 3% to 4%. The dominant sources of uncertainty in this mode were the propellant flowrates. These results only apply to hydrogen-oxygen combustors and should not be generalized to every propellant combination. 
Species for

  20. Climate Projections and Uncertainty Communication.

    Science.gov (United States)

    Joslyn, Susan L; LeClerc, Jared E

    2016-01-01

    Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. Copyright © 2015 Cognitive Science Society, Inc.

  1. Estimating real-time predictive hydrological uncertainty

    NARCIS (Netherlands)

    Verkade, J.S.

    2015-01-01

    Flood early warning systems provide a potentially highly effective flood risk reduction measure. The effectiveness of early warning, however, is affected by forecasting uncertainty: the impossibility of knowing, in advance, the exact future state of hydrological systems. Early warning systems

  2. Uncertainties in the estimation of specific absorption rate during radiofrequency alternating magnetic field induced non-adiabatic heating of ferrofluids

    Science.gov (United States)

    Lahiri, B. B.; Ranoo, Surojit; Philip, John

    2017-11-01

    Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology where the alternating magnetic field induced heating of magnetic fluid is utilized for ablating the cancerous cells or making them more susceptible to the conventional treatments. The heating efficiency in MFH is quantified in terms of the specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of experimental studies, SAR is evaluated from temperature rise curves obtained under non-adiabatic experimental conditions, which is prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample and spatial variation in the temperature profile within the sample. Using first order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with those obtained from the computationally intense slope corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and the slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area to volume ratio and the coil temperature. It is also observed that peripheral heating decreases in the presence of a non-magnetic insulating shield. The delayed heating is found to contribute up to ~25% uncertainty in SAR values. As the SAR values are very sensitive to the initial slope determination method, explicit mention of the range of the linear regression analysis is necessary to reproduce the results.
The effect of sample volume to area ratio on linear heat loss rate is systematically studied and the
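
    The sensitivity of SAR to the initial-slope determination noted above can be illustrated with a minimal sketch: a noisy non-adiabatic rise curve, a linear regression over an explicitly reported early-time window, and the usual SAR formula. All physical values are assumed placeholders:

```python
import numpy as np

# SAR from the initial slope of a (noisy) non-adiabatic temperature rise curve,
# using linear regression over an explicitly stated early-time window.
rng = np.random.default_rng(6)
t = np.arange(0, 300, 5.0)                       # time [s]
T = 300.0 + 4.0 * (1 - np.exp(-t / 200.0))       # synthetic rise curve [K]
T += rng.normal(0, 0.01, t.size)                 # sensor noise

window = t <= 30.0                                # regression range (must be reported)
slope = np.polyfit(t[window], T[window], 1)[0]    # initial dT/dt [K/s]

c_p, m_sample, m_magnetic = 4186.0, 1e-3, 1e-5    # J/(kg K), sample and particle mass [kg]
sar = c_p * m_sample * slope / m_magnetic         # specific absorption rate [W/kg]
```

    Changing the regression window shifts the fitted slope, and hence the SAR, which is why the abstract insists that the range of the linear regression be stated explicitly.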

  3. Uncertainties in the estimation of specific absorption rate during radiofrequency alternating magnetic field induced non-adiabatic heating of ferrofluids

    International Nuclear Information System (INIS)

    Lahiri, B B; Ranoo, Surojit; Philip, John

    2017-01-01

    Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology where the alternating magnetic field induced heating of magnetic fluid is utilized for ablating the cancerous cells or making them more susceptible to the conventional treatments. The heating efficiency in MFH is quantified in terms of the specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of experimental studies, SAR is evaluated from temperature rise curves obtained under non-adiabatic experimental conditions, which is prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample and spatial variation in the temperature profile within the sample. Using first order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with those obtained from the computationally intense slope corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and the slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area to volume ratio and the coil temperature. It is also observed that peripheral heating decreases in the presence of a non-magnetic insulating shield. The delayed heating is found to contribute up to ∼25% uncertainty in SAR values. As the SAR values are very sensitive to the initial slope determination method, explicit mention of the range of the linear regression analysis is necessary to reproduce the results. The effect of sample volume to area ratio on linear heat loss rate is systematically studied and

  4. Combined Estimation of Hydrogeologic Conceptual Model, Parameter, and Scenario Uncertainty with Application to Uranium Transport at the Hanford Site 300 Area

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Rockhold, Mark L.; Neuman, Shlomo P.; Cantrell, Kirk J.

    2007-07-30

    This report to the Nuclear Regulatory Commission (NRC) describes the development and application of a methodology to systematically and quantitatively assess predictive uncertainty in groundwater flow and transport modeling that considers the combined impact of hydrogeologic uncertainties associated with the conceptual-mathematical basis of a model, model parameters, and the scenario to which the model is applied. The methodology is based on an extension of a maximum likelihood implementation of Bayesian Model Averaging. Model uncertainty is represented by postulating a discrete set of alternative conceptual models for a site with associated prior model probabilities that reflect a belief about the relative plausibility of each model based on its apparent consistency with available knowledge and data. Posterior model probabilities are computed and parameter uncertainty is estimated by calibrating each model to observed system behavior; prior parameter estimates are optionally included. Scenario uncertainty is represented as a discrete set of alternative future conditions affecting boundary conditions, source/sink terms, or other aspects of the models, with associated prior scenario probabilities. A joint assessment of uncertainty results from combining model predictions computed under each scenario, weighted by the posterior model and prior scenario probabilities. The uncertainty methodology was applied to modeling of groundwater flow and uranium transport at the Hanford Site 300 Area. Eight alternative models representing uncertainty in the hydrogeologic and geochemical properties as well as the temporal variability were considered. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. The scenario alternatives were implemented in the models through the boundary conditions. Results demonstrate the feasibility of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow
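
    The weighting scheme described in this report can be sketched in a few lines; the model set, log-likelihoods, scenario probabilities, and predictions below are invented numbers, not Hanford results:

```python
import numpy as np

# Posterior model probabilities for a discrete set of alternative conceptual
# models, combined with prior scenario probabilities (all numbers illustrative).
prior_model = np.array([0.5, 0.3, 0.2])          # prior plausibility of 3 models
log_lik = np.array([-120.0, -118.5, -125.0])     # calibration log-likelihoods

w = prior_model * np.exp(log_lik - log_lik.max())  # shift max for stability
posterior_model = w / w.sum()

prior_scenario = np.array([0.7, 0.3])            # 2 alternative future scenarios
# predictions[i, j]: model i's prediction under scenario j (e.g. plume extent).
predictions = np.array([[10.0, 14.0],
                        [12.0, 15.0],
                        [ 9.0, 13.0]])

# Joint (model x scenario) weights and the model-averaged prediction.
joint_w = posterior_model[:, None] * prior_scenario[None, :]
bma_prediction = (joint_w * predictions).sum()
```

    The same joint weights can be applied to full predictive distributions rather than point values, which is how the combined model-parameter-scenario uncertainty is assembled.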

  5. Hybrid time-variant reliability estimation for active control structures under aleatory and epistemic uncertainties

    Science.gov (United States)

    Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui

    2018-04-01

    Considering that multi-source uncertainties from the inherent nature of the system as well as the external environment are unavoidable and severely affect controller performance, dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, uncertainty quantification analysis and time-variant reliability estimation corresponding to closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of controlled response histories are first confirmed with specific implementation of random items. For nonlinear cases, the collocation set methodology and the fourth-order Runge-Kutta algorithm are introduced as well. Inspired by the first-passage model in random process theory as well as by static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for vibration control systems and the related solution details are further expounded. Two engineering examples are finally presented to demonstrate the validity and applicability of the developed methodology.

  6. A method countries can use to estimate changes in carbon stored in harvested wood products and the uncertainty of such estimates

    Science.gov (United States)

    Kenneth E. Skog; Kim Pingoud; James E. Smith

    2004-01-01

    A method is suggested for estimating additions to carbon stored in harvested wood products (HWP) and for evaluating uncertainty. The method uses data on HWP production and trade from several decades and tracks annual additions to pools of HWP in use, removals from use, additions to solid waste disposal sites (SWDS), and decay from SWDS. The method is consistent with...
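
    A minimal sketch of such pool accounting with first-order decay might look as follows; the production series, decay rates, and SWDS fraction are invented placeholders, not IPCC default values:

```python
# Track carbon additions to harvested wood products (HWP) in use and in solid
# waste disposal sites (SWDS) with first-order decay; rates are illustrative.
years = range(1990, 2001)
production = {y: 100.0 for y in years}      # annual HWP carbon input [Gg C]
k_use, k_swds = 0.05, 0.02                  # first-order decay rates [1/yr]
to_swds = 0.6                               # fraction of removals going to SWDS

in_use, swds = 0.0, 0.0
stock_change = {}
for y in years:
    removals = k_use * in_use               # carbon leaving the in-use pool
    in_use += production[y] - removals
    swds += to_swds * removals - k_swds * swds
    stock_change[y] = in_use + swds         # total HWP carbon stock after year y
```

    Uncertainty evaluation would then perturb the production data and decay rates (e.g. by Monte Carlo) and observe the spread in annual stock changes.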

  7. Quantification of Safety-Critical Software Test Uncertainty

    International Nuclear Information System (INIS)

    Khalaquzzaman, M.; Cho, Jaehyun; Lee, Seung Jun; Jung, Wondea

    2015-01-01

    The method conservatively assumes that the failure probability of software for untested inputs is 1, and that the failure probability becomes 0 after successful testing of all test cases. In reality, however, a chance of failure remains due to test uncertainty. Some studies have been carried out to identify the test attributes that affect test quality. Cao discussed the testing effort, testing coverage, and testing environment. Management of the test uncertainties has also been discussed. In this study, the test uncertainty has been considered in estimating the software failure probability, because the software testing process is inherently uncertain. A reliability estimate of software is very important for the probabilistic safety analysis of digital safety-critical systems of NPPs. This study focused on the estimation of the probability of a software failure that takes the uncertainty in software testing into account. In our study, a BBN has been employed as an example model for software test uncertainty quantification. Although it can be argued that direct expert elicitation of test uncertainty is much simpler than BBN estimation, the BBN approach provides more insights and a basis for uncertainty estimation.

  8. A Bootstrap Approach to Computing Uncertainty in Inferred Oil and Gas Reserve Estimates

    International Nuclear Information System (INIS)

    Attanasi, Emil D.; Coburn, Timothy C.

    2004-01-01

    This study develops confidence intervals for estimates of inferred oil and gas reserves based on bootstrap procedures. Inferred reserves are expected additions to proved reserves in previously discovered conventional oil and gas fields. Estimates of inferred reserves accounted for 65% of the total oil and 34% of the total gas assessed in the U.S. Geological Survey's 1995 National Assessment of oil and gas in US onshore and State offshore areas. When the same computational methods used in the 1995 Assessment are applied to more recent data, the 80-year (from 1997 through 2076) inferred reserve estimates for pre-1997 discoveries located in the lower 48 onshore and state offshore areas amounted to a total of 39.7 billion barrels of oil (BBO) and 293 trillion cubic feet (TCF) of gas. The 90% confidence interval about the oil estimate derived from the bootstrap approach is 22.4 BBO to 69.5 BBO. The comparable 90% confidence interval for the inferred gas reserve estimate is 217 TCF to 413 TCF. The 90% confidence interval describes the uncertainty that should be attached to the estimates. It also provides a basis for developing scenarios to explore the implications for energy policy analysis
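
    The percentile-bootstrap construction of such confidence intervals can be sketched as follows, with synthetic field-level data standing in for the assessment inputs:

```python
import numpy as np

# Percentile-bootstrap 90% confidence interval for an aggregate reserve
# estimate (field-level additions are synthetic stand-ins, not USGS data).
rng = np.random.default_rng(7)
field_additions = rng.lognormal(mean=0.0, sigma=1.2, size=400)  # per field group
point_estimate = field_additions.sum()

n_boot = 2000
boot_totals = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(field_additions, size=field_additions.size, replace=True)
    boot_totals[b] = resample.sum()

ci_low, ci_high = np.quantile(boot_totals, [0.05, 0.95])  # 90% confidence interval
```

    The skewed, heavy-tailed distribution of field sizes is exactly the situation where a bootstrap interval is preferable to a normal-approximation interval.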

  9. Comparison of the GUM and Monte Carlo methods on the flatness uncertainty estimation in coordinate measuring machine

    Directory of Open Access Journals (Sweden)

    Jalid Abdelilah

    2016-01-01

    In the engineering industry, control of manufactured parts is usually performed on a coordinate measuring machine (CMM): a sensor mounted at the end of the machine probes a set of points on the surface to be inspected. Data processing is performed subsequently in software, and the result of this measurement process either confirms or rejects the conformity of the part. Measurement uncertainty is a crucial parameter for making the right decisions, and neglecting this parameter can therefore sometimes lead to aberrant decisions. Determining the measurement uncertainty on a CMM is a complex task because of the variety of influencing factors. In this study, we check whether the uncertainty propagation model developed according to the Guide to the Expression of Uncertainty in Measurement (GUM) is valid, presenting a comparison of the GUM and Monte Carlo methods. The comparison is made by estimating the flatness deviation of a surface belonging to an industrial part and the uncertainty associated with the measurement result.
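    The GUM-versus-Monte-Carlo comparison can be sketched on a deliberately simple measurand (a height difference between two probed points, not the paper's full flatness model); the input values and uncertainties below are assumptions:

```python
import math
import random
import statistics

# Illustrative measurand only: y = x1 - x2 (a height difference), with
# normally distributed inputs; values and uncertainties are assumed.
u1, u2 = 0.002, 0.003          # standard uncertainties of the two inputs (mm)
x1, x2 = 10.000, 9.995         # measured values (mm)

# GUM: first-order propagation; sensitivity coefficients are +1 and -1
u_gum = math.sqrt(u1**2 + u2**2)

# Monte Carlo (GUM Supplement 1 style): sample the inputs, look at the spread
rng = random.Random(0)
samples = [rng.gauss(x1, u1) - rng.gauss(x2, u2) for _ in range(100_000)]
u_mc = statistics.stdev(samples)

print(f"u(GUM) = {u_gum:.5f} mm, u(MC) = {u_mc:.5f} mm")
```

    For a linear model with Gaussian inputs the two answers coincide, which is the validity check the paper performs on a far less tractable measurand.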

  10. Towards a quantitative, measurement-based estimate of the uncertainty in photon mass attenuation coefficients at radiation therapy energies

    Science.gov (United States)

    Ali, E. S. M.; Spencer, B.; McEwen, M. R.; Rogers, D. W. O.

    2015-02-01

    In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy—i.e. 100 keV (orthovoltage) to 25 MeV—using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ˜0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative ‘envelope of uncertainty’ of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol 44 R1-22).

  11. The first Australian gravimetric quasigeoid model with location-specific uncertainty estimates

    Science.gov (United States)

    Featherstone, W. E.; McCubbine, J. C.; Brown, N. J.; Claessens, S. J.; Filmer, M. S.; Kirby, J. F.

    2018-02-01

    We describe the computation of the first Australian quasigeoid model to include error estimates as a function of location that have been propagated from uncertainties in the EGM2008 global model, land and altimeter-derived gravity anomalies and terrain corrections. The model has been extended to include Australia's offshore territories and maritime boundaries using newer datasets comprising an additional ~280,000 land gravity observations, a newer altimeter-derived marine gravity anomaly grid, and terrain corrections at 1″ × 1″ resolution. The error propagation uses a remove-restore approach, where the EGM2008 quasigeoid and gravity anomaly error grids are augmented by errors propagated through a modified Stokes integral from the errors in the altimeter gravity anomalies, land gravity observations and terrain corrections. The gravimetric quasigeoid errors (one sigma) are 50-60 mm across most of the Australian landmass, increasing to ~100 mm in regions of steep horizontal gravity gradients or the mountains, and are commensurate with external estimates.

  12. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    Science.gov (United States)

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
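    A stripped-down version of the approach, fitting two candidate probability density functions by maximum likelihood and selecting between them with AIC, can be sketched as follows; the depth observations are hypothetical, not the Klamath data, and only standard-library tools are used:

```python
import math
import statistics

def normal_loglik(data, mu, sigma):
    """Gaussian log-likelihood of a sample."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in data)

def aic(loglik, k=2):
    return 2 * k - 2 * loglik

# Hypothetical juvenile-salmon depth observations (m); illustrative only.
depths = [0.4, 0.6, 0.5, 0.9, 1.1, 0.7, 0.8, 0.5, 1.4, 0.6, 0.7, 1.0]

# Candidate 1: normal distribution, MLE parameters from the raw data
mu, sd = statistics.mean(depths), statistics.pstdev(depths)
aic_norm = aic(normal_loglik(depths, mu, sd))

# Candidate 2: lognormal, via a normal fit to log-depths (Jacobian term -sum(log x))
logs = [math.log(x) for x in depths]
mu_l, sd_l = statistics.mean(logs), statistics.pstdev(logs)
aic_lnorm = aic(normal_loglik(logs, mu_l, sd_l) - sum(logs))

best = "lognormal" if aic_lnorm < aic_norm else "normal"
print(f"AIC normal={aic_norm:.2f}, lognormal={aic_lnorm:.2f} -> choose {best}")
```

    The fitted density of the winning candidate then serves directly as the HSC curve, and parameter standard errors give the estimation uncertainty the abstract highlights.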

  13. Evaluation and uncertainty estimates of Charpy-impact data

    International Nuclear Information System (INIS)

    Stallman, F.W.

    1982-01-01

    Shifts in transition temperature and upper-shelf energy from Charpy tests are used to determine the extent of radiation embrittlement in steels. In order to determine these parameters reliably and to obtain uncertainty estimates, curve fitting procedures need to be used. The hyperbolic tangent or similar models have been proposed to fit the temperature-impact-energy curve. These models are not based on the actual fracture mechanics and are indeed poorly suited in many applications. The results may be falsified by forcing an inflexible curve through too many data points. The nonlinearity of the fit poses additional problems. In this paper, a simple linear fit is proposed. By eliminating data which are irrelevant for the determination of a given parameter, better reliability and accuracy can be achieved. Additional input parameters like fluence and irradiation temperature can be included. This is important if there is a large variation of fluence and temperature in different test specimens. The method has been tested with Charpy specimens from the NRC-HSST experiments
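    The simple linear fit advocated here can be sketched with ordinary least squares over the transition region; the temperatures, energies, and the 41 J criterion below are illustrative assumptions, not HSST data:

```python
import statistics

# Near the transition region, fit impact energy vs temperature with ordinary
# least squares and read the transition temperature at a criterion energy.
T = [-60.0, -40.0, -20.0, 0.0, 20.0, 40.0]   # test temperatures (C), assumed
E = [10.0, 22.0, 38.0, 55.0, 71.0, 90.0]     # impact energies (J), assumed

mT, mE = statistics.fmean(T), statistics.fmean(E)
slope = sum((t - mT) * (e - mE) for t, e in zip(T, E)) / sum((t - mT)**2 for t in T)
intercept = mE - slope * mT
T41 = (41.0 - intercept) / slope             # temperature at the 41 J criterion
print(f"E = {intercept:.1f} + {slope:.2f}*T ;  T(41 J) ~ {T41:.1f} C")
```

    Because the model is linear, standard regression formulas give the uncertainty of the transition temperature directly, avoiding the nonlinearity problems of the hyperbolic tangent fit.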

  14. A Bayesian analysis of sensible heat flux estimation: Quantifying uncertainty in meteorological forcing to improve model prediction

    KAUST Repository

    Ershadi, Ali; McCabe, Matthew; Evans, Jason P.; Mariethoz, Gregoire; Kavetski, Dmitri

    2013-01-01

    The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model

  15. Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Within licensing procedures there is an incentive to replace the conservative requirements for code application by a 'best estimate' concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the width of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the Uncertainty Methods Study (UMS) and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limit of the uncertainty ranges.
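    Several statistical uncertainty methods of this kind size their code-run sample with Wilks' formula from order statistics (the abstract does not spell the formula out, so the following is a generic sketch of the one-sided, first-order case):

```python
# Wilks' formula, one-sided first order: after n random code runs, the sample
# maximum bounds the 'coverage' quantile of the output with probability
# 1 - coverage**n.  Find the smallest n meeting a confidence target.
def wilks_n_one_sided(coverage=0.95, confidence=0.95):
    n = 1
    while 1 - coverage**n < confidence:
        n += 1
    return n

print(wilks_n_one_sided())  # the well-known 95%/95% value: 59 runs
```

    This is why such analyses typically quote 59 (or, for two-sided limits, 93) code runs regardless of how many uncertain input parameters are sampled.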

  16. The influence of climate change on flood risks in France ­- first estimates and uncertainty analysis

    OpenAIRE

    Dumas, Patrice; Hallegatte, Stéphane; Quintana-Seguí, Pere; Martin, Eric

    2013-01-01

    This paper proposes a methodology to project the possible evolution of river flood damages due to climate change, and applies it to mainland France. Its main contributions are (i) to demonstrate a methodology to investigate the full causal chain from global climate change to local economic flood losses; (ii) to show that future flood losses may change in a very significant manner over France; (iii) to show that a very large uncertainty arises from the climate...

  17. OECD/CSNI Workshop on Best Estimate Methods and Uncertainty Evaluations - Workshop Proceedings

    International Nuclear Information System (INIS)

    2013-01-01

    Best-Estimate Methods plus Uncertainty Evaluation are gaining increased interest in the licensing process. On the other hand, lessons learnt from the BEMUSE (NEA/CSNI/R(2011)3) and SM2A (NEA/CSNI/R(2011)3) benchmarks, progress of the UAM benchmark, and answers to the WGAMA questionnaire on the Use of Best-Estimate Methodologies show that improvements of the present methods are necessary and new applications appear. The objective of this workshop is to provide a forum for a wide range of experts to exchange information in the area of best estimate analysis and uncertainty evaluation methods and to address issues drawn from the BEMUSE, UAM and SM2A activities. Both improvement of existing methods and recent new developments are included. As a result of the workshop, a set of recommendations, including lines for future activities, was proposed. The organisation of the Workshop was divided into three parts: an opening session including keynotes from OECD and IAEA representatives, technical sessions, and a wrap-up session. All sessions included a debate with participation from the audience of 71 attendees. The workshop consisted of four technical sessions: a) Development achievements of BEPU methods and State of the Art: The objective of this session was to present the different approaches to deal with Best Estimate codes and uncertainty evaluations. A total of six papers were presented. One initial paper summarized the existing methods; the following papers were focused on specific methods, stressing their bases, peculiarities and advantages. As a result of the session a picture of the current State of the Art was obtained. b) International comparative activities: This session reviewed the set of international activities around the subject of BEPU methods benchmarking and development. From each of the activities a description of the objectives, development, main results, conclusions and recommendations (in case it is finalized) was presented.

  18. Modeling multibody systems with uncertainties. Part II: Numerical applications

    Energy Technology Data Exchange (ETDEWEB)

    Sandu, Corina, E-mail: csandu@vt.edu; Sandu, Adrian; Ahmadian, Mehdi [Virginia Polytechnic Institute and State University, Mechanical Engineering Department (United States)

    2006-04-15

    This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects'. In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to the ones obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loève expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties.
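    The flavour of the polynomial chaos approach can be conveyed on a toy problem (not the paper's multibody model): for x ~ N(0,1) pushed through y = x**2, the probabilists' Hermite identity He2(x) = x**2 - 1 gives the chaos coefficients exactly, so mean and variance follow without sampling:

```python
import random
import statistics

# Toy polynomial-chaos illustration: y = x**2, x ~ N(0,1).
# Since He2(x) = x**2 - 1, we have y = 1*He0 + 0*He1 + 1*He2 exactly, so the
# chaos coefficients are c = [1, 0, 1]; mean = c0 and var = sum c_k^2 * k!.
pc_mean = 1.0
pc_var = 0**2 * 1 + 1**2 * 2      # 0^2 * 1! + 1^2 * 2! = 2

# Monte Carlo reference for comparison
rng = random.Random(1)
ys = [rng.gauss(0, 1)**2 for _ in range(200_000)]
mc_mean, mc_var = statistics.fmean(ys), statistics.variance(ys)
print(f"PC: mean={pc_mean}, var={pc_var}; MC: mean={mc_mean:.3f}, var={mc_var:.3f}")
```

    The chaos result is exact with three coefficients, while the Monte Carlo estimate carries sampling noise even at 200,000 draws, which is the efficiency argument the abstract makes.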

  19. Modeling multibody systems with uncertainties. Part II: Numerical applications

    International Nuclear Information System (INIS)

    Sandu, Corina; Sandu, Adrian; Ahmadian, Mehdi

    2006-01-01

    This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects'. In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to the ones obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loève expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties

  20. Uncertainties achievable for uranium isotope-amount ratios. Estimates based on the precision and accuracy of recent characterization measurements

    International Nuclear Information System (INIS)

    Mathew, K.J.; Essex, R.M.; Gradle, C.; Narayanan, U.

    2015-01-01

    Certified reference materials (CRMs) recently characterized by the NBL for isotope-amount ratios are: (i) CRM 112-A, Uranium (normal) Metal Assay and Isotopic Standard, (ii) CRM 115, Uranium (depleted) Metal Assay and Isotopic Standard, and (iii) CRM 116-A, Uranium (enriched) Metal Assay and Isotopic Standard. NBL also completed re-characterization of the isotope-amount ratios in CRM 125-A, Uranium (UO2) Pellet Assay, Isotopic, and Radio-chronometric Standard. Three different TIMS analytical techniques were employed for the characterization analyses. The total evaporation technique was used for the major isotope-amount ratio measurement, the modified total evaporation technique was used for both the major and minor isotope-amount ratios, and minor isotope-amount ratios were also measured using a conventional technique. Uncertainties for the characterization studies were calculated from the combined TIMS data sets following the ISO Guide to the Expression of Uncertainty in Measurement. The uncertainty components for the isotope-amount ratio values are discussed. (author)

  1. Uncertainty associated with the gravimetric measurement of particulate matter concentration in ambient air.

    Science.gov (United States)

    Lacey, Ronald E; Faulkner, William Brock

    2015-07-01

    This work applied a propagation of uncertainty method to typical total suspended particulate (TSP) sampling apparatus in order to estimate the overall measurement uncertainty. The objectives of this study were to estimate the uncertainty for three TSP samplers, develop an uncertainty budget, and determine the sensitivity of the total uncertainty to environmental parameters. The samplers evaluated were the TAMU High Volume TSP Sampler at a nominal volumetric flow rate of 1.42 m³ min⁻¹ (50 CFM), the TAMU Low Volume TSP Sampler at a nominal volumetric flow rate of 17 L min⁻¹ (0.6 CFM) and the EPA TSP Sampler at the nominal volumetric flow rates of 1.1 and 1.7 m³ min⁻¹ (39 and 60 CFM). Under nominal operating conditions the overall measurement uncertainty was found to vary from 6.1×10⁻⁶ g m⁻³ to 18.0×10⁻⁶ g m⁻³, which represented an uncertainty of 1.7% to 5.2% of the measurement. Analysis of the uncertainty budget determined that three of the instrument parameters contributed significantly to the overall uncertainty: the uncertainty in the pressure drop measurement across the orifice meter during both calibration and testing and the uncertainty of the airflow standard used during calibration of the orifice meter. Five environmental parameters occurring during field measurements were considered for their effect on overall uncertainty: ambient TSP concentration, volumetric airflow rate, ambient temperature, ambient pressure, and ambient relative humidity. Of these, only ambient TSP concentration and volumetric airflow rate were found to have a strong effect on the overall uncertainty. The technique described in this paper can be applied to other measurement systems and is especially useful where there are no methods available to generate these values empirically. This work addresses measurement uncertainty of TSP samplers used in ambient conditions. 
Estimation of uncertainty in gravimetric measurements is of particular interest, since as ambient particulate
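    The propagation-of-uncertainty technique applied in this work can be sketched for the gravimetric concentration model C = Δm/(Q·t); the numerical values below are assumptions, not the paper's instrument budget:

```python
import math

# Gravimetric TSP concentration C = dm / (Q * t); illustrative numbers only.
dm, u_dm = 5.0e-3, 0.1e-3      # filter mass gain and its uncertainty (g), assumed
Q, u_Q = 1.42, 0.03            # volumetric flow (m^3/min) and uncertainty, assumed
t, u_t = 1440.0, 1.0           # sampling time (min) and uncertainty, assumed

C = dm / (Q * t)
# First-order (GUM) propagation; for a pure product/quotient model the
# relative uncertainties add in quadrature.
rel_u = math.sqrt((u_dm / dm)**2 + (u_Q / Q)**2 + (u_t / t)**2)
u_C = C * rel_u
print(f"C = {C:.3e} g/m^3 +/- {u_C:.1e} (k=1), i.e. {100*rel_u:.1f}% of the value")
```

    Ranking the squared relative terms gives the uncertainty budget: here the flow-rate and mass terms dominate and the timing term is negligible, mirroring the kind of sensitivity ranking reported in the abstract.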

  2. NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) Benchmark. Volume II: uncertainty and sensitivity analyses of void distribution and critical power - Specification

    International Nuclear Information System (INIS)

    Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.

    2010-01-01

    This report provides the specification for the uncertainty exercises of the international OECD/NEA, NRC and NUPEC BFBT benchmark problem including the elemental task. The specification was prepared jointly by Pennsylvania State University (PSU), USA and the Japan Nuclear Energy Safety (JNES) Organisation, in cooperation with the OECD/NEA and the Commissariat a l'energie atomique (CEA Saclay, France). The work is sponsored by the US NRC, METI-Japan, the OECD/NEA and the Nuclear Engineering Program (NEP) of Pennsylvania State University. This uncertainty specification covers the fourth exercise of Phase I (Exercise-I-4), and the third exercise of Phase II (Exercise II-3) as well as the elemental task. The OECD/NRC BFBT benchmark provides a very good opportunity to apply uncertainty analysis (UA) and sensitivity analysis (SA) techniques and to assess the accuracy of thermal-hydraulic models for two-phase flows in rod bundles. During the previous OECD benchmarks, participants usually carried out sensitivity analysis on their models for the specification (initial conditions, boundary conditions, etc.) to identify the most sensitive models or/and to improve the computed results. The comprehensive BFBT experimental database (NEA, 2006) leads us one step further in investigating modelling capabilities by taking into account the uncertainty analysis in the benchmark. The uncertainties in input data (boundary conditions) and geometry (provided in the benchmark specification) as well as the uncertainties in code models can be accounted for to produce results with calculational uncertainties and compare them with the measurement uncertainties. Therefore, uncertainty analysis exercises were defined for the void distribution and critical power phases of the BFBT benchmark. 
This specification is intended to provide definitions related to UA/SA methods, sensitivity/ uncertainty parameters, suggested probability distribution functions (PDF) of sensitivity parameters, and selected

  3. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    Science.gov (United States)

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
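    The DerSimonian and Laird method discussed above is a method-of-moments estimator and is short enough to sketch directly; the study effects and within-study variances below are toy numbers for illustration:

```python
# DerSimonian-Laird estimate of the between-study variance tau^2:
# y are study effect estimates, v their within-study variances.
y = [0.30, 0.10, 0.45, 0.25, 0.60]
v = [0.01, 0.02, 0.015, 0.01, 0.025]

w = [1 / vi for vi in v]                                      # inverse-variance weights
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)          # fixed-effect mean
q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))          # Cochran's Q
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)                       # truncated at zero
print(f"Q = {q:.2f}, tau^2(DL) = {tau2:.4f}")
```

    The truncation at zero is one of the well-known criticisms of this estimator, and alternatives such as Paule-Mandel or REML (recommended in the abstract) avoid some of its weaknesses.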

  4. Cost uncertainty for different levels of technology maturity

    International Nuclear Information System (INIS)

    DeMuth, S.F.; Franklin, A.L.

    1996-01-01

    It is difficult at best to apply a single methodology for estimating cost uncertainties related to technologies of differing maturity. While highly mature technologies may have significant performance and manufacturing cost data available, less well developed technologies may be defined in only conceptual terms. Regardless of the degree of technical maturity, often a cost estimate relating to application of the technology may be required to justify continued funding for development. Yet, a cost estimate without its associated uncertainty lacks the information required to assess the economic risk. For this reason, it is important for the developer to provide some type of uncertainty along with a cost estimate. This study demonstrates how different methodologies for estimating uncertainties can be applied to cost estimates for technologies of different maturities. For a less well developed technology an uncertainty analysis of the cost estimate can be based on a sensitivity analysis; whereas, an uncertainty analysis of the cost estimate for a well developed technology can be based on an error propagation technique from classical statistics. It was decided to demonstrate these uncertainty estimation techniques with (1) an investigation of the additional cost of remediation due to beyond baseline, nearly complete, waste heel retrieval from underground storage tanks (USTs) at Hanford; and (2) the cost related to the use of crystalline silico-titanate (CST) rather than the baseline CS100 ion exchange resin for cesium separation from UST waste at Hanford
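    The sensitivity-analysis route for a less mature technology can be as simple as one-at-a-time swings of each cost driver through its plausible range; the cost model and ranges below are hypothetical, purely to show the mechanics:

```python
# One-at-a-time sensitivity sketch for a conceptual-stage cost model.
# cost = fixed + rate * hours; values and ranges are illustrative assumptions.
def cost(fixed, rate, hours):
    return fixed + rate * hours

base = {"fixed": 2.0e6, "rate": 150.0, "hours": 40_000}
ranges = {"fixed": (1.5e6, 3.0e6), "rate": (120.0, 220.0), "hours": (30_000, 60_000)}

baseline = cost(**base)
for name, (lo_v, hi_v) in ranges.items():
    swing = [cost(**{**base, name: lo_v}), cost(**{**base, name: hi_v})]
    print(f"{name:6s}: {min(swing):,.0f} .. {max(swing):,.0f} (baseline {baseline:,.0f})")
```

    For a mature technology the same question would instead be answered with error propagation from measured parameter variances, which is the methodological split the abstract describes.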

  5. Uncertainty of forest carbon stock changes. Implications to the total uncertainty of GHG inventory of Finland

    International Nuclear Information System (INIS)

    Monni, S.; Savolainen, I.; Peltoniemi, M.; Lehtonen, A.; Makipaa, R.; Palosuo, T.

    2007-01-01

    Uncertainty analysis facilitates identification of the most important categories affecting greenhouse gas (GHG) inventory uncertainty and helps in prioritisation of the efforts needed for development of the inventory. This paper presents an uncertainty analysis of GHG emissions of all Kyoto sectors and gases for Finland consolidated with estimates of emissions/removals from LULUCF categories. In Finland, net GHG emissions in 2003 were around 69 Tg (±15 Tg) CO2 equivalents. The uncertainties in forest carbon sink estimates in 2003 were larger than in most other emission categories, but of the same order of magnitude as in carbon stock change estimates in other land use, land-use change and forestry (LULUCF) categories, and in N2O emissions from agricultural soils. Uncertainties in sink estimates of 1990 were lower, due to better availability of data. Results of this study indicate that inclusion of the forest carbon sink to GHG inventories reported to the UNFCCC increases uncertainties in net emissions notably. However, the decrease in precision is accompanied by an increase in the accuracy of the overall net GHG emissions due to improved completeness of the inventory. The results of this study can be utilised when planning future GHG mitigation protocols and emission trading schemes and when analysing environmental benefits of climate conventions
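    Combining absolute category uncertainties in quadrature, in the spirit of a Tier 1 inventory analysis, can be sketched as follows; the category breakdown is illustrative, loosely echoing the 69 ± 15 Tg figure rather than reproducing the study's inventory:

```python
import math

# Tier-1-style combination: absolute category uncertainties add in quadrature
# for the net total.  Numbers are illustrative assumptions (Tg CO2-eq.).
categories = {
    "energy_and_industry": (55.0, 3.0),   # (emission, absolute uncertainty)
    "agricultural_N2O": (7.0, 4.0),
    "forest_carbon_sink": (-20.0, 13.0),  # sink: negative emission, large uncertainty
    "other_LULUCF": (27.0, 6.0),
}
net = sum(e for e, _ in categories.values())
u_net = math.sqrt(sum(u**2 for _, u in categories.values()))
print(f"net = {net:.0f} Tg CO2-eq, combined uncertainty ~ +/-{u_net:.0f} Tg")
```

    Because the terms add in quadrature, the largest single uncertainty (here the forest sink) dominates the total, which is exactly the effect of including the sink that the abstract reports.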

  6. RELAP5 simulation of surge line break accident using combined and best estimate plus uncertainty approaches

    International Nuclear Information System (INIS)

    Kristof, Marian; Kliment, Tomas; Petruzzi, Alessandro; Lipka, Jozef

    2009-01-01

    Licensing calculations in a majority of countries worldwide still rely on the application of a combined approach using a best estimate computer code without evaluation of the code model uncertainty, together with conservative assumptions on initial and boundary conditions, on the availability of systems and components, and additional conservative assumptions. However, the best estimate plus uncertainty (BEPU) approach, representing the state of the art in the area of safety analysis, has a clear potential to replace the currently used combined approach. There are several applications of the BEPU approach in the area of licensing calculations, but some questions remain under discussion, namely from the regulatory point of view. In order to find a proper solution to these questions and to support the BEPU approach in becoming a standard approach for licensing calculations, a broad comparison of both approaches for various transients is necessary. Results of one such comparison, on the example of the VVER-440/213 NPP pressurizer surge line break event, are described in this paper. A Kv-scaled simulation based on the PH4-SLB experiment from the PMK-2 integral test facility, applying its volume and power scaling factor, is performed for qualitative assessment of the RELAP5 computer code calculation using the VVER-440/213 plant model. Existing hardware differences are identified and explained. The CIAU method is adopted for performing the uncertainty evaluation. Results using the combined and BEPU approaches are in agreement with the experimental values in the PMK-2 facility. Only a minimal difference between the combined and BEPU approaches has been observed in the evaluation of the safety margins for the peak cladding temperature. Benefits of the CIAU uncertainty method are highlighted.

  7. Uncertainty estimation of a complex water quality model: The influence of Box-Cox transformation on Bayesian approaches and comparison with a non-Bayesian method

    Science.gov (United States)

    Freni, Gabriele; Mannina, Giorgio

    In urban drainage modelling, uncertainty analysis is of undoubted necessity. However, uncertainty analysis in urban water-quality modelling is still in its infancy and only a few studies have been carried out. Therefore, several methodological aspects still need to be explored and clarified, especially regarding water quality modelling. The use of the Bayesian approach for uncertainty analysis has been stimulated by its rigorous theoretical framework and by the possibility of evaluating the impact of new knowledge on the modelling predictions. Nevertheless, the Bayesian approach relies on some restrictive hypotheses that are not present in less formal methods like the Generalised Likelihood Uncertainty Estimation (GLUE). One crucial point in the application of the Bayesian method is the formulation of a likelihood function that is conditioned by the hypotheses made regarding model residuals. Statistical transformations, such as the Box-Cox equation, are generally used to ensure the homoscedasticity of residuals. However, this practice may affect the reliability of the analysis, leading to a wrong uncertainty estimation. The present paper aims to explore the influence of the Box-Cox equation for environmental water quality models. To this end, five cases were considered, one of which used the “real” residual distribution (i.e. drawn from available data). The analysis was applied to the Nocella experimental catchment (Italy), which is an agricultural and semi-urbanised basin where two sewer systems, two wastewater treatment plants and a river reach were monitored during both dry and wet weather periods. The results show that the uncertainty estimation is greatly affected by residual transformation and a wrong assumption may also affect the evaluation of model uncertainty. The use of less formal methods always provides an overestimation of modelling uncertainty with respect to the Bayesian method but such effect is reduced if a wrong assumption is made regarding the
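    The GLUE procedure contrasted with the Bayesian approach above can be sketched generically (a toy first-order decay model, an informal likelihood, and an assumed behavioral threshold, none of them from the paper):

```python
import math
import random

# Minimal GLUE-style sketch: Monte Carlo sample a parameter from a prior,
# keep "behavioral" parameter sets whose informal likelihood passes a
# threshold, and report prediction bounds from the behavioral ensemble.
def model(k, t=1.0, c0=10.0):
    return c0 * math.exp(-k * t)            # toy first-order decay

obs = 4.0                                   # single observation (assumed)
rng = random.Random(7)
behavioral = []
for _ in range(20_000):
    k = rng.uniform(0.0, 3.0)               # uniform prior on the decay rate
    sim = model(k)
    likelihood = 1.0 / (1.0 + (sim - obs)**2)   # informal likelihood measure
    if likelihood > 0.8:                    # behavioral threshold (assumed)
        behavioral.append(sim)

behavioral.sort()
lo = behavioral[int(0.05 * len(behavioral))]
hi = behavioral[int(0.95 * len(behavioral)) - 1]
print(f"{len(behavioral)} behavioral runs, 90% band [{lo:.2f}, {hi:.2f}]")
```

    Because the likelihood measure and threshold are chosen informally rather than derived from a residual error model, GLUE bands are typically wider than formal Bayesian intervals, which is the overestimation effect the abstract reports.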

  8. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, Sparse tensorization methods[2] utilizing node-nested hierarchies, Sampling methods[4] for high-dimensional random variable spaces.

  9. Preliminary uncertainty analysis for the doses estimated using the Techa River dosimetry system - 2000

    International Nuclear Information System (INIS)

    Napier, Bruce A.; Shagina, N B.; Degteva, M O.; Tolstykh, E I.; Vorobiova, M I.; Anspaugh, L R.

    2000-01-01

    The Mayak Production Association (MPA) was the first facility in the former Soviet Union for the production of plutonium. As a result of failures in the technological processes in the late 1940s and early 1950s, members of the public were exposed via the discharge of about 10¹⁷ Bq of liquid wastes into the Techa River (1949-1956). Residents of many villages downstream on the Techa River were exposed via a variety of pathways; the most significant included drinking of water from the river and external gamma exposure due to proximity to sediments and shoreline. The specific aim of this project is to enhance the reconstruction of external and internal radiation doses for individuals in the Extended Techa River Cohort. The purpose of this paper is to present the approaches being used to evaluate the uncertainty in the calculated individual doses and to provide example and representative results of the uncertainty analyses. The magnitude of the uncertainties varies depending on location and time of individual exposure, but the results from reference-individual calculations indicate that for external doses the range of uncertainty is about a factor of four to five. For internal doses, the range of uncertainty depends on village of residence, which is actually a surrogate for the source of drinking water. For villages with a single source of drinking water (river or well), the ratio of the 97.5th to the 2.5th percentile estimates can be a factor of 20 to 30. For villages with mixed sources of drinking water (river and well), this ratio can be over two orders of magnitude.
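If the dose distribution is treated as roughly lognormal (our assumption for illustration; the abstract does not state a distributional form), the quoted 97.5th/2.5th percentile ratios translate directly into a geometric standard deviation:

```python
import math

def gsd_from_ratio(ratio):
    """Geometric SD of a lognormal whose 97.5th/2.5th percentile ratio is `ratio`.

    The central 95% of a lognormal spans a factor exp(2 * 1.96 * sigma),
    so sigma = ln(ratio) / (2 * 1.96) and GSD = exp(sigma).
    """
    sigma = math.log(ratio) / (2 * 1.96)
    return math.exp(sigma)

# Quoted internal-dose ranges: factor 20-30 for single-source villages,
# over two orders of magnitude for mixed-source villages.
print(gsd_from_ratio(20.0))   # ~2.15
print(gsd_from_ratio(100.0))  # ~3.24
```

This conversion is useful for comparing the quoted range factors against uncertainty reported elsewhere as a GSD.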

  10. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    Science.gov (United States)

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making are not a simple one-way sequence but a complex iterative cognitive process. However, the underlying functional mechanisms are still unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors, and that an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum-likelihood value. Consequently, optimization of the parameter estimate and optimization of the control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. The hypothesis that not only the action but also perception itself is biased by the above deviation of the parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and action planning under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.

  11. Uncertainty & sensitivity analysis of economic assessment of lactic acid production from crude glycerol - impact of price correlations

    DEFF Research Database (Denmark)

    Loureiro da Costa Lira Gargalo, Carina; Carvalho, Ana; Gernaey, Krist V.

    2017-01-01

    In this work, we investigate the impact of the expected price volatility and correlations on the overall economic assessment of lactic acid production from crude glycerol. In particular, the goals of this study are three-fold: (i) to understand how uncertainty in the inputs propagates to the model outputs; (ii) to understand the effect of the degree of pairwise correlation between input uncertainties on each other and on the outputs from the economic model (Net Present Value); and lastly, (iii) to estimate the first-order as well as independent sensitivity indices so as to identify which of the input uncertainties in the economic analysis affect the estimated NPV the most. To this end, we implemented algorithms in Matlab (R2015a) based on Monte Carlo sampling with permutation, using Latin Hypercube Sampling with Iman-Conover correlation control (Sin et al., 2009). The results have shown …
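The Latin Hypercube Sampling with Iman-Conover correlation control mentioned above can be sketched as follows. This is a minimal Python rendering of the idea, not the authors' Matlab implementation; the reordering step induces a target rank correlation without disturbing the stratified marginals:

```python
import numpy as np

def lhs(n, d, rng):
    """Latin Hypercube Sample: n points in d dimensions on (0, 1)."""
    cells = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (cells + rng.random((n, d))) / n

def induce_rank_correlation(u, target_corr, rng):
    """Iman-Conover-style reordering: rearrange each column of u so the
    sample acquires (approximately) the target rank correlation without
    changing any marginal distribution."""
    n, d = u.shape
    # Normal scores carrying the desired correlation structure.
    z = rng.standard_normal((n, d)) @ np.linalg.cholesky(target_corr).T
    out = np.empty_like(u)
    for j in range(d):
        ranks = z[:, j].argsort().argsort()   # rank of each score
        out[:, j] = np.sort(u[:, j])[ranks]   # value with matching rank
    return out

rng = np.random.default_rng(0)
u = lhs(2000, 2, rng)                          # e.g. two correlated price inputs
target = np.array([[1.0, 0.8], [0.8, 1.0]])
uc = induce_rank_correlation(u, target, rng)

ranks = uc.argsort(axis=0).argsort(axis=0)
rank_corr = np.corrcoef(ranks[:, 0], ranks[:, 1])[0, 1]
print(rank_corr)  # close to the target 0.8
```

Mapping the correlated uniforms `uc` through the inverse CDFs of the price distributions would then feed the NPV model, as in the study's setup.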

  12. Uncertainty in Measurement: A Review of Monte Carlo Simulation Using Microsoft Excel for the Calculation of Uncertainties Through Functional Relationships, Including Uncertainties in Empirically Derived Constants

    Science.gov (United States)

    Farrance, Ian; Frenkel, Robert

    2014-01-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship.
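The spreadsheet-style propagation described above is equally concise in, say, Python. A sketch with an invented linear relationship and invented uncertainties for the empirically derived 'constants' (not an example from the article):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000  # one row per Monte Carlo trial, like rows in a spreadsheet

# Functional relationship y = (x - b) / m, where slope m and intercept b are
# empirically derived "constants" carrying their own uncertainties.
x = rng.normal(10.0, 0.2, N)    # measured input quantity
m = rng.normal(2.0, 0.02, N)    # empirical constant with uncertainty
b = rng.normal(1.0, 0.05, N)    # empirical constant with uncertainty

y = (x - b) / m                  # propagate every trial through the relationship
print(f"y = {y.mean():.4f} +/- {y.std(ddof=1):.4f}")
```

The standard deviation of `y` reproduces the GUM first-order result, here sqrt((0.2/2)² + (0.05/2)² + (2.25·0.02)²) ≈ 0.113, without writing any partial derivatives.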

  14. Use of Atmospheric Budget to Reduce Uncertainty in Estimated Water Availability over South Asia from Different Reanalyses

    Science.gov (United States)

    Sebastian, Dawn Emil; Pathak, Amey; Ghosh, Subimal

    2016-07-01

    Disagreements across different reanalyses over South Asia result in uncertainty in the assessment of water availability, which is computed as the difference between precipitation and evapotranspiration (P-E). Here, we compute P-E directly from the atmospheric budget with divergence of moisture flux for different reanalyses and find improved correlation with observed values of P-E acquired from station and satellite data. We also find reduced closure terms for the water cycle computed with the atmospheric budget, analysed over the South Asian landmass, when compared to that obtained with individual values of P and E. The P-E value derived with the atmospheric budget is more consistent with the energy budget when we use top-of-atmosphere radiation for the latter. For analysing the water cycle, we use runoff from the Global Land Data Assimilation System and water storage from the Gravity Recovery and Climate Experiment. We find improvements in agreement across different reanalyses, in terms of inter-annual cross-correlation, when the atmospheric budget is used to estimate P-E, and we therefore recommend its use for estimating water availability in South Asia to reduce uncertainty. Our results on water availability with reduced uncertainty over highly populated, monsoon-driven South Asia will be useful for water management and agricultural decision making.
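The atmospheric-budget identity behind this record, P − E = −∇·Q − ∂W/∂t (Q the vertically integrated moisture flux, W the precipitable water), can be exercised on a toy grid. Fields and numbers below are invented for illustration, not reanalysis data:

```python
import numpy as np

# Toy moisture budget on a uniform grid: P - E = -div(Q) - dW/dt.
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")

Qx, Qy = 3.0 * X, 1.0 * Y              # linear flux: divergence = 3 + 1 = 4
dQx_dx = np.gradient(Qx, x, axis=0)    # finite-difference partial derivatives
dQy_dy = np.gradient(Qy, y, axis=1)
div_Q = dQx_dx + dQy_dy

dW_dt = np.full_like(div_Q, -1.0)      # invented storage tendency
p_minus_e = -div_Q - dW_dt             # -4 + 1 = -3 everywhere
print(p_minus_e.mean())
```

Because the flux is linear, `np.gradient` recovers the divergence exactly; with real reanalysis fields the discretization of the divergence is itself a source of closure error.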

  16. Estimating the uncertainty from sampling in pollution crime investigation: The importance of metrology in the forensic interpretation of environmental data.

    Science.gov (United States)

    Barazzetti Barbieri, Cristina; de Souza Sarkis, Jorge Eduardo

    2018-07-01

    The forensic interpretation of environmental analytical data is usually challenging due to the high geospatial variability of these data. The measurement uncertainty includes contributions from the sampling and from the sample handling and preparation processes. These contributions are often disregarded in the quality assurance of analytical results. A pollution crime investigation case was used to develop a methodology able to address these uncertainties in two different environmental compartments, freshwater sediments and landfill leachate. The methodology used to estimate the uncertainty was the duplicate method (which replicates predefined steps of the measurement procedure in order to assess its precision), and the parameters used to investigate the pollution were metals (Cr, Cu, Ni, and Zn) in the leachate (the suspected source) and in the sediment (the possible sink). The metal analysis results were compared to statutory limits, and it was demonstrated that Cr and Ni concentrations in sediment samples exceeded the threshold levels at all sites downstream of the pollution sources, considering the expanded uncertainty U of the measurements and a probability of contamination >0.975 at most sites. Cu and Zn concentrations were above the statutory limits at two sites, but the classification was inconclusive considering the uncertainties of the measurements. Metal analyses in leachate revealed that Cr concentrations were above the statutory limits with a probability of contamination >0.975 in all leachate ponds, while the Cu, Ni and Zn probability of contamination was below 0.025. The results demonstrated that the estimation of the sampling uncertainty, which was the dominant component of the combined uncertainty, is required for a comprehensive interpretation of environmental analysis results, particularly in forensic cases. Copyright © 2018 Elsevier B.V. All rights reserved.
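The duplicate method referenced above separates sampling and analytical variance by taking two samples per site and analysing each in duplicate. A sketch with synthetic data and a classical (non-robust) ANOVA; the study's actual data and any robust variant are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, true_conc, s_samp, s_anal = 200, 50.0, 2.0, 0.5  # invented truth

# Balanced duplicate design: 2 samples per site, 2 analyses per sample.
samples = true_conc + rng.normal(0.0, s_samp, (n_sites, 2))                    # sampling error
analyses = samples[..., None] + rng.normal(0.0, s_anal, (n_sites, 2, 2))       # analytical error

# Classical ANOVA estimates from the duplicate design.
var_anal = np.mean((analyses[..., 0] - analyses[..., 1]) ** 2) / 2.0
sample_means = analyses.mean(axis=2)
var_between = np.mean((sample_means[:, 0] - sample_means[:, 1]) ** 2) / 2.0
var_samp = var_between - var_anal / 2.0    # remove the analytical share

u_meas = np.sqrt(var_samp + var_anal)      # combined standard uncertainty
U = 2.0 * u_meas                           # expanded uncertainty, k = 2
print(np.sqrt(var_samp), np.sqrt(var_anal), U)
```

With these invented inputs the sampling component dominates, mirroring the paper's finding that sampling, not analysis, drives the combined uncertainty.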

  17. A real-time assessment of measurement uncertainty in the experimental characterization of sprays

    International Nuclear Information System (INIS)

    Panão, M R O; Moreira, A L N

    2008-01-01

    This work addresses the estimation of the measurement uncertainty of discrete probability distributions used in the characterization of sprays. A real-time assessment of this measurement uncertainty is further investigated, particularly concerning the informative quality of the measured distribution and the influence of acquiring additional information on the knowledge retrieved from statistical analysis. The informative quality is associated with the entropy concept as understood in information theory (Shannon entropy), normalized by the entropy of the most informative experiment. A new empirical correlation is derived between the error accuracy of a discrete cumulative probability distribution and the normalized Shannon entropy. The results include case studies using: (i) spray impingement measurements to study the applicability of the real-time assessment of measurement uncertainty, and (ii) the simulation of discrete probability distributions of unknown shape or function to test the applicability of the new correlation
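The normalized Shannon entropy used in this record as an informative-quality index can be computed directly; here we take the uniform distribution over the same bins as the normalizing reference (our reading of "the most informative experiment"):

```python
import numpy as np

def normalized_entropy(p):
    """Shannon entropy of a discrete distribution, normalized by the entropy
    of the uniform distribution over the same number of bins (range 0..1)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()          # accept raw counts as well as probabilities
    nz = p[p > 0]            # 0 * log(0) is taken as 0
    return float(-(nz * np.log(nz)).sum() / np.log(p.size))

print(normalized_entropy([1, 1, 1, 1]))  # 1.0: uniform, maximally spread
print(normalized_entropy([1, 0, 0, 0]))  # 0.0: degenerate, no spread
print(normalized_entropy([4, 2, 1, 1]))  # strictly between 0 and 1
```

A measured droplet-size histogram plugged into `normalized_entropy` would give the kind of scalar that the paper correlates with the error accuracy of the cumulative distribution.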

  18. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency-gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency-gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency-gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed-dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
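The "shortest 95% confidence interval from a bootstrap distribution" construction used above is easy to sketch. Data, the variance-ratio stand-in for the efficiency gain, and all numbers below are invented; the paper's scored dose differences are not reproduced:

```python
import numpy as np

def shortest_interval(samples, level=0.95):
    """Shortest interval containing `level` of the sampled values: slide a
    window of the required size over the sorted sample and keep the narrowest."""
    s = np.sort(np.asarray(samples, dtype=float))
    k = int(np.ceil(level * s.size))          # points the window must cover
    widths = s[k - 1:] - s[: s.size - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

# Bootstrap a variance-ratio "gain" between two toy score samples.
rng = np.random.default_rng(11)
a = rng.normal(0.0, 1.0, 500)     # conventional-sampling scores (invented)
b = rng.normal(0.0, 0.5, 500)     # correlated-sampling scores (invented)
gains = []
for _ in range(2000):
    ia = rng.integers(0, a.size, a.size)      # resample with replacement
    ib = rng.integers(0, b.size, b.size)
    gains.append(a[ia].var(ddof=1) / b[ib].var(ddof=1))
lo, hi = shortest_interval(gains)
print(lo, hi)
```

For a right-skewed ratio distribution the shortest interval sits lower than the equal-tailed percentile interval, which is why the paper prefers it.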

  19. Characterizing Epistemic Uncertainty for Launch Vehicle Designs

    Science.gov (United States)

    Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad

    2016-01-01

    NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss-of-mission and loss-of-crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design would be greater than that of a design using well-understood heritage equipment. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counterintuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses, used to identify components that are significant contributors to uncertainty, are rendered obsolete since changes in sensitivity to uncertainty are not reflected in the propagation of uncertainty using Monte Carlo methods. This paper provides a basis for the uncertainty underestimation in complex systems and especially, due to nuances of launch vehicle logic, in launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.

  20. THE SPECTRUM OF Fe II

    Energy Technology Data Exchange (ETDEWEB)

    Nave, Gillian [National Institute of Standards and Technology, Gaithersburg, MD 20899-8422 (United States); Johansson, Sveneric, E-mail: gillian.nave@nist.gov [Lund Observatory, University of Lund, Box 43, SE-22100 Lund (Sweden)

    2013-01-15

    The spectrum of singly ionized iron (Fe II) has been recorded using high-resolution Fourier transform (FT) and grating spectroscopy over the wavelength range 900 Å to 5.5 µm. The spectra were observed in high-current continuous and pulsed hollow cathode discharges using FT spectrometers at the Kitt Peak National Observatory, Tucson, AZ, and Imperial College, London, and with the 10.7 m Normal Incidence Spectrograph at the National Institute of Standards and Technology. Roughly 12,900 lines were classified using 1027 energy levels of Fe II that were optimized to measured wavenumbers. The wavenumber uncertainties of lines in the FT spectra range from 10⁻⁴ cm⁻¹ for strong lines around 4 µm to 0.05 cm⁻¹ for weaker lines around 1500 Å. The wavelength uncertainty of lines in the grating spectra is 0.005 Å. The ionization energy of (130,655.4 ± 0.4) cm⁻¹ was estimated from the 3d⁶(⁵D)5g and 3d⁶(⁵D)6h levels.

  1. Reliability analysis under epistemic uncertainty

    International Nuclear Information System (INIS)

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
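The reliability-estimation target of this record, the probability that a limit state g falls below zero, can be shown with a plain single-loop Monte Carlo run on a toy limit state (the paper's auxiliary-variable and Gaussian-process machinery is not reproduced; R, S and their parameters are invented):

```python
import numpy as np
from math import erf, sqrt

# Toy limit state g = R - S: normal capacity R versus normal demand S.
rng = np.random.default_rng(5)
N = 1_000_000
R = rng.normal(5.0, 1.0, N)
S = rng.normal(3.0, 1.0, N)
pf_mc = np.mean(R - S < 0.0)     # Monte Carlo failure probability

# Analytic check: beta = (5 - 3) / sqrt(1^2 + 1^2), pf = Phi(-beta).
beta = 2.0 / sqrt(2.0)
pf_exact = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))
print(pf_mc, pf_exact)  # both about 0.0786
```

Under epistemic uncertainty the distribution parameters of R and S would themselves be sampled in the auxiliary-variable loop, which is precisely what the paper's single-level representation makes efficient.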

  2. Summary of uncertainty estimation results for Hanford tank chemical and radionuclide inventories

    International Nuclear Information System (INIS)

    Ferryman, T.A.; Amidan, B.G.; Chen, G.

    1998-09-01

    The exact physical and chemical nature of 55 million gallons of radioactive waste held in 177 underground waste tanks at the Hanford Site is not known in sufficient detail to support safety, retrieval, and immobilization missions. The Hanford Engineering Analysis Best-Basis team has made point estimates of the inventories in each tank. The purpose of this study is to estimate probability distributions for each of the analytes and tanks for which the Hanford Best-Basis team has made point estimates. Uncertainty intervals can then be calculated for the Best-Basis inventories and should facilitate the cleanup missions. The methodology used to generate the results published in the Tank Characterization Database (TCD) and summarized in this paper is based on scientific principles, sound technical knowledge of the realities associated with the Hanford waste tanks, the chemical analysis of actual samples from the tanks, the Hanford Best-Basis research, and historical data records. The methodology builds on research conducted by Pacific Northwest National Laboratory (PNNL) over the last few years. Appendix A of this report summarizes the results of the study. The full set of results (in percentiles, 1-99) is available through the TCD (http://twins.pnl.gov:8001).

  4. Statistical characterization of roughness uncertainty and impact on wind resource estimation

    DEFF Research Database (Denmark)

    Kelly, Mark C.; Ejsing Jørgensen, Hans

    2017-01-01

    In this work we relate uncertainty in background roughness length (z0) to uncertainty in wind speeds, where the latter are predicted at a wind farm location based on wind statistics observed at a different site. Sensitivity of predicted winds to roughness is derived analytically for the industry-standard European Wind Atlas method, which is based on the geostrophic drag law. We statistically consider roughness and its corresponding uncertainty, in terms of both z0 derived from measured wind speeds as well as that chosen in practice by wind engineers. We show the combined effect of roughness uncertainty … between mean wind speed and AEP. Following our developments, we provide guidance on approximate roughness uncertainty magnitudes to be expected in industry practice, and we also find that sites with larger background roughness incur relatively larger uncertainties.

  5. Uncertainty Estimate of Surface Irradiances Computed with MODIS-, CALIPSO-, and CloudSat-Derived Cloud and Aerosol Properties

    Science.gov (United States)

    Kato, Seiji; Loeb, Norman G.; Rutan, David A.; Rose, Fred G.; Sun-Mack, Sunny; Miller, Walter F.; Chen, Yan

    2012-07-01

    Differences of modeled surface upward and downward longwave and shortwave irradiances are calculated using modeled irradiance computed with active-sensor-derived and passive-sensor-derived cloud and aerosol properties. The irradiance differences are calculated for various temporal and spatial scales: monthly gridded, monthly zonal, monthly global, and annual global. Using the irradiance differences, the uncertainty of surface irradiances is estimated. The uncertainty (1σ) of the annual global surface downward longwave and shortwave is, respectively, 7 W m⁻² (out of 345 W m⁻²) and 4 W m⁻² (out of 192 W m⁻²), after known bias errors are removed. Similarly, the uncertainty of the annual global surface upward longwave and shortwave is, respectively, 3 W m⁻² (out of 398 W m⁻²) and 3 W m⁻² (out of 23 W m⁻²). The uncertainty is for modeled irradiances computed using cloud properties derived from imagers on a sun-synchronous orbit that covers the globe every day (e.g. the Moderate Resolution Imaging Spectroradiometer, MODIS) or modeled irradiances computed for nadir-view-only active sensors on a sun-synchronous orbit, such as the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) and CloudSat. If we assume that longwave and shortwave uncertainties are independent of each other, but up- and downward components are correlated with each other, the uncertainty in the global annual mean net surface irradiance is 12 W m⁻². One-sigma uncertainty bounds of the satellite-based net surface irradiance are 106 W m⁻² and 130 W m⁻².
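The quoted 12 W m⁻² is consistent with one reading of the stated assumptions: up- and downward components correlated (so they add linearly within each band) and longwave/shortwave independent (so the band totals combine in quadrature). Our reconstruction, not the authors' exact calculation:

```python
from math import hypot

# 1-sigma uncertainties quoted in the abstract, in W m^-2.
u_lw_down, u_lw_up = 7.0, 3.0
u_sw_down, u_sw_up = 4.0, 3.0

# Correlated up/down components add linearly within each band...
u_lw = u_lw_down + u_lw_up        # 10 W m^-2
u_sw = u_sw_down + u_sw_up        # 7 W m^-2
# ...while the independent LW and SW totals combine in quadrature.
u_net = hypot(u_lw, u_sw)         # sqrt(10**2 + 7**2) ~= 12.2 W m^-2
print(round(u_net))  # 12
```

The same ±12 W m⁻² half-width also reproduces the quoted one-sigma bounds of 106 and 130 W m⁻² about a net irradiance near 118 W m⁻².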

  6. Uncertainties in estimating the ⁹⁰Sr and actinides inventory in SFR 1; Osäkerheter vid uppskattning av Sr-90 och aktinidinventariet i SFR 1

    Energy Technology Data Exchange (ETDEWEB)

    Ingemansson, Tor [ALARA Engineering AB, Skultuna (Sweden)

    2000-04-01

    SFR-1 is a facility for disposal of low- and intermediate-level radioactive waste. The uncertainty in the estimation of the activity accumulated in different cleaning filters, originating in the Swedish BWR and PWR reactors and CLAB (the central interim storage facility for spent nuclear fuel), has been estimated at 10-14%, depending on the methods used for measuring the activity at the power plants. Other waste or scrap contributes approximately 1.5% of the total amount of actinides and ⁹⁰Sr. The uncertainty in this fraction is about 20%. The uncertainties are surprisingly small, and explain the good agreement between estimates made with different methods.

  7. Uncertainties in the Norwegian greenhouse gas emission inventory

    Energy Technology Data Exchange (ETDEWEB)

    Flugsrud, Ketil; Hoem, Britta

    2011-11-15

    The national greenhouse gas (GHG) emission inventory is compiled from estimates based on emission factors and activity data and from direct measurements by plants. All these data and parameters will contribute to the overall inventory uncertainty. The uncertainties and probability distributions of the inventory input parameters have been assessed based on available data and expert judgements.Finally, the level and trend uncertainties of the national GHG emission inventory have been estimated using Monte Carlo simulation. The methods used in the analysis correspond to an IPCC tier 2 method, as described in the IPCC Good Practice Guidance (IPCC 2000) (IPCC 2000). Analyses have been made both excluding and including the sector LULUCF (land use, land-use change and forestry). The uncertainty analysis performed in 2011 is an update of the uncertainty analyses performed for the greenhouse gas inventory in 2006 and 2000. During the project we have been in contact with experts, and have collected information about uncertainty from them. Main focus has been on the source categories where changes have occured since the last uncertainty analysis was performed in 2006. This includes new methodology for several source categories (for example for solvents and road traffic) as well as revised uncertainty estimates. For the installations included in the emission trading system, new information from the annual ETS reports about uncertainty in activity data and CO2 emission factor (and N2O emission factor for nitric acid production) has been used. This has improved the quality of the uncertainty estimates for the energy and manufacturing sectors. The results show that the uncertainty level in the total calculated greenhouse gas emissions for 2009 is around 4 per cent. When including the LULUCF sector, the total uncertainty is around 17 per cent in 2009. The uncertainty estimate is lower now than previous analyses have shown. 
This is partly due to considerable work done to improve
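The tier 2 (Monte Carlo) procedure this record describes can be sketched as follows. The source categories and their uncertainty parameters below are invented for illustration; a real inventory combines hundreds of categories with correlations between years for the trend uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo sample size

# Hypothetical source categories: (best-estimate emission in Mt CO2-eq,
# relative std. dev. of activity data, relative std. dev. of emission factor).
categories = {
    "road_traffic": (10.0, 0.05, 0.10),
    "agriculture":  (5.0,  0.10, 0.30),
    "industry":     (8.0,  0.02, 0.05),
}

total = np.zeros(N)
for best, sd_ad, sd_ef in categories.values():
    ad = rng.normal(1.0, sd_ad, N)      # activity-data multiplier (symmetric)
    ef = rng.lognormal(0.0, sd_ef, N)   # emission-factor multiplier (skewed)
    total += best * ad * ef

lo, hi = np.percentile(total, [2.5, 97.5])
mean = total.mean()
half_width_pct = 100 * (hi - lo) / (2 * mean)
print(f"mean = {mean:.1f}, 95% CI = [{lo:.1f}, {hi:.1f}], "
      f"approx. ±{half_width_pct:.1f}%")
```

The reported "around 4 per cent" level uncertainty corresponds to a half-width of this kind of simulated confidence interval, relative to the total.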

  8. Stream temperature estimated in situ from thermal-infrared images: best estimate and uncertainty

    International Nuclear Information System (INIS)

    Iezzi, F; Todisco, M T

    2015-01-01

The paper presents a technique to estimate stream temperature in situ from thermal-infrared images, examining both its best estimate and its uncertainty. Stream temperature is an important indicator of water quality, and its assessment is nowadays particularly important for monitoring thermal pollution in water bodies. Stream temperature changes are chiefly due to anthropogenic heat input from urban wastewater and from water used as a coolant by power plants and industrial manufacturers. Assessing stream temperature with ordinary techniques (e.g. suitable thermometers) is limited by sparse spatial sampling, since the measurements are necessarily point-wise. The latest and most advanced techniques assess stream temperature using thermal-infrared remote sensing, with imagers usually mounted on aircraft, or using satellite images. These techniques measure only the surface water temperature and are suited to vast water bodies, but they do not allow a detailed and precise surface water temperature assessment over limited areas of the water body. The technique shown in this research is based on thermal-infrared images obtained in situ with a portable thermal imager. As in all thermographic techniques, only the surface water temperature can be estimated. A stream receiving a discharge of urban wastewater is proposed as a case study to validate the technique and to show its application limits. Since the technique analyzes limited areas of the water body, it allows a detailed and precise assessment of the water temperature. In general, punctual (single-pixel) stream temperatures are uncorrected, whereas the spatially averaged stream temperature is corrected. An appropriate statistical method that minimizes the errors in the average stream temperature is proposed. The correct measurement of this temperature through the assessment of thermal-infrared images obtained in situ via portable
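Why the spatial average over a region of interest behaves better than any single pixel can be sketched with a synthetic thermal image. The temperature, noise level, and region below are invented; the record's actual statistical method is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 100x100 thermal image of a water surface: a true temperature
# of 18.0 degC plus zero-mean per-pixel sensor noise (std. dev. 0.5 degC).
true_t, noise_sd = 18.0, 0.5
image = true_t + rng.normal(0.0, noise_sd, size=(100, 100))

roi = image[40:60, 40:60]  # region of interest lying on the stream surface
single_pixel_error = abs(image[50, 50] - true_t)
roi_mean_error = abs(roi.mean() - true_t)

# Averaging n independent pixels shrinks the random error roughly by sqrt(n):
# here 400 pixels, so the mean's std. dev. is about 0.5 / 20 = 0.025 degC.
print(f"single pixel off by {single_pixel_error:.3f} degC, "
      f"ROI mean off by {roi_mean_error:.3f} degC")
```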

  9. Safety aspects of advanced fuels irradiations in EBR-II

    International Nuclear Information System (INIS)

    Lehto, W.K.

    1975-09-01

Basic safety questions such as MFCI, loss-of-Na bond, pin behavior during design basis transients, and failure propagation were evaluated as they pertain to advanced fuels in EBR-II. With the exception of pin response to the unlikely loss-of-flow transient, the study indicates that irradiation of significant numbers of advanced fueled subassemblies in EBR-II should pose no safety problems. The analysis predicts, however, that Na boiling may occur during the postulated design basis unlikely loss-of-flow transient in subassemblies containing He-bonded fuel pins with the larger fuel-clad gaps. The calculations indicate that coolant temperatures at top of core in the limiting S/A's, containing the He bonded pins, would reach approximately 1480 °F during the transient without application of uncertainty factors. Inclusion of uncertainties could result in temperature predictions which approach coolant boiling temperatures (1640 °F). Further analysis of He-bonded pins is being done in this potential problem area, e.g., to apply best estimates of uncertainty factors and to determine the sensitivity of the preliminary results to gap conductance

  10. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar

    Science.gov (United States)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.

    2013-09-01

This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian as well as a newly proposed Lagrangian implementation. For the latter, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise-linear VPR is estimated for each rainfall type (stratiform, and neither stratiform nor convective). As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform, and neither stratiform nor convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analyses of the impact of VPR uncertainty show that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.
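A VPR correction of the kind described can be sketched in miniature: given a (normalized, piecewise-linear) profile of reflectivity relative to ground level, a measurement taken where the beam samples aloft is referred back to the surface. The profile values and beam height below are invented; the paper's procedure estimates the profile from the volumetric data rather than assuming it.

```python
import numpy as np

# Hypothetical normalized vertical profile of reflectivity (dB relative to
# ground level): constant up to the bright band, then decreasing linearly.
heights_km = np.array([0.0, 1.5, 2.0, 6.0])
vpr_db     = np.array([0.0, 0.0, -3.0, -18.0])

def vpr_correction_db(beam_height_km):
    """dB to ADD to a measurement at this height to refer it to ground level."""
    return -np.interp(beam_height_km, heights_km, vpr_db)

# A pixel observed at 4 km altitude with an apparent reflectivity of 25 dBZ:
z_apparent = 25.0
z_corrected = z_apparent + vpr_correction_db(4.0)
print(f"ground-level estimate: {z_corrected:.1f} dBZ")
```

At long range the beam is high in the profile, so the correction (and its sensitivity to VPR uncertainty) grows, which is consistent with the range-dependent behaviour the abstract reports.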

  11. A two-step combination of top-down and bottom-up fire emission estimates at regional and global scales: strengths and main uncertainties

    Science.gov (United States)

    Sofiev, Mikhail; Soares, Joana; Kouznetsov, Rostislav; Vira, Julius; Prank, Marje

    2016-04-01

Top-down emission estimation via inverse dispersion modelling is used for various problems where bottom-up approaches are difficult or highly uncertain. One such area is the estimation of emissions from wild-land fires. In combination with dispersion modelling, satellite and/or in-situ observations can, in principle, be used to efficiently constrain the emission values. This is the main strength of the approach: the a-priori values of the emission factors (based on laboratory studies) are refined for real-life situations using the inverse-modelling technique. However, the approach also has major uncertainties, which are illustrated here with a few examples from the Integrated System for wild-land Fires (IS4FIRES). IS4FIRES generates the smoke emission and injection profile from MODIS and SEVIRI active-fire radiative energy observations. The emission calculation includes two steps: (i) an initial top-down calibration of emission factors via inverse dispersion problem solution, made once using a training dataset from the past, and (ii) application of the obtained emission coefficients to individual-fire radiative energy observations, thus leading to a bottom-up emission compilation. For such a procedure, the major classes of uncertainties include: (i) imperfect information on fires, (ii) simplifications in the fire description, (iii) inaccuracies in the smoke observations and modelling, and (iv) inaccuracies of the inverse problem solution. Using examples of the fire seasons of 2010 in Russia, 2012 in Eurasia, 2007 in Australia, etc., it is pointed out that a top-down system calibration performed for a limited number of comparatively moderate cases (often the best-observed ones) may lead to errors when applied to extreme events. For instance, the total emission of the 2010 Russian fires is likely to be over-estimated by up to 50% if the calibration is based on the season 2006 and the fire description is simplified.
Longer calibration period and more sophisticated parameterization
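Step (i), the top-down calibration, can be illustrated in one dimension: if the dispersion model is linear in the emission, the inverse problem for a single emission-factor multiplier reduces to a least-squares fit of modelled against observed smoke loads. All numbers below are invented, and the real IS4FIRES calibration works per land-cover class with a full dispersion model.

```python
import numpy as np

# Hypothetical training set: observed smoke load at receptor sites, and the
# load modelled with the a-priori (laboratory-based) emission factor.
observed = np.array([1.2, 0.8, 2.5, 1.9, 0.6])
modelled_apriori = np.array([0.9, 0.7, 1.8, 1.6, 0.5])

# Because the model is linear in the emission, the least-squares solution
# (no intercept) for the emission-factor multiplier is a single ratio:
scale = (modelled_apriori @ observed) / (modelled_apriori @ modelled_apriori)
print(f"calibrated emission-factor multiplier: {scale:.3f}")

# Caveat illustrated by the abstract: if the training set contains only
# moderate fires, applying `scale` to extreme events extrapolates far
# outside the calibration range and can bias totals by tens of per cent.
```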

  12. Section summary: Uncertainty and design considerations

    Science.gov (United States)

    Stephen Hagen

    2013-01-01

    Well planned sampling designs and robust approaches to estimating uncertainty are critical components of forest monitoring. The importance of uncertainty estimation increases as deforestation and degradation issues become more closely tied to financing incentives for reducing greenhouse gas emissions in the forest sector. Investors like to know risk and risk is tightly...

  13. Carbon dioxide and methane measurements from the Los Angeles Megacity Carbon Project – Part 1: calibration, urban enhancements, and uncertainty estimates

    Directory of Open Access Journals (Sweden)

    K. R. Verhulst

    2017-07-01

Full Text Available We report continuous surface observations of carbon dioxide (CO2) and methane (CH4) from the Los Angeles (LA) Megacity Carbon Project during 2015. We devised a calibration strategy, methods for selection of background air masses, calculation of urban enhancements, and a detailed algorithm for estimating uncertainties in urban-scale CO2 and CH4 measurements. These methods are essential for understanding carbon fluxes from the LA megacity and other complex urban environments globally. We estimate background mole fractions entering LA using observations from four extra-urban sites, including two marine sites located south of LA in La Jolla (LJO) and offshore on San Clemente Island (SCI), one continental site located in Victorville (VIC), in the high desert northeast of LA, and one continental/mid-troposphere site located on Mount Wilson (MWO) in the San Gabriel Mountains. We find that a local marine background can be established to within ∼1 ppm CO2 and ∼10 ppb CH4 using these local measurement sites. Overall, atmospheric carbon dioxide and methane levels are highly variable across Los Angeles. Urban and suburban sites show moderate to large CO2 and CH4 enhancements relative to a marine background estimate. The USC (University of Southern California) site near downtown LA exhibits median hourly enhancements of ∼20 ppm CO2 and ∼150 ppb CH4 during 2015, as well as ∼15 ppm CO2 and ∼80 ppb CH4 during mid-afternoon hours (12:00–16:00 local time, LT), which is the typical period of focus for flux inversions. The estimated measurement uncertainty is typically better than 0.1 ppm CO2 and 1 ppb CH4 based on the repeated standard gas measurements from the LA sites during the last 2 years, similar to Andrews et al. (2014). The largest component of the measurement uncertainty is due to the single-point calibration method; however, the uncertainty in the background mole fraction is much
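The enhancement calculation described above amounts to subtracting a background estimate from the urban observation, hour by hour. A minimal sketch (mole fractions invented, site roles as in the record):

```python
import numpy as np

# Hypothetical hourly afternoon CO2 mole fractions (ppm) for one day.
urban_site = np.array([428.0, 431.5, 425.2, 434.8, 429.9])  # e.g. the USC site
marine_bg  = np.array([408.1, 408.4, 407.9, 408.3, 408.2])  # e.g. SCI/LJO

# Enhancement = urban observation minus the background estimate; the median
# is robust to occasional local plumes or instrument spikes.
enhancement = urban_site - marine_bg
print(f"median CO2 enhancement: {np.median(enhancement):.1f} ppm")
```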

  14. A state-space modeling approach to estimating canopy conductance and associated uncertainties from sap flux density data

    Science.gov (United States)

    David M. Bell; Eric J. Ward; A. Christopher Oishi; Ram Oren; Paul G. Flikkema; James S. Clark; David Whitehead

    2015-01-01

    Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as...

  15. Uncertainty theory

    CERN Document Server

    Liu, Baoding

    2015-01-01

    When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...

  16. A non-linear and stochastic response surface method for Bayesian estimation of uncertainty in soil moisture simulation from a land surface model

    Directory of Open Access Journals (Sweden)

    F. Hossain

    2004-01-01

Full Text Available This study presents a simple and efficient scheme for Bayesian estimation of uncertainty in soil moisture simulation by a Land Surface Model (LSM). The scheme is assessed within a Monte Carlo (MC) simulation framework based on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. A primary limitation of using the GLUE method is the prohibitive computational burden imposed by uniform random sampling of the model's parameter distributions. Sampling is improved in the proposed scheme by stochastic modeling of the parameters' response surface that recognizes the non-linear deterministic behavior between soil moisture and land surface parameters. Uncertainty in soil moisture simulation (model output) is approximated through a Hermite polynomial chaos expansion of normal random variables that represent the model's parameter (model input) uncertainty. The unknown coefficients of the polynomial are calculated using a limited number of model simulation runs. The calibrated polynomial is then used as a fast-running proxy to the slower-running LSM to predict the degree of representativeness of a randomly sampled model parameter set. An evaluation of the scheme's efficiency in sampling is made through comparison with fully random MC sampling (the norm for GLUE) and the nearest-neighborhood sampling technique. The scheme was able to reduce the computational burden of random MC sampling for GLUE by 10%-70%. The scheme was also found to be about 10% more efficient than the nearest-neighborhood sampling method in predicting a sampled parameter set's degree of representativeness. The GLUE based on the proposed sampling scheme did not alter the essential features of the uncertainty structure in soil moisture simulation. The scheme can potentially make GLUE uncertainty estimation for any LSM more efficient as it does not impose any additional structural or distributional assumptions.
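The surrogate idea can be sketched in one dimension with a stand-in "slow model" (the real LSM has many parameters and no closed form): calibrate a probabilists' Hermite chaos expansion on a limited number of model runs, then use it as a cheap proxy to screen many candidate parameter sets before committing to full runs. The model function, sample sizes, and degree below are all illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import hermite_e as H

def slow_model(xi):
    """Stand-in for the land surface model: soil moisture as a nonlinear
    function of one standardized (standard-normal) parameter."""
    return 0.3 + 0.05 * xi + 0.02 * xi**2 - 0.004 * xi**3

rng = np.random.default_rng(1)

# Step 1: a limited number of training runs of the slow model.
xi_train = rng.standard_normal(20)
y_train = slow_model(xi_train)

# Step 2: calibrate a 3rd-order Hermite (probabilists') chaos expansion.
coeffs = H.hermefit(xi_train, y_train, deg=3)

# Step 3: the cheap surrogate screens many candidate parameter sets, so the
# GLUE behavioural test needs far fewer full model runs.
xi_candidates = rng.standard_normal(10_000)
y_surrogate = H.hermeval(xi_candidates, coeffs)
```

Because the stand-in model here is itself a cubic, the degree-3 expansion reproduces it essentially exactly; for a real LSM the surrogate is only approximate and its degree is a tuning choice.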

  17. Comparisons and Uncertainty in Fat and Adipose Tissue Estimation Techniques: The Northern Elephant Seal as a Case Study.

    Directory of Open Access Journals (Sweden)

    Lisa K Schwarz

Full Text Available Fat mass and body condition are important metrics in bioenergetics and physiological studies. They can also link foraging success with demographic rates, making them key components of models that predict population-level outcomes of environmental change. Therefore, it is important to incorporate uncertainty in physiological indicators if results will lead to species management decisions. Maternal fat mass in elephant seals (Mirounga spp.) can predict reproductive rate and pup survival, but no one has quantified or identified the sources of uncertainty for the two fat mass estimation techniques (labeled-water and truncated cones). The current cones method can provide estimates of proportion adipose tissue in adult females and proportion fat of juveniles in northern elephant seals (M. angustirostris) comparable to labeled-water methods, but it does not work for all cases or species. We reviewed components and assumptions of the technique via measurements of seven early-molt and seven late-molt adult females. We show that seals are elliptical on land, rather than the assumed circular shape, and skin may account for a high proportion of what is often defined as blubber. Also, blubber extends past the neck-to-pelvis region, and comparisons of new and old ultrasound instrumentation indicate previous measurements of sculp thickness may be biased low. Accounting for such differences, and incorporating new measurements of blubber density and proportion of fat in blubber, we propose a modified cones method that can isolate blubber from non-blubber adipose tissue and separate fat into skin, blubber, and core compartments. Lastly, we found that adipose tissue and fat estimates using tritiated water may be biased high during the early molt. Both the tritiated water and modified cones methods had high, but reducible, uncertainty. The improved cones method for estimating body condition allows for more accurate quantification of the various tissue masses and may
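The geometric core of a cones method, elliptical cross-sections joined by truncated cones, can be sketched as follows. The dimensions are invented, and the actual modified method adds the skin/blubber/core partitioning and density corrections described above.

```python
import math

def ellipse_area(width, depth):
    """Cross-sectional area, treating the seal as elliptical rather than
    circular (width and depth are the two full axes, in metres)."""
    return math.pi * (width / 2) * (depth / 2)

def frustum_volume(a1, a2, length):
    """Volume of a truncated cone (frustum) between two cross-sections."""
    return length / 3 * (a1 + a2 + math.sqrt(a1 * a2))

# Hypothetical measurements at two adjacent girth stations 0.40 m apart:
a_front = ellipse_area(1.10, 0.90)
a_back  = ellipse_area(0.95, 0.80)
segment = frustum_volume(a_front, a_back, 0.40)
print(f"segment volume: {segment:.4f} m^3")
```

Summing such segments along the body, then multiplying by tissue densities, yields the compartment masses; assuming circular sections (area from girth alone) would overestimate the area of a flattened animal.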

  18. Neutron flux uncertainty and covariances for spectrum adjustment and estimation of WWER-1000 pressure vessel fluences

    International Nuclear Information System (INIS)

    Boehmer, Bertram

    2000-01-01

Results of the estimation of the covariance matrix of the neutron spectrum in the WWER-1000 reactor cavity and pressure vessel positions are presented. Two-dimensional calculations with the discrete ordinates transport code DORT in r-theta and r-z geometry were used to determine the neutron group spectrum covariances, including cross-correlations between the positions of interest. The new Russian ABBN-93 data set and the CONSYST code were used to supply all transport calculations with group neutron data. All possible sources of uncertainty, namely those caused by the neutron cross sections, fission sources, geometrical dimensions and material densities, were considered, whereas the uncertainty of the calculation method was considered negligible in view of the available precision of the Monte Carlo simulation used for more precise evaluation of the neutron fluence. (Authors)

  19. Practical Policy Applications of Uncertainty Analysis for National Greenhouse Gas Inventories

    International Nuclear Information System (INIS)

    Gillenwater, M.; Sussman, F.; Cohen, J.

    2007-01-01

    International policy makers and climate researchers use greenhouse gas emissions inventory estimates in a variety of ways. Because of the varied uses of the inventory data, as well as the high uncertainty surrounding some of the source category estimates, considerable effort has been devoted to understanding the causes and magnitude of uncertainty in national emissions inventories. In this paper, we focus on two aspects of the rationale for quantifying uncertainty: (1) the possible uses of the quantified uncertainty estimates for policy (e.g., as a means of adjusting inventories used to determine compliance with international commitments); and (2) the direct benefits of the process of investigating uncertainties in terms of improving inventory quality. We find that there are particular characteristics that an inventory uncertainty estimate should have if it is to be used for policy purposes: (1) it should be comparable across countries; (2) it should be relatively objective, or at least subject to review and verification; (3) it should not be subject to gaming by countries acting in their own self-interest; (4) it should be administratively feasible to estimate and use; (5) the quality of the uncertainty estimate should be high enough to warrant the additional compliance costs that its use in an adjustment factor may impose on countries; and (6) it should attempt to address all types of inventory uncertainty. Currently, inventory uncertainty estimates for national greenhouse gas inventories do not have these characteristics. For example, the information used to develop quantitative uncertainty estimates for national inventories is often based on expert judgments, which are, by definition, subjective rather than objective, and therefore difficult to review and compare. Further, the practical design of a potential factor to adjust inventory estimates using uncertainty estimates would require policy makers to (1) identify clear environmental goals; (2) define these

  20. Practical Policy Applications of Uncertainty Analysis for National Greenhouse Gas Inventories

    Energy Technology Data Exchange (ETDEWEB)

    Gillenwater, M. [Environmental Resources Trust (United States)], E-mail: mgillenwater@ert.net; Sussman, F.; Cohen, J. [ICF International (United States)

    2007-09-15


  1. Parameter and input data uncertainty estimation for the assessment of water resources in two sub-basins of the Limpopo River Basin

    Directory of Open Access Journals (Sweden)

    N. Oosthuizen

    2018-05-01

Full Text Available The demand for water resources is rapidly growing, placing more strain on access to water and its management. In order to appropriately manage water resources, there is a need to accurately quantify available water resources. Unfortunately, the data required for such assessment are frequently far from sufficient in terms of availability and quality, especially in southern Africa. In this study, the uncertainty related to the estimation of water resources of two sub-basins of the Limpopo River Basin – the Mogalakwena in South Africa and the Shashe shared between Botswana and Zimbabwe – is assessed. Input data (and model parameters) are significant sources of uncertainty that should be quantified. In southern Africa water use data are among the most unreliable sources of model input data because available databases generally consist of only licensed information and actual use is generally unknown. The study assesses how these uncertainties impact the estimation of surface water resources of the sub-basins. Data on farm reservoirs and irrigated areas from various sources were collected and used to run the model. Many farm dams and large irrigation areas are located in the upper parts of the Mogalakwena sub-basin. Results indicate that water use uncertainty is small. Nevertheless, the medium to low flows are clearly impacted. The simulated mean monthly flows at the outlet of the Mogalakwena sub-basin were between 22.62 and 24.68 Mm3 per month when incorporating only the uncertainty related to the main physical runoff-generating parameters. The range of total predictive uncertainty of the model increased to between 22.15 and 24.99 Mm3 when water use data, such as small farm dams, large reservoirs and irrigation, were included. For the Shashe sub-basin, incorporating only uncertainty related to the main runoff parameters resulted in mean monthly flows between 11.66 and 14.54 Mm3. The range of predictive uncertainty changed to between 11.66 and 17

  2. Critical loads - assessment of uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Barkman, A.

    1998-10-01

The effects of data uncertainty in applications of the critical loads concept were investigated on different spatial resolutions in Sweden and northern Czech Republic. Critical loads of acidity (CL) were calculated for Sweden using the biogeochemical model PROFILE. Three methods with different structural complexity were used to estimate the adverse effects of SO{sub 2} concentrations in northern Czech Republic. Data uncertainties in the calculated critical loads/levels and exceedances (EX) were assessed using Monte Carlo simulations. Uncertainties within cumulative distribution functions (CDF) were aggregated by accounting for the overlap between site specific confidence intervals. Aggregation of data uncertainties within CDFs resulted in lower CL and higher EX best estimates in comparison with percentiles represented by individual sites. Data uncertainties were consequently found to advocate larger deposition reductions to achieve non-exceedance based on low critical loads estimates on 150 x 150 km resolution. Input data were found to impair the level of differentiation between geographical units at all investigated resolutions. Aggregation of data uncertainty within CDFs involved more constrained confidence intervals for a given percentile. Differentiation as well as identification of grid cells on 150 x 150 km resolution subjected to EX was generally improved. Calculation of the probability of EX was shown to preserve the possibility to differentiate between geographical units. Re-aggregation of the 95%-ile EX on 50 x 50 km resolution generally increased the confidence interval for each percentile. Significant relationships were found between forest decline and the three methods addressing risks induced by SO{sub 2} concentrations. Modifying SO{sub 2} concentrations by accounting for the length of the vegetation period was found to constitute the most useful trade-off between structural complexity, data availability and effects of data uncertainty. Data

  3. Improved best estimate plus uncertainty methodology, including advanced validation concepts, to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, C.; Williams, B.; Hemez, F.; Atamturktur, S.H.; McClure, P.

    2011-01-01

    Research highlights: → The best estimate plus uncertainty methodology (BEPU) is one option in the licensing of nuclear reactors. → The challenges for extending the BEPU method for fuel qualification for an advanced reactor fuel are primarily driven by schedule, the need for data, and the sufficiency of the data. → In this paper we develop an extended BEPU methodology that can potentially be used to address these new challenges in the design and licensing of advanced nuclear reactors. → The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. → The methodology includes a formalism to quantify an adequate level of validation (predictive maturity) with respect to existing data, so that required new testing can be minimized, saving cost by demonstrating that further testing will not enhance the quality of the predictive tools. - Abstract: Many evolving nuclear energy technologies use advanced predictive multiscale, multiphysics modeling and simulation (M and S) capabilities to reduce the cost and schedule of design and licensing. Historically, the role of experiments has been as a primary tool for the design and understanding of nuclear system behavior, while M and S played the subordinate role of supporting experiments. In the new era of multiscale, multiphysics computational-based technology development, this role has been reversed. The experiments will still be needed, but they will be performed at different scales to calibrate and validate the models leading to predictive simulations for design and licensing. Minimizing the required number of validation experiments produces cost and time savings. The use of multiscale, multiphysics models introduces challenges in validating these predictive tools - traditional methodologies will have to be modified to address these challenges. This paper gives the basic aspects of a methodology that can potentially be used to address these new challenges in

  4. Estimating the uncertainty of damage costs of pollution: A simple transparent method and typical results

    International Nuclear Information System (INIS)

    Spadaro, Joseph V.; Rabl, Ari

    2008-01-01

Whereas the uncertainty of environmental impacts and damage costs is usually estimated by means of a Monte Carlo calculation, this paper shows that most (and in many cases all) of the uncertainty calculation involves products and/or sums of products and can be accomplished with an analytic solution which is simple and transparent. We present our own assessment of the component uncertainties and calculate the total uncertainty for the impacts and damage costs of the classical air pollutants; results for a Monte Carlo calculation for the dispersion part are also shown. The distribution of the damage costs is approximately lognormal and can be characterized in terms of geometric mean μg and geometric standard deviation σg, implying that the confidence interval is multiplicative. We find that for the classical air pollutants σg is approximately 3 and the 68% confidence interval is [μg/σg, μg·σg]. Because the lognormal distribution is highly skewed for large σg, the median is significantly smaller than the mean. We also consider the case where several lognormally distributed damage costs are added, for example to obtain the total damage cost due to all the air pollutants emitted by a power plant, and we find that the relative error of the sum can be significantly smaller than the relative errors of the summands. Even though the distribution for such sums is not exactly lognormal, we present a simple lognormal approximation that is quite adequate for most applications
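The analytic shortcut rests on the fact that a product of independent lognormal factors is again lognormal, with the squared logs of the geometric standard deviations adding. A sketch with illustrative σg values (not the paper's component assessments):

```python
import math

# Geometric standard deviations of the (assumed independent, lognormal)
# factors in a damage-cost product, e.g. emission, dispersion,
# dose-response, monetary valuation. Values are illustrative only.
sigma_g_factors = [1.5, 2.0, 1.8, 1.3]

# For a product of lognormals, the variances of the logs add:
ln_var = sum(math.log(s) ** 2 for s in sigma_g_factors)
sigma_g_total = math.exp(math.sqrt(ln_var))

mu_g = 1.0  # geometric mean, normalized; the CI is multiplicative
ci_68 = (mu_g / sigma_g_total, mu_g * sigma_g_total)

# For a lognormal, the median equals the geometric mean, while the mean
# exceeds it by a factor exp(ln(sigma_g)^2 / 2), hence mean > median:
mean_over_median = math.exp(math.log(sigma_g_total) ** 2 / 2)

print(f"sigma_g = {sigma_g_total:.2f}, 68% CI = "
      f"[{ci_68[0]:.2f}, {ci_68[1]:.2f}], mean/median = {mean_over_median:.2f}")
```

With these illustrative factors the total σg comes out near 3, in line with the paper's headline figure for the classical air pollutants.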

5. Nuclear material inventory estimation in solvent extraction contactors II

    International Nuclear Information System (INIS)

    Beyerlein, A.

    1987-11-01

The effectiveness of near-real-time nuclear materials accounting in reprocessing facilities can be limited by inventory variations in the separations contactors. Investigations are described in three areas: (i) improvements in the model that the authors have described previously for steady state inventory estimation in mixer-settler contactors, (ii) extension of the model from steady state inventory estimation to transient inventory estimation for non-steady state conditions, and (iii) the development of a computer model CUSEP (Clemson University Solvent Extraction Program) for simulating the concentration profiles and nuclear material inventories in pulsed column contactors. Improvements in the steady state model that are described in this report are the simplification of the methods for evaluating model parameters and the development of methods for reducing the equation which estimates the total inventory of the set of contactors directly. The pulsed column computer model CUSEP was developed. Concentration profiles and inventories calculated from CUSEP are compared with measured data from pilot scale contactors containing uranium. Excellent agreement between measured and simulated data for both the concentration profiles and inventories is obtained, demonstrating that the program correctly predicts the concentration dispersion caused by pulsing and the dispersed phase holdup within the contactor. Further research is planned to investigate (i) correction of the MUF (Material Unaccounted For) and CUMUF (Cumulative Material Unaccounted For) tests for mixer-settler contactor inventory using the simplified model developed in this work, (ii) development of a simple inventory estimation model for pulsed column contactors similar to that developed for mixer-settler contactors, using CUSEP to provide the necessary database, and (iii) sources of bias appearing in the MUF and CUMUF tests using computer simulation techniques.
Refs

  6. Estimation of forest aboveground biomass and uncertainties by integration of field measurements, airborne LiDAR, and SAR and optical satellite data in Mexico.

    Science.gov (United States)

    Urbazaev, Mikhail; Thiel, Christian; Cremer, Felix; Dubayah, Ralph; Migliavacca, Mirco; Reichstein, Markus; Schmullius, Christiane

    2018-02-21

    Information on the spatial distribution of aboveground biomass (AGB) over large areas is needed for understanding and managing processes involved in the carbon cycle and supporting international policies for climate change mitigation and adaptation. Furthermore, these products provide important baseline data for the development of sustainable management strategies for local stakeholders. The use of remote sensing data can provide spatially explicit information on AGB from local to global scales. In this study, we mapped national Mexican forest AGB using satellite remote sensing data and a machine learning approach. We modelled AGB using two scenarios: (1) an extensive national forest inventory (NFI), and (2) airborne Light Detection and Ranging (LiDAR) as reference data. Finally, we propagated uncertainties from field measurements to LiDAR-derived AGB and to the national wall-to-wall forest AGB map. The estimated AGB maps (NFI- and LiDAR-calibrated) showed similar goodness-of-fit statistics (R², Root Mean Square Error (RMSE)) at three different scales compared to the independent validation data set. We observed different spatial patterns of AGB in tropical dense forests, where no or only a limited number of NFI data were available, with higher AGB values in the LiDAR-calibrated map. We estimated much higher uncertainties in the AGB maps based on the two-stage up-scaling method (i.e., from field measurements to LiDAR and from LiDAR-based estimates to satellite imagery) compared to the traditional field-to-satellite up-scaling. By removing LiDAR-based AGB pixels with high uncertainties, it was possible to estimate national forest AGB with uncertainties similar to those obtained by calibrating with NFI data only. Since LiDAR data can be acquired much faster and over much larger areas than field inventory data, LiDAR is attractive for repetitive large-scale AGB mapping. This study showed that two-stage up-scaling methods for AGB estimation over large areas need to be analyzed and validated.
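The two-stage up-scaling described above propagates errors in two steps; a toy Monte Carlo sketch (all numbers invented, not from the study) shows how independent stage errors combine in quadrature:

```python
import random

random.seed(0)

def propagate_two_stage(truth, sd_stage1, sd_stage2, n=50_000):
    """Monte Carlo sketch of two-stage up-scaling: field -> LiDAR AGB,
    then LiDAR AGB -> satellite map. Independent zero-mean errors at each
    stage combine so the overall spread tends to sqrt(sd1**2 + sd2**2)."""
    samples = []
    for _ in range(n):
        lidar_agb = truth + random.gauss(0.0, sd_stage1)    # stage 1 error
        map_agb = lidar_agb + random.gauss(0.0, sd_stage2)  # stage 2 error
        samples.append(map_agb)
    m = sum(samples) / n
    sd = (sum((s - m) ** 2 for s in samples) / (n - 1)) ** 0.5
    return m, sd

mean_agb, sd_agb = propagate_two_stage(truth=120.0, sd_stage1=15.0,
                                       sd_stage2=20.0)  # t/ha, illustrative
print(round(sd_agb, 1))  # close to (15**2 + 20**2) ** 0.5 = 25.0
```

This is why the two-stage maps carry larger uncertainty than a direct field-to-satellite calibration: the LiDAR stage contributes its own error term to the total.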

  7. Ruminations On NDA Measurement Uncertainty Compared TO DA Uncertainty

    International Nuclear Information System (INIS)

    Salaymeh, S.; Ashley, W.; Jeffcoat, R.

    2010-01-01

    It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and of how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations, because data interpretation often involves complex algorithms and logic combined in a highly intertwined way, and the methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach, consistent with modern practice and embracing the continuous-improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives, and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice, and furthermore motivate developers to revisit the treatment of measurement uncertainty.
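An uncertainty budget of the kind the proposed procedure targets conventionally combines independent standard-uncertainty components in quadrature, GUM-style; a minimal sketch with invented components:

```python
import math

# Illustrative NDA uncertainty budget: independent standard-uncertainty
# components (relative, in percent) combine in quadrature; the expanded
# uncertainty applies a coverage factor k (k = 2 for ~95% coverage under
# a normality assumption).
budget = {
    "counting statistics":   0.8,
    "calibration":           1.2,
    "matrix correction":     1.5,
    "geometry/positioning":  0.5,
}

u_combined = math.sqrt(sum(u ** 2 for u in budget.values()))
U_expanded = 2.0 * u_combined   # k = 2

print(round(u_combined, 2))  # 2.14
print(round(U_expanded, 2))  # 4.28
```

If components are correlated, the quadrature sum no longer holds and covariance terms must be added, which is one reason NDA budgets with intertwined algorithms are hard to construct.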

  8. RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY

    Energy Technology Data Exchange (ETDEWEB)

    Salaymeh, S.; Ashley, W.; Jeffcoat, R.

    2010-06-17

    It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and of how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations, because data interpretation often involves complex algorithms and logic combined in a highly intertwined way, and the methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach, consistent with modern practice and embracing the continuous-improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives, and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice, and furthermore motivate developers to revisit the treatment of measurement uncertainty.

  9. Uncertainty enabled Sensor Observation Services

    Science.gov (United States)

    Cornford, Dan; Williams, Matthew; Bastin, Lucy

    2010-05-01

    Almost all observations of reality are contaminated with errors, which introduce uncertainties into the actual observation result. Such uncertainty is often held to be a data quality issue, and quantification of this uncertainty is essential for the principled exploitation of the observations. Many existing systems treat data quality in a relatively ad hoc manner; however, if the observation uncertainty is a reliable estimate of the error on the observation with respect to reality, then knowledge of this uncertainty enables optimal exploitation of the observations in further processes or decision making. We would argue that the most natural formalism for expressing uncertainty is Bayesian probability theory. In this work we show how the Open Geospatial Consortium Sensor Observation Service can be implemented to enable the support of explicit uncertainty about observations. We show how the UncertML candidate standard is used to provide a rich and flexible representation of uncertainty in this context. We illustrate this on a data set of user-contributed weather data, where the INTAMAP interpolation Web Processing Service is used to help estimate the uncertainty on the observations of unknown quality, using observations with known uncertainty properties. We then go on to discuss the implications of uncertainty for a range of existing Open Geospatial Consortium standards, including SWE Common and Observations and Measurements. We discuss the difficult decisions in the design of the UncertML schema and its relation to and usage within existing standards, and show various options. We conclude with some indications of the likely future directions for UncertML in the context of Open Geospatial Consortium services.

  10. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  11. THE LIFETIME AND POWERS OF FR IIs IN GALAXY CLUSTERS

    International Nuclear Information System (INIS)

    Antognini, Joe; Bird, Jonathan; Martini, Paul

    2012-01-01

    We have identified and studied a sample of 151 FR IIs found in brightest cluster galaxies (BCGs) in the MaxBCG cluster catalog with data from FIRST and NVSS. We have compared the radio luminosities and projected lengths of these FR IIs to the projected length distribution of a range of mock catalogs generated by an FR II model and estimate the FR II lifetime to be 1.9 × 10⁸ yr. The uncertainty in the lifetime calculation is a factor of two, primarily due to uncertainties in the intracluster medium (ICM) density and the FR II axial ratio. We furthermore measure the jet power distribution of FR IIs in BCGs and find that it is well described by a log-normal distribution with a median power of 1.1 × 10³⁷ W and a coefficient of variation of 2.2. These jet powers are nearly linearly related to the observed luminosities, and this relation is steeper than many other estimates, although it is dependent on the jet model. We investigate correlations between FR II and cluster properties and find that galaxy luminosity is correlated with jet power. This implies that jet power is also correlated with black hole mass, as the stellar luminosity of a BCG should be a good proxy for its spheroid mass and therefore the black hole mass. Jet power, however, is not correlated with cluster richness, nor is FR II lifetime strongly correlated with any cluster properties. We calculate the enthalpy of the lobes to examine the impact of the FR IIs on the ICM and find that heating due to adiabatic expansion is too small to offset radiative cooling by a factor of at least six. In contrast, the jet power is approximately an order of magnitude larger than required to counteract cooling. We conclude that if feedback from FR IIs offsets cooling of the ICM, then heating must be primarily due to another mechanism associated with FR II expansion.
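The quoted log-normal jet power distribution (median 1.1 × 10³⁷ W, coefficient of variation 2.2) determines both log-normal parameters; a short derivation sketch:

```python
import math

# For a log-normal distribution, median = exp(mu) and
# CV**2 = exp(sigma**2) - 1, so the two quoted numbers fix (mu, sigma).
median_W = 1.1e37
cv = 2.2

sigma = math.sqrt(math.log(1.0 + cv ** 2))
mu = math.log(median_W)
mean_W = math.exp(mu + sigma ** 2 / 2.0)  # analytic mean of the log-normal

print(round(sigma, 2))    # 1.33
print(f"{mean_W:.2e}")    # 2.66e+37
```

The mean being more than double the median is the usual log-normal right skew: a small fraction of very powerful jets dominates the average power.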

  12. THE LIFETIME AND POWERS OF FR IIs IN GALAXY CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Antognini, Joe; Bird, Jonathan; Martini, Paul, E-mail: antognini@astronomy.ohio-state.edu, E-mail: bird@astronomy.ohio-state.edu, E-mail: martini@astronomy.ohio-state.edu [Department of Astronomy, Ohio State University, 140 W 18th Avenue, Columbus, OH 43210 (United States)

    2012-09-10

    We have identified and studied a sample of 151 FR IIs found in brightest cluster galaxies (BCGs) in the MaxBCG cluster catalog with data from FIRST and NVSS. We have compared the radio luminosities and projected lengths of these FR IIs to the projected length distribution of a range of mock catalogs generated by an FR II model and estimate the FR II lifetime to be 1.9 × 10⁸ yr. The uncertainty in the lifetime calculation is a factor of two, primarily due to uncertainties in the intracluster medium (ICM) density and the FR II axial ratio. We furthermore measure the jet power distribution of FR IIs in BCGs and find that it is well described by a log-normal distribution with a median power of 1.1 × 10³⁷ W and a coefficient of variation of 2.2. These jet powers are nearly linearly related to the observed luminosities, and this relation is steeper than many other estimates, although it is dependent on the jet model. We investigate correlations between FR II and cluster properties and find that galaxy luminosity is correlated with jet power. This implies that jet power is also correlated with black hole mass, as the stellar luminosity of a BCG should be a good proxy for its spheroid mass and therefore the black hole mass. Jet power, however, is not correlated with cluster richness, nor is FR II lifetime strongly correlated with any cluster properties. We calculate the enthalpy of the lobes to examine the impact of the FR IIs on the ICM and find that heating due to adiabatic expansion is too small to offset radiative cooling by a factor of at least six. In contrast, the jet power is approximately an order of magnitude larger than required to counteract cooling. We conclude that if feedback from FR IIs offsets cooling of the ICM, then heating must be primarily due to another mechanism associated with FR II expansion.

  13. Uncertainty analysis for results of thermal hydraulic codes of best-estimate-type; Analisis de incertidumbre para resultados de codigos termohidraulicos de mejor estimacion

    Energy Technology Data Exchange (ETDEWEB)

    Alva N, J.

    2010-07-01

    In this thesis, fundamental concepts are presented concerning uncertainty analysis and the diverse methodologies applied in the study of nuclear power plant transient events, particularly those related to thermal hydraulic phenomena. These concepts and methodologies come from a wide bibliographical survey of the nuclear power field. Methodologies for uncertainty analysis have been developed by quite diverse institutions and have been widely used worldwide for application to results from best-estimate computer codes in nuclear reactor thermal hydraulics and safety analysis. The main uncertainty sources, types of uncertainties, and aspects related to best-estimate modeling and methods are also introduced. Once the main bases of uncertainty analysis have been set and some of the known methodologies introduced, the CSAU methodology, which is applied in the analyses, is presented in detail. The main objective of this thesis is to compare the results of an uncertainty and sensitivity analysis using the Response Surface Technique with those obtained by applying Wilks' formula, both applied to a loss-of-coolant experiment and a power-rise event in a BWR. Both techniques are options for the uncertainty and sensitivity analysis step of the CSAU methodology, which was developed for the analysis of transients and accidents at nuclear power plants and is the basis of most of the methodologies used in licensing of nuclear power plants practically everywhere. Finally, the results of applying both techniques are compared and discussed. (Author)
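Wilks' formula, one of the two techniques compared in the thesis, gives the minimum number of best-estimate code runs needed to claim a coverage/confidence tolerance limit; a sketch of the standard first-order sample sizes:

```python
# First-order Wilks sample sizes: the smallest N such that the extreme
# order statistics of N random code runs bound the chosen population
# fraction (coverage) with the chosen confidence.

def wilks_one_sided(coverage, confidence):
    """Smallest N with 1 - coverage**N >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

def wilks_two_sided(coverage, confidence):
    """Smallest N with 1 - g**N - N*(1-g)*g**(N-1) >= confidence."""
    n, g = 2, coverage
    while 1.0 - g ** n - n * (1.0 - g) * g ** (n - 1) < confidence:
        n += 1
    return n

print(wilks_one_sided(0.95, 0.95))  # 59
print(wilks_two_sided(0.95, 0.95))  # 93
```

The appeal of the formula in licensing analyses is that these counts are independent of the number of uncertain input parameters, unlike response-surface designs.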

  14. Classification in hyperspectral images by independent component analysis, segmented cross-validation and uncertainty estimates

    Directory of Open Access Journals (Sweden)

    Beatriz Galindo-Prieto

    2018-02-01

    Independent component analysis, combined with various strategies for cross-validation, uncertainty estimates by jack-knifing and estimation of critical Hotelling's T² limits, as proposed in this paper, is used for classification purposes in hyperspectral images. To the best of our knowledge, the combination of methods used in this paper has not previously been applied to hyperspectral imaging analysis for interpretation and classification in the literature. The data analysis performed here aims to distinguish between four different types of plastics, some of them containing brominated flame retardants, from their near infrared hyperspectral images. The results showed that the approach used here can be successfully applied for unsupervised classification. Validation approaches, especially leave-one-out cross-validation and a regions-of-interest validation scheme, are also compared.
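Jack-knifing, the uncertainty-estimation device named above, is a leave-one-out resampling scheme; a minimal generic sketch (data values invented):

```python
import math

def jackknife_se(data, estimator):
    """Leave-one-out jack-knife standard error of an arbitrary estimator."""
    n = len(data)
    loo = [estimator(data[:i] + data[i + 1:]) for i in range(n)]  # n refits
    mean_loo = sum(loo) / n
    return math.sqrt((n - 1) / n * sum((t - mean_loo) ** 2 for t in loo))

def mean(xs):
    return sum(xs) / len(xs)

scores = [4.1, 3.9, 4.3, 4.0, 4.2]  # e.g. component loadings, illustrative
se = jackknife_se(scores, mean)
print(round(se, 3))  # 0.071, identical to s/sqrt(n) for the mean
```

For the sample mean the jack-knife reproduces the classical standard error exactly; its value is that the same recipe works for estimators (such as ICA loadings) with no closed-form error formula.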

  15. Interlaboratory analytical performance studies; a way to estimate measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Elżbieta Łysiak-Pastuszak

    2004-09-01

    Comparability of data collected within collaborative programmes became the key challenge of analytical chemistry in the 1990s, including monitoring of the marine environment. To obtain relevant and reliable data, the analytical process has to proceed under a well-established Quality Assurance (QA) system with external analytical proficiency tests as an inherent component. A programme called Quality Assurance in Marine Monitoring in Europe (QUASIMEME) was established in 1993 and evolved over the years into the major provider of QA proficiency tests for nutrients, trace metals and chlorinated organic compounds in marine environment studies. The article presents an evaluation of results obtained in QUASIMEME Laboratory Performance Studies by the monitoring laboratory of the Institute of Meteorology and Water Management (Gdynia, Poland) in exercises on nutrient determination in seawater. The measurement uncertainty estimated from routine internal quality control measurements and from results of analytical performance exercises is also presented in the paper.
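Proficiency-test results of the kind QUASIMEME reports are conventionally scored with z-scores; a minimal sketch (numbers illustrative, not from the exercises):

```python
def z_score(lab_result, assigned_value, sigma_p):
    """Proficiency-test z-score; conventionally |z| <= 2 is satisfactory,
    2 < |z| < 3 questionable, |z| >= 3 unsatisfactory."""
    return (lab_result - assigned_value) / sigma_p

# nitrate in seawater, illustrative numbers (umol/l)
z = z_score(lab_result=5.6, assigned_value=5.0, sigma_p=0.25)
print(round(z, 1))  # 2.4 -> questionable
```

Repeated exercise scores like this, pooled over rounds, are one of the two routes to a laboratory's measurement uncertainty that the abstract mentions, the other being internal quality control data.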

  16. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    Science.gov (United States)

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, under the initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard that assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution for understanding the propagation of uncertainty in complex DOE models, so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
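The kind of propagation the study performs can be illustrated with a toy model: perturb both the factor settings and the responses, refit each replicate, and inspect the coefficient spread (model form and noise levels invented):

```python
import random

random.seed(42)

# Toy DOE-style model y = b0 + b1*x, refit under Monte Carlo perturbation
# of inputs and responses; the spread of fitted b1 is its uncertainty.
true_b0, true_b1 = 2.0, 0.5
design = [1.0, 2.0, 3.0, 4.0, 5.0]

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    return my - b1 * mx, b1

slopes = []
for _ in range(2000):
    xs = [x + random.gauss(0.0, 0.05) for x in design]                 # input variation
    ys = [true_b0 + true_b1 * x + random.gauss(0.0, 0.1) for x in xs]  # response noise
    slopes.append(fit_line(xs, ys)[1])

mean_b1 = sum(slopes) / len(slopes)
sd_b1 = (sum((b - mean_b1) ** 2 for b in slopes) / (len(slopes) - 1)) ** 0.5
print(round(mean_b1, 2), round(sd_b1, 3))
```

Comparing `sd_b1` against the standard error reported by an ordinary regression fit is the kind of check the article uses to argue that regression-based standard deviations can overestimate coefficient uncertainty.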

  17. Improving the effectiveness of real-time flood forecasting through Predictive Uncertainty estimation: the multi-temporal approach

    Science.gov (United States)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Todini, Ezio

    2015-04-01

    The negative effects of severe flood events are usually countered through structural measures that, however, do not fully eliminate flood risk. Non-structural measures, such as real-time flood forecasting and warning, are also required. Accurate stage/discharge predictions with an appropriate forecast lead time are sought by decision-makers for implementing strategies to mitigate the adverse effects of floods. Traditionally, flood forecasting has been approached by using rainfall-runoff and/or flood routing modelling. Neither type of forecast, however, can be considered a perfect representation of future outcomes, because complete knowledge of the processes involved is lacking (Todini, 2004). Nonetheless, although aware that model forecasts do not perfectly represent future outcomes, decision makers de facto implicitly treat the forecast of water level/discharge/volume, etc. as "deterministic" and coinciding with what is going to occur. Recently the concept of Predictive Uncertainty (PU) was introduced in hydrology (Krzysztofowicz, 1999), and several uncertainty processors were developed (Todini, 2008). PU is defined as the probability of occurrence of the future realization of a predictand (water level/discharge/volume) conditional on: i) prior observations and knowledge, and ii) the available information obtained on the future value, typically provided by one or more forecast models. Unfortunately, PU has frequently been interpreted as a measure of lack of accuracy rather than as the appropriate tool for taking the most appropriate decisions, given a model's or several models' forecasts. With the aim of shedding light on the benefits of appropriately using PU, a multi-temporal approach based on the MCP approach (Todini, 2008; Coccia and Todini, 2011) is here applied to stage forecasts at sites along the Upper Tiber River. Specifically, the STAge Forecasting-Rating Curve Model Muskingum-based (STAFOM-RCM) (Barbetta et al., 2014) along with the Rating

  18. Transfer of Nuclear Data Uncertainties to the Uncertainties of Fuel Characteristic by Interval Calculation

    International Nuclear Information System (INIS)

    Ukraintsev, V.F.; Kolesov, V.V.

    2006-01-01

    Usually, perturbation theory and sensitivity analysis techniques are used for the evaluation of reactor functional uncertainties. Of course, the linearization approach of perturbation theory is used. This approach has several disadvantages, and that is why a new method, based on the application of a special interval calculation technique, has been created. Basically, the problem of the dependency of fuel cycle characteristic uncertainties on the uncertainties of source-group neutron cross-sections and decay parameters can be solved (to some extent) by sensitivity analysis as well. However, such a procedure is rather labor-consuming and does not give guaranteed estimations for the received parameters, since, strictly speaking, it works only for small deviations, being initially based on linearization of the mathematical problems. The technique of fuel cycle characteristic uncertainty estimation is based on so-called interval analysis (or interval calculations). The basic advantage of this technique is the possibility of deriving correct estimations. The technique consists of introducing a new special data type, interval data, into the codes, and defining all arithmetic operations for it. A solution technique for the system of linear equations of isotope kinetics, using interval arithmetic, has been implemented for the fuel burn-up problem. Thus there is an opportunity to compute the impact of neutron flux, fission and capture cross-section uncertainties on nuclide concentration uncertainties and on fuel cycle characteristics (such as K_eff, breeding ratio, decay heat power, etc.). By this time the code for interval calculation of burn-up has been developed and verified.
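The "interval data type with all arithmetic operations defined" idea can be sketched with a minimal Python interval class (operations and example values are illustrative, not the authors' code):

```python
class Interval:
    """Minimal interval-arithmetic value: each operation returns bounds
    guaranteed to contain the exact result for any inputs in the operands."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# a normalized cross-section known to +/-5% times a flux known to +/-2%:
sigma = Interval(0.95, 1.05)
flux = Interval(0.98, 1.02)
rate = sigma * flux
print(f"[{rate.lo:.3f}, {rate.hi:.3f}]")  # [0.931, 1.071]
```

This is what "guaranteed estimations" means in the abstract: unlike a linearized sensitivity estimate, the output interval provably encloses the true result, at the price of possible over-width when the same uncertain quantity appears more than once.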

  19. Measurement uncertainties for vacuum standards at Korea Research Institute of Standards and Science

    International Nuclear Information System (INIS)

    Hong, S. S.; Shin, Y. H.; Chung, K. H.

    2006-01-01

    The Korea Research Institute of Standards and Science has three major vacuum systems: an ultrasonic interferometer manometer (UIM) (Sec. II, Figs. 1 and 2) for low vacuum, a static expansion system (SES) (Sec. III, Figs. 3 and 4) for medium vacuum, and an orifice-type dynamic expansion system (DES) (Sec. IV, Figs. 5 and 6) for high and ultrahigh vacuum. For each system, explicit measurement model equations with multiple variables are given. According to ISO standards, all these system variable errors were used to calculate the expanded uncertainty (U). For each system the expanded uncertainties (k=1, confidence level=95%) and the relative expanded uncertainty (expanded uncertainty/generated pressure) are summarized in Table IV and are estimated to be as follows. For the UIM, at 2.5-300 Pa generated pressure, the expanded uncertainty is 10⁻² Pa and the relative expanded uncertainty is 10⁻²; at 1-100 kPa generated pressure, the relative expanded uncertainty is 10⁻⁵. For the SES, at 3-100 Pa generated pressure, the expanded uncertainty is 10⁻¹ Pa and the relative expanded uncertainty is 10⁻³. For the DES, at 4.6×10⁻³-1.3×10⁻² Pa generated pressure, the expanded uncertainty is 10⁻⁴ Pa and the relative expanded uncertainty is 10⁻³; at 3.0×10⁻⁶-9.0×10⁻⁴ Pa generated pressure, the expanded uncertainty is 10⁻⁶ Pa and the relative expanded uncertainty is 10⁻². Within uncertainty limits, our bilateral and key comparisons [CCM.P-K4 (10 Pa-1 kPa)] are extensive and in good agreement with those of other nations (Fig. 8 and Table V)

  20. Uncertainties in early-stage capital cost estimation of process design – a case study on biorefinery design

    DEFF Research Database (Denmark)

    Cheali, Peam; Gernaey, Krist; Sin, Gürkan

    2015-01-01

    Capital investment, next to the product demand, sales, and production costs, is one of the key metrics commonly used for project evaluation and feasibility assessment. Estimating the investment costs of a new product/process alternative during early-stage design is a challenging task, which......) the Monte Carlo technique as an error propagation method based on expert input when cost data are not available. Four well-known models for early-stage cost estimation are reviewed and analyzed using the methodology. The significance of uncertainties of cost data for early-stage process design...

  1. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    Science.gov (United States)

    Troldborg, M.; Nowak, W.; Binning, P. J.; Bjerg, P. L.

    2012-12-01

    Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow velocities. For risk assessments or any other decisions that are being based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across a multilevel control plane. The method decouples the flow and transport simulation and has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is based on conditional geostatistical simulation and accounts for i) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics (including the uncertainty in covariance functions), ii) measurement uncertainty, and iii) uncertain source zone geometry and transport parameters. The method generates multiple equally likely realizations of the spatial flow and concentration distribution, which all honour the measured data at the control plane. The flow realizations are generated by analytical co-simulation of the hydraulic conductivity and the hydraulic gradient across the control plane. These realizations are made consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed
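The mass discharge across a multilevel control plane, and the way an ensemble of equally likely realizations turns it into an uncertainty estimate, can be sketched as follows (grid values and noise levels are invented, not from the study):

```python
import random

random.seed(3)

# Mass discharge J = sum over control-plane cells of
# (Darcy flux x concentration x cell area); an ensemble of realizations
# yields a distribution of J rather than a single number.
def mass_discharge(fluxes, concs, cell_area):
    return sum(q * c * cell_area for q, c in zip(fluxes, concs))

base_q = [0.05, 0.06, 0.04, 0.05]     # Darcy flux, m/day
base_c = [20.0, 120.0, 250.0, 40.0]   # concentration, g/m3
area = 4.0                            # m2 per cell

discharges = []
for _ in range(5000):
    q = [max(0.0, random.gauss(v, 0.2 * v)) for v in base_q]  # flow uncertainty
    c = [max(0.0, random.gauss(v, 0.3 * v)) for v in base_c]  # conc. uncertainty
    discharges.append(mass_discharge(q, c, area))

discharges.sort()
median = discharges[len(discharges) // 2]
p5, p95 = discharges[250], discharges[4750]
print(round(median, 1))  # near the noise-free value of 80.8 g/day
```

The Bayesian geostatistical method in the abstract does the same summation, but draws the flux and concentration fields from conditional simulations that honour the measured data and their spatial correlation, rather than from independent cell-wise noise as in this toy.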

  2. Interactions between perceived uncertainty types in service dyads

    DEFF Research Database (Denmark)

    Kreye, Melanie

    2018-01-01

    to avoid business failure. A conceptual framework of four uncertainty types is investigated: environmental, technological, organisational, and relational uncertainty. We present insights from four empirical cases of service dyads collected via multiple sources of evidence including 54 semi-structured...... interviews, observations, and secondary data. The cases show seven interaction paths with direct knock-on effects between two uncertainty types and indirect knock-on effects between three or four uncertainty types. The findings suggest a causal chain from environmental, technological, organisational......, to relational uncertainty. This research contributes to the servitization literature by (i) confirming the existence of uncertainty types, (ii) providing an in-depth characterisation of technological uncertainty, and (iii) showing the interaction paths between four uncertainty types in the form of a causal

  3. Incorporating parametric uncertainty into population viability analysis models

    Science.gov (United States)

    McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.

    2011-01-01

    Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
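The two-loop structure described above (parametric draw in the outer replication loop, temporal draw in the inner time-step loop) can be sketched as follows; all parameter values are invented, not the piping plover estimates:

```python
import random

random.seed(7)

def pva(param_sd, n_reps=1000, n_years=30, n0=30.0,
        mean_growth=0.98, temporal_sd=0.15):
    """Extinction probability with parametric uncertainty drawn once per
    replicate (outer loop) and temporal variance drawn each year (inner)."""
    extinct = 0
    for _ in range(n_reps):
        growth = random.gauss(mean_growth, param_sd)  # parametric draw
        n = n0
        for _ in range(n_years):
            n *= max(0.0, random.gauss(growth, temporal_sd))  # temporal draw
            if n < 1.0:                                       # quasi-extinction
                extinct += 1
                break
    return extinct / n_reps

p_with = pva(param_sd=0.15)     # parametric uncertainty included
p_without = pva(param_sd=0.0)   # parametric uncertainty ignored
print(f"with: {p_with:.3f}  without: {p_without:.3f}")
```

Because some outer-loop draws land on persistently low growth rates, including parametric uncertainty typically inflates the estimated extinction risk, which is the paper's central point.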

  4. Estimate of the uncertainties in the relative risk of secondary malignant neoplasms following proton therapy and intensity-modulated photon therapy

    International Nuclear Information System (INIS)

    Fontenot, Jonas D; Bloch, Charles; Followill, David; Titt, Uwe; Newhauser, Wayne D

    2010-01-01

    Theoretical calculations have shown that proton therapy can reduce the incidence of radiation-induced secondary malignant neoplasms (SMN) compared with photon therapy for patients with prostate cancer. However, the uncertainties associated with calculations of SMN risk had not been assessed. The objective of this study was to quantify the uncertainties in projected risks of secondary cancer following contemporary proton and photon radiotherapies for prostate cancer. We performed a rigorous propagation of errors and several sensitivity tests to estimate the uncertainty in the ratio of relative risk (RRR) due to the largest contributors to the uncertainty: the radiation weighting factor for neutrons, the dose-response model for radiation carcinogenesis and interpatient variations in absorbed dose. The interval of values for the radiation weighting factor for neutrons and the dose-response model were derived from the literature, while interpatient variations in absorbed dose were taken from actual patient data. The influence of each parameter on a baseline RRR value was quantified. Our analysis revealed that the calculated RRR was insensitive to the largest contributors to the uncertainty. Uncertainties in the radiation weighting factor for neutrons, the shape of the dose-risk model and interpatient variations in therapeutic and stray doses introduced a total uncertainty of 33% to the baseline RRR calculation.

  5. Uncertainty Assessment of Space-Borne Passive Soil Moisture Retrievals

    Science.gov (United States)

    Quets, Jan; De Lannoy, Gabrielle; Reichle, Rolf; Cosh, Michael; van der Schalie, Robin; Wigneron, Jean-Pierre

    2017-01-01

The uncertainty associated with passive soil moisture retrieval is hard to quantify and is known to be underlain by various, diverse, and complex causes. Factors affecting space-borne soil moisture retrieval include: (i) the optimization or inversion method applied to the radiative transfer model (RTM), such as the Single Channel Algorithm (SCA) or the Land Parameter Retrieval Model (LPRM); (ii) the selection of the observed brightness temperatures (Tbs), e.g. polarization and incidence angle; (iii) the definition of the cost function and the impact of prior information in it; and (iv) the RTM parameterization (e.g. the parameterizations officially used by the SMOS L2 and SMAP L2 retrieval products, the ECMWF-based SMOS assimilation product, and the SMAP L4 assimilation product, as well as perturbations from those configurations). This study aims at disentangling the relative importance of the above-mentioned sources of uncertainty by carrying out soil moisture retrieval experiments using SMOS Tb observations in different settings, some of which are mentioned above. The ensemble uncertainties are evaluated at 11 reference CalVal sites over a time period of more than 5 years. These experimental retrievals were inter-compared and further confronted with in situ soil moisture measurements and operational SMOS L2 retrievals, using common skill metrics to quantify the temporal uncertainty in the retrievals.

  6. Evaluating Prognostics Performance for Algorithms Incorporating Uncertainty Estimates

    Data.gov (United States)

National Aeronautics and Space Administration — Uncertainty Representation and Management (URM) are an integral part of the prognostic system development. As capabilities of prediction algorithms evolve, research...

  7. Methods for the calculation of uncertainty in analytical chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Sohn, S. C.; Park, Y. J.; Park, K. K.; Jee, K. Y.; Joe, K. S.; Kim, W. H

    2000-07-01

This report describes the statistical rules for evaluating and expressing uncertainty in analytical chemistry. The procedures for the evaluation of uncertainty in chemical analysis are illustrated by worked examples. This report, in particular, gives guidance on how uncertainty can be estimated from various chemical analyses. This report can also be used for planning the experiments which will provide the information required to obtain an estimate of uncertainty for the method.
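The kind of worked example such a report contains can be sketched as a GUM-style combination of relative standard uncertainties in quadrature for a multiplicative measurement model. All quantities and values below are hypothetical, not taken from the report:

```python
import math

# Hypothetical example: concentration c = m / (V * P) from mass m,
# volume V and purity P, each with a standard uncertainty; the input
# quantities are assumed independent.
m, u_m = 100.2, 0.2       # analyte mass (mg) and standard uncertainty
V, u_V = 25.00, 0.03      # volume (mL) and standard uncertainty
P, u_P = 0.9980, 0.0005   # purity (fraction) and standard uncertainty

c = m / (V * P)           # concentration (mg/mL)

# For a purely multiplicative model, relative standard uncertainties
# combine as a root sum of squares.
u_c = c * math.sqrt((u_m / m) ** 2 + (u_V / V) ** 2 + (u_P / P) ** 2)
U_c = 2.0 * u_c           # expanded uncertainty, coverage factor k = 2
```

The result would be reported as c ± U_c, stating the coverage factor used.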

  8. Including uncertainty in hazard analysis through fuzzy measures

    International Nuclear Information System (INIS)

    Bott, T.F.; Eisenhawer, S.W.

    1997-12-01

This paper presents a method for capturing the uncertainty expressed by a Hazard Analysis (HA) expert team when estimating the frequencies and consequences of accident sequences, and provides a sound mathematical framework for propagating this uncertainty to the risk estimates for these accident sequences. The uncertainty is readily expressed as distributions that can visually aid the analyst in determining the extent and source of risk uncertainty in HA accident sequences. The results can also be expressed as single statistics of the distribution, in a manner analogous to expressing a probabilistic distribution as a point-value statistic such as a mean or median. The study discussed here used data collected during the elicitation portion of an HA on a high-level waste transfer process to demonstrate the techniques for capturing uncertainty. These data came from observations of the uncertainty that HA team members expressed in assigning frequencies and consequences to accident sequences during an actual HA. This uncertainty was captured and manipulated using ideas from possibility theory. The result of this study is a practical method for displaying and assessing the uncertainty in the HA team estimates of the frequency and consequences for accident sequences. This uncertainty provides potentially valuable information about accident sequences that typically is lost in the HA process.
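A minimal sketch of possibility-theoretic propagation in this spirit: expert estimates are encoded as triangular fuzzy numbers and combined by interval arithmetic on alpha-cuts. The encoding and all numbers are assumptions for illustration; the paper's actual elicitation data and membership functions are not reproduced here:

```python
import numpy as np

# Hypothetical triangular fuzzy numbers (low, most-possible, high).
freq = (1e-4, 5e-4, 2e-3)      # accident frequency (events per year)
cons = (10.0, 50.0, 200.0)     # consequence (units per event)

def alpha_cut(tri, alpha):
    """Interval of values with possibility >= alpha for a triangular number."""
    lo, mode, hi = tri
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

alphas = np.linspace(0.0, 1.0, 11)
risk_cuts = []
for a in alphas:
    f_lo, f_hi = alpha_cut(freq, a)
    c_lo, c_hi = alpha_cut(cons, a)
    # Interval multiplication (all endpoints positive here) propagates
    # the fuzzy risk = frequency x consequence at this alpha level.
    risk_cuts.append((f_lo * c_lo, f_hi * c_hi))

# Point summary analogous to a median: the fully possible (alpha = 1) risk.
modal_risk = risk_cuts[-1][0]
```

The stack of alpha-cut intervals is the fuzzy risk distribution; collapsing it to `modal_risk` mirrors reducing a probability distribution to a point-value statistic.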

  9. Uncertainty as Knowledge: Constraints on Policy Choices Provided by Analysis of Uncertainty

    Science.gov (United States)

    Lewandowsky, S.; Risbey, J.; Smithson, M.; Newell, B. R.

    2012-12-01

    Uncertainty forms an integral part of climate science, and it is often cited in connection with arguments against mitigative action. We argue that an analysis of uncertainty must consider existing knowledge as well as uncertainty, and the two must be evaluated with respect to the outcomes and risks associated with possible policy options. Although risk judgments are inherently subjective, an analysis of the role of uncertainty within the climate system yields two constraints that are robust to a broad range of assumptions. Those constraints are that (a) greater uncertainty about the climate system is necessarily associated with greater expected damages from warming, and (b) greater uncertainty translates into a greater risk of the failure of mitigation efforts. These ordinal constraints are unaffected by subjective or cultural risk-perception factors, they are independent of the discount rate, and they are independent of the magnitude of the estimate for climate sensitivity. The constraints mean that any appeal to uncertainty must imply a stronger, rather than weaker, need to cut greenhouse gas emissions than in the absence of uncertainty.

  10. Collision strengths for dipole-allowed transitions in S II

    International Nuclear Information System (INIS)

    Ho, Y.K.; Henry, R.J.W.

    1990-01-01

Calculations of collision strengths for electron-impact excitations of S II from the ground state 3p³ ⁴S° to excited states 3p⁴ ⁴P, 3d ⁴F, 3d ⁴D, 4s ⁴P, and 3d ⁴P were carried out using the R-matrix code described by Berrington et al. (1978) and the NIEM code described by Henry et al. (1981). Results are presented for the thermally averaged collision strengths for the five-state and six-state calculations. Convergence behavior was examined by comparison with the six-state calculations and the previously obtained two-state calculations. Uncertainties for these transitions were estimated to be within 20 percent, except for the ⁴S° - 3p⁴ ⁴P transition, for which a 40 percent uncertainty was estimated. 22 refs

  11. Uncertainties on lung doses from inhaled plutonium.

    Science.gov (United States)

    Puncher, Matthew; Birchall, Alan; Bull, Richard K

    2011-10-01

    In a recent epidemiological study, Bayesian uncertainties on lung doses have been calculated to determine lung cancer risk from occupational exposures to plutonium. These calculations used a revised version of the Human Respiratory Tract Model (HRTM) published by the ICRP. In addition to the Bayesian analyses, which give probability distributions of doses, point estimates of doses (single estimates without uncertainty) were also provided for that study using the existing HRTM as it is described in ICRP Publication 66; these are to be used in a preliminary analysis of risk. To infer the differences between the point estimates and Bayesian uncertainty analyses, this paper applies the methodology to former workers of the United Kingdom Atomic Energy Authority (UKAEA), who constituted a subset of the study cohort. The resulting probability distributions of lung doses are compared with the point estimates obtained for each worker. It is shown that mean posterior lung doses are around two- to fourfold higher than point estimates and that uncertainties on doses vary over a wide range, greater than two orders of magnitude for some lung tissues. In addition, we demonstrate that uncertainties on the parameter values, rather than the model structure, are largely responsible for these effects. Of these it appears to be the parameters describing absorption from the lungs to blood that have the greatest impact on estimates of lung doses from urine bioassay. Therefore, accurate determination of the chemical form of inhaled plutonium and the absorption parameter values for these materials is important for obtaining reliable estimates of lung doses and hence risk from occupational exposures to plutonium.

  12. Uncertainties in radioecological assessment models

    International Nuclear Information System (INIS)

    Hoffman, F.O.; Miller, C.W.; Ng, Y.C.

    1983-01-01

    Environmental radiological assessments rely heavily on the use of mathematical models. The predictions of these models are inherently uncertain because models are inexact representations of real systems. The major sources of this uncertainty are related to bias in model formulation and imprecision in parameter estimation. The magnitude of uncertainty is a function of the questions asked of the model and the specific radionuclides and exposure pathways of dominant importance. It is concluded that models developed as research tools should be distinguished from models developed for assessment applications. Furthermore, increased model complexity does not necessarily guarantee increased accuracy. To improve the realism of assessment modeling, stochastic procedures are recommended that translate uncertain parameter estimates into a distribution of predicted values. These procedures also permit the importance of model parameters to be ranked according to their relative contribution to the overall predicted uncertainty. Although confidence in model predictions can be improved through site-specific parameter estimation and increased model validation, health risk factors and internal dosimetry models will probably remain important contributors to the amount of uncertainty that is irreducible. 41 references, 4 figures, 4 tables

  13. Uncertainty of fast biological radiation dose assessment for emergency response scenarios.

    Science.gov (United States)

    Ainsbury, Elizabeth A; Higueras, Manuel; Puig, Pedro; Einbeck, Jochen; Samaga, Daniel; Barquinero, Joan Francesc; Barrios, Lleonard; Brzozowska, Beata; Fattibene, Paola; Gregoire, Eric; Jaworska, Alicja; Lloyd, David; Oestreicher, Ursula; Romm, Horst; Rothkamm, Kai; Roy, Laurence; Sommer, Sylwester; Terzoudi, Georgia; Thierens, Hubert; Trompier, Francois; Vral, Anne; Woda, Clemens

    2017-01-01

Reliable dose estimation is an important factor in appropriate dosimetric triage categorization of exposed individuals to support radiation emergency response. Following work done under the EU FP7 MULTIBIODOSE and RENEB projects, formal methods for defining uncertainties on biological dose estimates are compared using simulated and real data from recent exercises. The results demonstrate that a Bayesian method of uncertainty assessment is the most appropriate, even in the absence of detailed prior information. The relative accuracy and relevance of techniques for calculating uncertainty and combining assay results to produce single dose and uncertainty estimates are further discussed. Finally, it is demonstrated that whatever uncertainty estimation method is employed, ignoring the uncertainty on fast dose assessments can have an important impact on rapid biodosimetric categorization.

  14. Measurement Errors and Uncertainties Theory and Practice

    CERN Document Server

    Rabinovich, Semyon G

    2006-01-01

Measurement Errors and Uncertainties addresses the most important problems that physicists and engineers encounter when estimating errors and uncertainty. Building from the fundamentals of measurement theory, the author develops the theory of accuracy of measurements and offers a wealth of practical recommendations and examples of applications. This new edition covers a wide range of subjects, including: - Basic concepts of metrology - Measuring instruments characterization, standardization and calibration - Estimation of errors and uncertainty of single and multiple measurements - Modern probability-based methods of estimating measurement uncertainty With this new edition, the author completes the development of the new theory of indirect measurements. This theory provides more accurate and efficient methods for processing indirect measurement data. It eliminates the need to calculate the correlation coefficient - a stumbling block in measurement data processing - and offers for the first time a way to obtain...

  15. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia (Sandia National Laboratories, Livermore, CA); Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Eddy, John P.

    2011-12-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.

  16. Phylogenetic uncertainty can bias the number of evolutionary transitions estimated from ancestral state reconstruction methods.

    Science.gov (United States)

    Duchêne, Sebastian; Lanfear, Robert

    2015-09-01

    Ancestral state reconstruction (ASR) is a popular method for exploring the evolutionary history of traits that leave little or no trace in the fossil record. For example, it has been used to test hypotheses about the number of evolutionary origins of key life-history traits such as oviparity, or key morphological structures such as wings. Many studies that use ASR have suggested that the number of evolutionary origins of such traits is higher than was previously thought. The scope of such inferences is increasing rapidly, facilitated by the construction of very large phylogenies and life-history databases. In this paper, we use simulations to show that the number of evolutionary origins of a trait tends to be overestimated when the phylogeny is not perfect. In some cases, the estimated number of transitions can be several fold higher than the true value. Furthermore, we show that the bias is not always corrected by standard approaches to account for phylogenetic uncertainty, such as repeating the analysis on a large collection of possible trees. These findings have important implications for studies that seek to estimate the number of origins of a trait, particularly those that use large phylogenies that are associated with considerable uncertainty. We discuss the implications of this bias, and methods to ameliorate it. © 2015 Wiley Periodicals, Inc.

  17. Comparison of two uncertainty dressing methods: SAD VS DAD

    Science.gov (United States)

    Chardon, Jérémy; Mathevet, Thibault; Le-Lay, Matthieu; Gailhard, Joël

    2014-05-01

Hydrological Ensemble Prediction Systems (HEPSs) allow a better representation of meteorological and hydrological forecast uncertainties and improve human expertise in hydrological forecasting. An operational HEPS has been developed at EDF (the French electricity producer) since 2008 and has been in use since 2010 on about a hundred watersheds in France. Depending on the hydro-meteorological situation, streamflow forecasts can be issued on a daily basis and are used to support dam management operations during floods or dam works within the river. Part of this HEPS is a streamflow ensemble post-processing step, in which substantial human expertise is involved. The aim of post-processing methods is to achieve better overall performance by dressing hydrological ensemble forecasts with hydrological model uncertainties. The present study compares two post-processing methods, both based on a logarithmic representation of the distribution of residuals of the Rainfall-Runoff (RR) model under "perfect" forcing forecasts - i.e. forecasts with observed meteorological variables as inputs. The only difference between the two post-processing methods lies in the sampling of the perfect forcing forecasts for the estimation of the residual statistics: (i) the first method, referred to here as the Statistical Analogy Dressing (SAD) model and used in the operational HEPS, estimates the statistics of the residuals beforehand, by streamflow sub-samples of quantile class and lead time, since RR model residuals are not homoscedastic; (ii) an alternative method, referred to as the Dynamical Analogy Dressing (DAD) model, estimates the statistics of the residuals using the N most similar perfect forcing forecasts, where the selection of these N forecasts is based on streamflow range and variation. On a set of 20 watersheds used for operational forecasts, both models were evaluated with perfect forcing forecasts and with ensemble forecasts. Results show that both approaches ensure a good post-processing of
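The SAD idea of pooling log-residual statistics by streamflow quantile class and then dressing a raw forecast can be sketched as follows, with synthetic data standing in for the archive of "perfect" forcing simulations (all names and values are illustrative, not EDF's operational system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic archive: simulated flows under perfect forcing vs. observations.
sim = rng.gamma(2.0, 50.0, 2000)                      # simulated flows (m3/s)
obs = sim * np.exp(rng.normal(0.0, 0.2, sim.size))    # synthetic observations
log_res = np.log(obs / sim)                           # multiplicative residuals

# Streamflow quantile classes, since residuals are not homoscedastic.
edges = np.quantile(sim, [0.0, 0.25, 0.5, 0.75, 1.0])

def dress(forecast, n_members=50):
    """Dress a raw forecast with residuals drawn from its flow class."""
    k = int(np.clip(np.searchsorted(edges, forecast) - 1, 0, 3))
    in_class = (sim >= edges[k]) & (sim <= edges[k + 1])
    draws = rng.choice(log_res[in_class], size=n_members)
    return forecast * np.exp(draws)                   # dressed ensemble

members = dress(120.0)                                # 50-member ensemble
```

The DAD variant would replace the fixed quantile classes with the N archive cases most similar to the current forecast in streamflow range and variation.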

  18. Uncertainty and Cognitive Control

    Directory of Open Access Journals (Sweden)

    Faisal eMushtaq

    2011-10-01

A growing body of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.

  19. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...

  20. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    Science.gov (United States)

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
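The underlying idea of Latin Hypercube error propagation through a raster model can be sketched in the spirit of REPTool, but not using its actual API; the toy model, names, and values below are all hypothetical:

```python
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(1)

def lhs_normal(mu, sd, n):
    # 1-D Latin Hypercube sample from N(mu, sd): one stratified uniform
    # per probability bin, mapped through the normal inverse CDF, shuffled.
    u = (np.arange(n) + rng.random(n)) / n
    rng.shuffle(u)
    dist = NormalDist(mu, sd)
    return np.array([dist.inv_cdf(x) for x in u])

# Toy geospatial model: output = a * raster1 + b * raster2.
N = 200
raster1 = np.array([[1.0, 2.0], [3.0, 4.0]])
raster2 = np.array([[0.5, 0.5], [1.0, 1.0]])
a_samples = lhs_normal(2.0, 0.2, N)   # coefficient a and its uncertainty
b_samples = lhs_normal(1.0, 0.1, N)   # coefficient b and its uncertainty

outputs = np.empty((N,) + raster1.shape)
for i in range(N):
    # Per-cell normal errors with a spatially invariant standard deviation.
    e1 = rng.normal(0.0, 0.10, raster1.shape)
    e2 = rng.normal(0.0, 0.05, raster2.shape)
    outputs[i] = a_samples[i] * (raster1 + e1) + b_samples[i] * (raster2 + e2)

cell_mean = outputs.mean(axis=0)      # per-cell prediction
cell_sd = outputs.std(axis=0)         # per-cell prediction uncertainty
```

The per-cell spread of `outputs` is the distribution of uncertainty for each raster cell; comparing runs with only raster error against runs with only coefficient error gives a rough relative-variance-contribution picture.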

  1. Estimating uncertainty and its temporal variation related to global climate models in quantifying climate change impacts on hydrology

    Science.gov (United States)

    Shen, Mingxi; Chen, Jie; Zhuan, Meijia; Chen, Hua; Xu, Chong-Yu; Xiong, Lihua

    2018-01-01

Uncertainty estimation of climate change impacts on hydrology has received much attention in the research community. The choice of a global climate model (GCM) is usually considered the largest contributor to the uncertainty of climate change impacts. The temporal variation of GCM uncertainty needs to be investigated for making long-term decisions to deal with climate change. Accordingly, this study investigated the temporal variation (mainly long-term) of uncertainty related to the choice of a GCM in predicting climate change impacts on hydrology, using multiple GCMs over multiple continuous future periods. Specifically, twenty CMIP5 GCMs under the RCP4.5 and RCP8.5 emission scenarios were adopted to adequately represent this uncertainty envelope, and fifty-one 30-year future periods moving from 2021 to 2100 with a 1-year interval were produced to express the temporal variation. Future climatic and hydrological regimes over all future periods were compared to those in the reference period (1971-2000) using a set of metrics, including means and extremes. The periodicity of climatic and hydrological changes and their uncertainty were analyzed using wavelet analysis, while the trend was analyzed using the Mann-Kendall trend test and regression analysis. The results showed that both future climate change (precipitation and temperature) and the hydrological response predicted by the twenty GCMs were highly uncertain, and the uncertainty increased significantly over time. For example, the change of mean annual precipitation increased from 1.4% in 2021-2050 to 6.5% in 2071-2100 for RCP4.5 in terms of the median value of multi-models, but the projected uncertainty reached 21.7% in 2021-2050 and 25.1% in 2071-2100 for RCP4.5. The uncertainty under a high emission scenario (RCP8.5) was much larger than that under a relatively low emission scenario (RCP4.5). Almost all climatic and hydrological regimes and their uncertainty did not show significant periodicity at the P = .05 significance level.

  2. Estimation of uncertainties in resonance parameters of {sup 56}Fe, {sup 239}Pu, {sup 240}Pu and {sup 238}U

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, Tsuneo; Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-05-01

Uncertainties have been estimated for the resonance parameters of {sup 56}Fe, {sup 239}Pu, {sup 240}Pu and {sup 238}U contained in JENDL-3.2. Errors of the parameters were determined from the measurements on which the evaluation was based. The estimated errors have been compiled in MF32 of the ENDF format. The numerical results are given in tables. (author)

  3. Uncertainty and inference in the world of paleoecological data

    Science.gov (United States)

    McLachlan, J. S.; Dawson, A.; Dietze, M.; Finley, M.; Hooten, M.; Itter, M.; Jackson, S. T.; Marlon, J. R.; Raiho, A.; Tipton, J.; Williams, J.

    2017-12-01

Proxy data in paleoecology and paleoclimatology share a common set of biases and uncertainties: spatiotemporal error associated with the taphonomic processes of deposition, preservation, and dating; calibration error between proxy data and the ecosystem states of interest; and error in the interpolation of calibrated estimates across space and time. Researchers often account for this daunting suite of challenges by applying qualitative expert judgment: inferring the past states of ecosystems and assessing the level of uncertainty in those states subjectively. The effectiveness of this approach can be seen in the extent to which future observations confirm previous assertions. Hierarchical Bayesian (HB) statistical approaches allow an alternative approach to accounting for multiple uncertainties in paleo data. HB estimates of ecosystem state formally account for each of the common uncertainties listed above. HB approaches can readily incorporate additional data, and data of different types, into estimates of ecosystem state. And HB estimates of ecosystem state, with associated uncertainty, can be used to constrain forecasts of ecosystem dynamics based on mechanistic ecosystem models using data assimilation. Decisions about how to structure an HB model are also subjective, which creates a parallel framework for deciding how to interpret data from the deep past. Our group, the Paleoecological Observatory Network (PalEON), has applied hierarchical Bayesian statistics to formally account for uncertainties in proxy-based estimates of past climate, fire, primary productivity, biomass, and vegetation composition. Our estimates often reveal new patterns of past ecosystem change, which is an unambiguously good thing, but we also often estimate a level of uncertainty that is uncomfortably high for many researchers. High levels of uncertainty are due to several features of the HB approach: spatiotemporal smoothing, the formal aggregation of multiple types of uncertainty, and a

  4. Sensitivity/uncertainty analysis for the Hiroshima dosimetry reevaluation effort

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Lillie, R.A.; Pace, J.V. III; Cacuci, D.G.

    1987-01-01

    Uncertainty estimates and cross correlations by range/survivor location have been obtained for the free-in-air (FIA) tissue kerma for the Hiroshima atomic event. These uncertainties in the FIA kerma include contributions due to various modeling parameters and the basic cross section data and are given at three ground ranges, 700, 1000 and 1500 m. The estimated uncertainties are nearly constant over the given ground ranges and are approximately 27% for the prompt neutron kerma and secondary gamma kerma and 35% for the prompt gamma kerma. The total kerma uncertainty is dominated by the secondary gamma kerma uncertainties which are in turn largely due to the modeling parameter uncertainties

  5. Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for disturbance distribution selection and parameterization

    Science.gov (United States)

    Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark

    2018-04-01

Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; they express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making, it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper's focus is set on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative than dip vector
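The MCUE sampling step can be sketched as follows, with hypothetical structural measurements and one plausible choice of disturbance distributions (normal for interface locations, von Mises for dip angles); the actual modeling engine call is outside this sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical structural data set for illustration.
interface_xyz = np.array([[10.0, 5.0, -20.0],
                          [40.0, 8.0, -35.0]])   # interface points (m)
dip_deg = np.array([30.0, 45.0])                 # foliation dips (degrees)
sigma_xyz = 2.0        # location disturbance (m), normal distribution
kappa = 100.0          # von Mises concentration for angular disturbance

def altered_data_sets(n_draws=500):
    """Draw perturbed copies of the data set from disturbance distributions."""
    sets = []
    for _ in range(n_draws):
        pts = interface_xyz + rng.normal(0.0, sigma_xyz, interface_xyz.shape)
        ang = np.degrees(rng.vonmises(np.radians(dip_deg), kappa))
        sets.append((pts, ang))
    return sets

ensemble = altered_data_sets()
# Each altered data set would be passed to the implicit modeling engine;
# the resulting plausible models are merged into one probabilistic model.
```

The choice of disturbance distribution and its parameterization is exactly what the paper examines; the normal/von Mises pairing here is only one candidate.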

  6. Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for disturbance distribution selection and parameterization

    Directory of Open Access Journals (Sweden)

    E. Pakyuz-Charrier

    2018-04-01

Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; they express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making, it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper's focus is set on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution.
Pole vector sampling is proposed as a more rigorous alternative than
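The core MCUE step described above, drawing disturbed copies of the structural input data from disturbance distributions, can be sketched as follows. All values (the measurements, the Gaussian noise model and its spreads) are illustrative assumptions; the paper itself argues for spherical distributions such as von Mises-Fisher for orientation data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical structural measurements: (dip, azimuth) in degrees.
observations = np.array([[30.0, 110.0], [45.0, 95.0], [38.0, 102.0]])

def perturb(obs, sigma_dip=2.0, sigma_az=3.0, n_sets=500):
    """Draw altered data sets from Gaussian disturbance distributions
    centred on each original measurement (a simplification: MCUE favours
    spherical distributions for orientation data)."""
    noise = rng.normal(0.0, [sigma_dip, sigma_az], size=(n_sets,) + obs.shape)
    return obs + noise  # shape: (n_sets, n_obs, 2)

# Each altered set would be fed to the implicit modelling engine and the
# resulting plausible models merged into one probabilistic model.
ensemble = perturb(observations)
print(ensemble.shape)  # (500, 3, 2)
```

In the real workflow each of the 500 altered data sets would drive a separate model run; here only the disturbed inputs are produced.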

  7. Systematic uncertainties in direct reaction theories

    International Nuclear Information System (INIS)

    Lovell, A E; Nunes, F M

    2015-01-01

Nuclear reactions are common probes to study nuclei and, in particular, nuclei at the limits of stability. The data from reaction measurements depend strongly on theory for a reliable interpretation. Even when using state-of-the-art reaction theories, there are a number of sources of systematic uncertainties. These uncertainties are often unquantified or estimated in a very crude manner. It is clear that for theory to be useful, a quantitative understanding of the uncertainties is critical. Here, we discuss major sources of uncertainties in a variety of reaction theories used to analyze (d,p) nuclear reactions in the energy range E_d = 10–20 MeV, and we provide a critical view on how these have been handled in the past and how estimates can be improved. (paper)

  8. The uncertainty budget in pharmaceutical industry

    DEFF Research Database (Denmark)

    Heydorn, Kaj

… of their uncertainty, exactly as described in GUM [2]. The pharmaceutical industry has therefore over the last 5 years shown increasing interest in accreditation according to ISO 17025 [3], and today uncertainty budgets are being developed for all so-called critical measurements. The uncertainty of results obtained … that the uncertainty of a particular result is independent of the method used for its estimation. Several examples of uncertainty budgets for critical parameters based on the bottom-up procedure will be discussed, and it will be shown how the top-down method is used as a means of verifying uncertainty budgets, based …

  9. The Uncertainty Multiplier and Business Cycles

    OpenAIRE

    Saijo, Hikaru

    2013-01-01

    I study a business cycle model where agents learn about the state of the economy by accumulating capital. During recessions, agents invest less, and this generates noisier estimates of macroeconomic conditions and an increase in uncertainty. The endogenous increase in aggregate uncertainty further reduces economic activity, which in turn leads to more uncertainty, and so on. Thus, through changes in uncertainty, learning gives rise to a multiplier effect that amplifies business cycles. I use ...

  10. Monte Carlo estimation of neoclassical transport for the TJ-II stellarator

    International Nuclear Information System (INIS)

    Tribaldos, V.

    2001-01-01

The neoclassical transport properties of the TJ-II stellarator [C. Alejaldre et al., Fusion Technol. 13, 521 (1988)] are studied with the monoenergetic Monte Carlo technique. Because of the rich magnetic-field structure of TJ-II, a compromise between the number of modes and particles and the required computing time shows that about one thousand particles and one hundred harmonics are needed to obtain computationally reliable estimates of the monoenergetic diffusion coefficients, although these requirements are probably too demanding for routine transport estimations. A database containing the normalized monoenergetic diffusion coefficients for several radial positions, radial electric fields and collisionalities has been fitted using a neural network. This fit reduces the number of points needed in the database and allows smooth interpolation and extrapolation when performing the convolutions of the monoenergetic coefficients with the Maxwellian. For two typical TJ-II discharges, the ambipolar radial electric field and the neoclassical particle and heat fluxes are presented, both showing rather large positive radial electric fields at the plasma core and small negative fields at the edge. The neoclassical particle and energy confinement times are in surprisingly good agreement with the experimental energy balance analysis and the international stellarator scaling; although no satisfactory explanation is available yet, the large neoclassical diffusion caused by the complex ripple structure of the TJ-II magnetic field may be an important ingredient
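The convolution of monoenergetic coefficients with a Maxwellian mentioned above amounts to a weighted integral over the speed distribution. A minimal numerical sketch, with a made-up monoenergetic coefficient rather than a real TJ-II neural-network fit:

```python
import numpy as np

def maxwellian_speed_pdf(v, v_th):
    # Maxwellian speed distribution; v_th = sqrt(2kT/m)
    return 4.0 / np.sqrt(np.pi) * v**2 / v_th**3 * np.exp(-(v / v_th) ** 2)

def trapezoid(y, x):
    # simple trapezoidal quadrature
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def thermal_average(d_mono, v_th, v_max=6.0, n=4000):
    """Convolve a monoenergetic coefficient D(v) with a Maxwellian."""
    v = np.linspace(1e-6, v_max * v_th, n)
    w = maxwellian_speed_pdf(v, v_th)
    return trapezoid(d_mono(v) * w, v) / trapezoid(w, v)

# Illustrative monoenergetic coefficient (not an actual TJ-II fit):
d_avg = thermal_average(lambda v: 1.0 + 0.1 * v**2, v_th=1.0)
print(round(d_avg, 3))  # ≈ 1.15, since <v^2> = 1.5 v_th^2 for a Maxwellian
```

In the paper the neural-network fit of D(v) would replace the lambda, and the average would be repeated for particle and heat flux moments.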

  11. Comment on ‘Empirical versus modelling approaches to the estimation of measurement uncertainty caused by primary sampling’ by J. A. Lyn, M. H. Ramsey, A. P. Damant and R. Wood

    NARCIS (Netherlands)

    Geelhoed, B.

    2009-01-01

    Recently, Lyn et al. (Analyst, 2007, 132, 1231) compared two ways of estimating the standard uncertainty of sampling pistachio nuts for aflatoxins – a modelling method and an empirical method. Their case study used robust analysis of variance (RANOVA) to derive the uncertainty estimates,

  12. Background and Qualification of Uncertainty Methods

    International Nuclear Information System (INIS)

    D'Auria, F.; Petruzzi, A.

    2008-01-01

The evaluation of uncertainty constitutes the necessary supplement of Best Estimate calculations performed to understand accident scenarios in water cooled nuclear reactors. The needs come from the imperfection of computational tools on the one side and from the interest in using such tools to get more precise evaluations of safety margins on the other. The paper reviews the salient features of two independent approaches for estimating uncertainties associated with predictions of complex system codes. Namely, the propagation of code input error and the propagation of the calculation output error constitute the keywords for identifying the methods of current interest for industrial applications. Through the developed methods, uncertainty bands can be derived (both upper and lower) for any desired quantity of the transient of interest. In the second case, the uncertainty method is coupled with the thermal-hydraulic code to obtain the Code with the capability of Internal Assessment of Uncertainty, whose features are discussed in more detail.

  13. Effect of precipitation spatial distribution uncertainty on the uncertainty bounds of a snowmelt runoff model output

    Science.gov (United States)

    Jacquin, A. P.

    2012-04-01

    This study analyses the effect of precipitation spatial distribution uncertainty on the uncertainty bounds of a snowmelt runoff model's discharge estimates. Prediction uncertainty bounds are derived using the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment glaciers. Precipitation amounts at each elevation zone i are estimated as the product between observed precipitation (at a single station within the catchment) and a precipitation factor FPi. Thus, these factors provide a simplified representation of the spatial variation of precipitation, specifically the shape of the functional relationship between precipitation and height. In the absence of information about appropriate values of the precipitation factors FPi, these are estimated through standard calibration procedures. The catchment case study is Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. Monte Carlo samples of the model output are obtained by randomly varying the model parameters within their feasible ranges. In the first experiment, the precipitation factors FPi are considered unknown and thus included in the sampling process. The total number of unknown parameters in this case is 16. In the second experiment, precipitation factors FPi are estimated a priori, by means of a long term water balance between observed discharge at the catchment outlet, evapotranspiration estimates and observed precipitation. In this case, the number of unknown parameters reduces to 11. The feasible ranges assigned to the precipitation factors in the first experiment are slightly wider than the range of fixed precipitation factors used in the second experiment. The mean squared error of the Box-Cox transformed discharge during the calibration period is used for the evaluation of the
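As a rough illustration of the GLUE procedure used in this study, the sketch below samples parameters from feasible ranges, retains "behavioral" runs above a likelihood threshold, and derives 5–95% prediction bounds. The model, data and threshold are toy assumptions, not the Aconcagua case study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "watershed" model: discharge = a * precip + b, standing in for the
# conceptual snowmelt model (which has 11-16 parameters in the study).
precip = rng.gamma(2.0, 50.0, size=60)                  # monthly precipitation
q_obs = 0.7 * precip + 5.0 + rng.normal(0.0, 4.0, 60)   # synthetic observations

n_samples, threshold = 5000, 0.7
a = rng.uniform(0.0, 2.0, n_samples)    # feasible parameter ranges (assumed)
b = rng.uniform(-20.0, 20.0, n_samples)
sims = a[:, None] * precip + b[:, None]

# Nash-Sutcliffe efficiency as an informal GLUE likelihood measure
nse = 1.0 - ((sims - q_obs) ** 2).sum(axis=1) / ((q_obs - q_obs.mean()) ** 2).sum()
behavioral = sims[nse > threshold]

lower, upper = np.percentile(behavioral, [5, 95], axis=0)
coverage = np.mean((q_obs >= lower) & (q_obs <= upper))
print(f"{behavioral.shape[0]} behavioral runs; coverage {coverage:.0%}")
```

Fixing some parameters a priori, as in the study's second experiment, simply shrinks the sampled parameter space and typically narrows the bounds.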

  14. Uncertainty analysis for secondary energy distributions

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1978-01-01

In many transport calculations the integral design parameter of interest (response) is determined mainly by secondary particles such as gamma rays from (n,γ) reactions or secondary neutrons from inelastic scattering events or (n,2n) reactions. Standard sensitivity analysis usually allows one to calculate the sensitivities to the production cross sections of such secondaries, but an extended formalism is needed to also obtain the sensitivities to the energy distribution of the generated secondary particles. For a 30-group standard cross-section set 84% of all non-zero table positions pertain to the description of secondary energy distributions (SED's) and only 16% to the actual reaction cross sections. Therefore, any sensitivity/uncertainty analysis which does not consider the effects of SED's is incomplete and neglects most of the input data. This paper describes how sensitivity profiles for SED's are obtained and used to estimate the uncertainty of an integral response due to uncertainties in these SED's. The detailed theory is documented elsewhere and implemented in the LASL sensitivity code SENSIT. SED sensitivity profiles have proven particularly valuable in cross-section uncertainty analyses for fusion reactors. Even when the production cross sections for secondary neutrons were assumed to be without error, the uncertainties in the energy distribution of these secondaries produced appreciable uncertainties in the calculated tritium breeding rate. However, complete error files for SED's are presently nonexistent. Therefore, methods will be described that allow rough error estimates due to estimated SED uncertainties based on integral SED sensitivities
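The way SED sensitivity profiles combine with estimated SED uncertainties follows the usual first-order "sandwich rule". A hedged sketch with invented sensitivities and covariances (not actual SENSIT output):

```python
import numpy as np

# Sensitivity profile S_i = (dR/R)/(dSED_i/SED_i) for four energy groups
# (illustrative values, not from a real sensitivity calculation).
S = np.array([0.12, 0.30, -0.05, 0.08])

# Assumed relative covariance of the SED data: 10% standard deviation
# per group, 50% correlation between adjacent groups.
std = np.full(4, 0.10)
corr = np.eye(4) + 0.5 * (np.eye(4, k=1) + np.eye(4, k=-1))
C = corr * np.outer(std, std)

rel_var = S @ C @ S        # first-order "sandwich rule"
print(f"relative uncertainty of the response: {np.sqrt(rel_var):.2%}")
```

With real SED covariance files unavailable, as the abstract notes, the matrix C would itself be a rough estimate, which is exactly why only rough response-error estimates are possible.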

  15. Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach

    Science.gov (United States)

    Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam

    2018-03-01

    We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.
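A much-reduced analogue of the data-assimilation idea above, combining disparate estimates of the same quantity in a Bayesian way, is precision weighting under Gaussian assumptions (the paper itself uses a skewed likelihood and several structurally different data sources, so this is only a sketch):

```python
import numpy as np

# Two hypothetical estimates of the same gross land-use change (kha/yr):
# precise national census data and a noisier survey source.
mu = np.array([12.0, 9.0])     # source means (assumed)
sigma = np.array([1.0, 4.0])   # source standard errors (assumed)

# With Gaussian likelihoods and a flat prior the posterior is the
# precision-weighted combination of the sources.
w = 1.0 / sigma**2
post_mean = np.sum(w * mu) / np.sum(w)
post_sd = np.sqrt(1.0 / np.sum(w))
print(round(post_mean, 2), round(post_sd, 2))  # 11.82 0.97
```

The posterior leans toward the precise census value while the survey still contributes, and the posterior standard deviation is smaller than either source's, which is the basic benefit of assimilating multiple data sets.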

  16. Supporting qualified database for V and V and uncertainty evaluation of best-estimate system codes

    International Nuclear Information System (INIS)

    Petruzzi, A.; D'Auria, F.

    2014-01-01

Uncertainty evaluation constitutes a key feature of the BEPU (Best Estimate Plus Uncertainty) process. The uncertainty can be the result of a Monte Carlo type analysis involving input uncertainty parameters or the outcome of a process involving the use of experimental data and connected code calculations. Those uncertainty methods are discussed in several papers and guidelines (IAEA-SRS-52, OECD/NEA BEMUSE reports). The present paper aims at discussing the role and the depth of the analysis required for merging, on the one side, suitable experimental data and, on the other side, qualified code calculation results. This aspect is mostly connected with the second approach for uncertainty mentioned above, but it can be used also in the framework of the first approach. Namely, the paper discusses the features and structure of the database that includes the following kinds of documents: 1. The 'RDS-facility' (Reference Data Set for the selected facility): this includes the description of the facility, the geometrical characterization of any component of the facility, the instrumentation, the data acquisition system, the evaluation of pressure losses, the physical properties of the material and the characterization of pumps, valves and heat losses; 2. The 'RDS-test' (Reference Data Set for the selected test of the facility): this includes the description of the main phenomena investigated during the test, the configuration of the facility for the selected test (possible new evaluation of pressure and heat losses if needed) and the specific boundary and initial conditions; 3. The 'QP' (Qualification Report) of the code calculation results: this includes the description of the nodalization developed following a set of homogeneous techniques, the achievement of the steady state conditions and the qualitative and quantitative analysis of the transient with the characterization of the Relevant Thermal-Hydraulics Aspects (RTA); 4. The EH (Engineering

  17. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called 'conservative' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the 'reasonable assurance' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  18. Estimation of radon progeny equilibrium factors and their uncertainty bounds using solid state nuclear track detectors

    International Nuclear Information System (INIS)

    Eappen, K.P.; Mayya, Y.S.; Patnaik, R.L.; Kushwaha, H.S.

    2006-01-01

For the assessment of inhalation doses due to radon and its progeny to uranium mine workers, it is necessary to have information on the time-integrated gas concentrations and equilibrium factors. Passive single cup dosimeters using solid state nuclear track detectors (SSNTD) are best suited for this purpose. These generally contain two SSNTDs, one placed inside the cup to measure only the radon gas concentration and the other outside the cup to record tracks due to both radon gas and the progeny species. However, since one obtains only two numbers by this method, whereas information on four quantities is required for an unambiguous estimation of dose, there is a need to develop an optimal methodology for extracting information on the equilibrium factors. Several techniques proposed earlier have essentially been based on deterministic approaches, which do not fully take into account all the possible uncertainties in the environmental parameters. Keeping this in view, a simple 'mean of bounds' methodology is proposed to extract equilibrium factors based on their absolute bounds and the associated uncertainties, as obtained from general arguments of radon progeny disequilibrium. These may be considered reasonable estimates of the equilibrium factors in the absence of knowledge of fluctuations in the environmental variables. The results are compared with those from direct measurements both in the laboratory and in real field situations. In view of the good agreement found between these, it is proposed that the simple mean-of-bounds estimate may be useful for practical applications in the inhalation dosimetry of mine workers

  19. Host model uncertainties in aerosol radiative forcing estimates: results from the AeroCom Prescribed intercomparison study

    Directory of Open Access Journals (Sweden)

    P. Stier

    2013-03-01

Full Text Available Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in twelve participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is −4.47 Wm−2 and the inter-model standard deviation is 0.55 Wm−2, corresponding to a relative standard deviation of 12%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.04 Wm−2, and the standard deviation increases to 1.01 Wm−2, corresponding to a significant relative standard deviation of 97%. However, the top-of-atmosphere forcing variability owing to absorption (subtracting the scattering case from the case with scattering and absorption) is low, with absolute (relative) standard deviations of 0.45 Wm−2 (8%) clear-sky and 0.62 Wm−2 (11%) all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about 36% of the overall sulfate forcing diversity of 0.11 Wm−2 in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model

  20. Uncertainty in Measurement: Procedures for Determining Uncertainty With Application to Clinical Laboratory Calculations.

    Science.gov (United States)

    Frenkel, Robert B; Farrance, Ian

    2018-01-01

    The "Guide to the Expression of Uncertainty in Measurement" (GUM) is the foundational document of metrology. Its recommendations apply to all areas of metrology including metrology associated with the biomedical sciences. When the output of a measurement process depends on the measurement of several inputs through a measurement equation or functional relationship, the propagation of uncertainties in the inputs to the uncertainty in the output demands a level of understanding of the differential calculus. This review is intended as an elementary guide to the differential calculus and its application to uncertainty in measurement. The review is in two parts. In Part I, Section 3, we consider the case of a single input and introduce the concepts of error and uncertainty. Next we discuss, in the following sections in Part I, such notions as derivatives and differentials, and the sensitivity of an output to errors in the input. The derivatives of functions are obtained using very elementary mathematics. The overall purpose of this review, here in Part I and subsequently in Part II, is to present the differential calculus for those in the medical sciences who wish to gain a quick but accurate understanding of the propagation of uncertainties. © 2018 Elsevier Inc. All rights reserved.
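The propagation described in this review, sensitivity coefficients from derivatives feeding a combined uncertainty, can be sketched numerically. The measurement equation and input values below are hypothetical, not taken from the review:

```python
import math

def propagate(f, x, u, h=1e-6):
    """First-order GUM propagation for uncorrelated inputs:
    u(y)^2 = sum_i (df/dx_i)^2 u(x_i)^2, with the sensitivity
    coefficients df/dx_i taken from central differences."""
    y = f(*x)
    var = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        c_i = (f(*xp) - f(*xm)) / (2.0 * h)  # sensitivity coefficient
        var += (c_i * u[i]) ** 2
    return y, math.sqrt(var)

# Hypothetical measurement equation y = a*b/c (not a formula from the review).
y, u_y = propagate(lambda a, b, c: a * b / c, [100.0, 2.0, 5.0], [2.0, 0.05, 0.1])
print(round(y, 2), round(u_y, 3))  # 40.0 1.51
```

For y = a·b/c the analytic sensitivities are b/c, a/c and −ab/c², and the numerical result above matches the closed-form combination to within the finite-difference error.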

  1. Numerical Uncertainty Analysis for Computational Fluid Dynamics using Student T Distribution -- Application of CFD Uncertainty Analysis Compared to Exact Analytical Solution

    Science.gov (United States)

Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.

    2014-01-01

    Computational Fluid Dynamics (CFD) is the standard numerical tool used by Fluid Dynamists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-T distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-T distribution can encompass the exact solution.
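A minimal sketch of the Student-t interval construction underlying this approach, using invented CFD output samples (the paper's actual inputs and responses are not reproduced here):

```python
import math
import statistics

# Hypothetical CFD responses from repeated runs with perturbed inputs.
samples = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]

n = len(samples)
mean = statistics.mean(samples)
s = statistics.stdev(samples)                    # sample standard deviation
# two-sided 95% Student-t critical values, keyed by degrees of freedom
t_95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571}[n - 1]
half_width = t_95 * s / math.sqrt(n)             # 95% uncertainty half-width
print(f"{mean:.3f} +/- {half_width:.3f}")
```

With only a handful of runs the t critical value is much larger than the normal value of 1.96, so small-sample CFD uncertainty estimates are correspondingly wide.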

  2. Uncertainty estimation of analysis of Fe, Ca, Zr, Ba, La, Ti and Ce in sediment sample using XRF method

    International Nuclear Information System (INIS)

    Sukirno; Agus Taftazani

    2010-01-01

An uncertainty analysis of Fe, Ca, Zr, Ba, La, Ti and Ce in a river sediment sample from Pancuran, Wonosari by the XRF method has been carried out. A test result is of little value unless it is accompanied by an uncertainty estimate. The calculation for the Ba metal is presented as an example. The aim of the research is to obtain an accreditation certificate for the X-Ray Fluorescence method at the analytical laboratory of PTAPB – BATAN, in accordance with ISO/IEC 17025:2005. The calculation showed that the uncertainty components come from: preparation of sample and standard/comparator, purity of material, counting statistics (sample and standard) and repeatability. The results showed that the metal concentrations in the river sediment of Pancuran Wonosari were Fe = 7.290%, Zr = 54.5 mg/kg, Ba = 1661.6 mg/kg, La = 22.9 mg/kg, Ce = 161.0 mg/kg, Ti = 3193.2 mg/kg and Ca = 7.816%, and the corresponding uncertainty estimates for Fe, Zr, Ba, La, Ce, Ti and Ca were ± 0.60%, ± 4.5 mg/kg, ± 55 mg/kg, ± 1.4 mg/kg, ± 12.0 mg/kg, ± 208 mg/kg and ± 0.61%. (author)
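The combination of the uncertainty components named above (preparation, purity, counting statistics, repeatability) is conventionally done by root-sum-of-squares. The component values below are assumptions for illustration, not the paper's actual budget:

```python
import math

# Illustrative relative (fractional) uncertainty components for one element.
components = {
    "preparation": 0.020,
    "purity": 0.010,
    "counting": 0.030,
    "repeatability": 0.025,
}

# Combined standard uncertainty (k=1) by root-sum-of-squares,
# assuming the components are independent.
u_c = math.sqrt(sum(u**2 for u in components.values()))
U = 2 * u_c   # expanded uncertainty at coverage factor k=2
print(f"combined {u_c:.3f}, expanded {U:.3f}")
```

Multiplying the expanded relative uncertainty by a reported concentration gives the ± value quoted with the result.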

  3. Exchange Rate Volatility, Inflation Uncertainty and Foreign Direct ...

    African Journals Online (AJOL)

    This article examines the effect of exchange rate volatility and inflation uncertainty on foreign direct investment in Nigeria. The investigation covers the period between 1970 and 2005. Exchange rate volatility and inflation uncertainty were estimated using the GARCH model. Estimation results indicated that exchange rate ...
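For reference, a GARCH(1,1) conditional-variance recursion of the kind used here to estimate volatility and uncertainty series can be sketched as below; the parameter values are assumptions for illustration, not estimates from the Nigerian data:

```python
# GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}

def garch_variance(returns, omega=0.1, alpha=0.1, beta=0.8):
    """Conditional-variance path, initialised at the unconditional
    variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r**2 + beta * sigma2[-1])
    return sigma2

vols = garch_variance([0.5, -2.0, 1.0, 0.3])
print([round(v, 3) for v in vols])  # [1.0, 0.925, 1.24, 1.192]
```

In practice the parameters (omega, alpha, beta) are fitted by maximum likelihood; the fitted variance series then serves as the volatility or uncertainty proxy in the investment regression.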

  4. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander

    2017-05-16

We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations which involve a large number of variables. In the numerical section we compare five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature and gradient-enhanced versions of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed in DLR, Germany.

  5. Generalized likelihood uncertainty estimation (GLUE) using adaptive Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik

    2008-01-01

In the last few decades hydrologists have made tremendous progress in using dynamic simulation models for the analysis and understanding of hydrologic systems. However, predictions with these models are often deterministic and as such they focus on the most probable forecast, without an explicit … of applications. However, the MC based sampling strategy of the prior parameter space typically utilized in GLUE is not particularly efficient in finding behavioral simulations. This becomes especially problematic for high-dimensional parameter estimation problems, and in the case of complex simulation models … propose an alternative strategy to determine the value of the cutoff threshold based on the appropriate coverage of the resulting uncertainty bounds. We demonstrate the superiority of this revised GLUE method with three different conceptual watershed models of increasing complexity, using both synthetic …

  6. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, Cetin; Williams, Brian; McClure, Patrick; Nelson, Ralph A.

    2010-01-01

Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments were the primary tool for the design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computational based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate models leading to predictive simulations. The cost saving goals of programs will require us to minimize the required number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities in the validation of predictive tools. Traditional methodologies will have to be modified to address these arising issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantification of uncertainties that allow for extension of results to areas of the validation domain that are not directly tested with experiments, which might include extension of the modeling and simulation (M and S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for

  7. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Unal, Cetin [Los Alamos National Laboratory; Williams, Brian [Los Alamos National Laboratory; Mc Clure, Patrick [Los Alamos National Laboratory; Nelson, Ralph A [IDAHO NATIONAL LAB

    2010-01-01

Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments were the primary tool for the design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computational based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate models leading to predictive simulations. The cost saving goals of programs will require us to minimize the required number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities in the validation of predictive tools. Traditional methodologies will have to be modified to address these arising issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantification of uncertainties that allow for extension of results to areas of the validation domain that are not directly tested with experiments, which might include extension of the modeling and simulation (M&S) capabilities for application to full-scale systems.
The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost

  8. Adjoint-Based Uncertainty Quantification with MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Seifried, Jeffrey E. [Univ. of California, Berkeley, CA (United States)

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.

  9. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    With the sampling-based method, uncertainty is evaluated by repeating transport calculations with a number of cross section data sets sampled from the covariance uncertainty data. The transport equation is not modified; therefore, the uncertainties of all responses, such as k_eff, reaction rates, flux, and power distribution, can be obtained directly, all at one time, without code modification. However, a major drawback of the sampling-based method is the heavy computational load required for statistically reliable results (within the 0.95 confidence level). The purpose of this study is to develop a method for improving computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation: building on the central limit theorem, each set of sampled group cross sections is assigned to one active cycle group. The proposed method was verified on the GODIVA benchmark problem and the results were compared with those of the conventional sampling-based method. The criticality uncertainty for the GODIVA problem, evaluated by both the proposed and the previous method, shows that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of k_eff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method.
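The basic sampling-based idea can be sketched with a toy response standing in for the transport calculation. The cross-section means, covariance, and response function below are all hypothetical, chosen only to show the mechanics of sampling from covariance data and reading off the response spread:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-group cross-section means and covariance (illustrative numbers)
mean_xs = np.array([1.50, 0.80])
cov_xs = np.array([[2.5e-3, 5.0e-4],
                   [5.0e-4, 1.0e-3]])

def toy_keff(prod, capt):
    # Stand-in for the transport calculation: a smooth response of the data
    return prod / (prod + 0.6 * capt)

# Sample cross-section sets from the covariance data and repeat the "transport" run
samples = rng.multivariate_normal(mean_xs, cov_xs, size=5000)
keff = toy_keff(samples[:, 0], samples[:, 1])

print(f"k_eff = {keff.mean():.4f} +/- {keff.std(ddof=1):.4f}")
```

The standard deviation across the sampled runs is the response uncertainty; the method in the abstract amortizes this sampling over the active cycles of a single simulation instead of repeating whole runs.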

  10. The Uncertainty of Measurement Results

    Energy Technology Data Exchange (ETDEWEB)

    Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)

    2009-07-15

    Factors affecting the uncertainty of measurement are explained, basic statistical formulae given, and the theoretical concept explained in the context of pesticide formulation analysis. Practical guidance is provided on how to determine individual uncertainty components within an analytical procedure. An extended and comprehensive table containing the relevant mathematical/statistical expressions elucidates the relevant underlying principles. Appendix I provides a practical elaborated example on measurement uncertainty estimation, above all utilizing experimental repeatability and reproducibility laboratory data. (author)
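The root-sum-of-squares combination of independent uncertainty components that such guidance describes can be sketched as follows; the component values are illustrative, not taken from the referenced tables:

```python
import math

# Illustrative relative standard uncertainty components of an analytical procedure
components = {
    "repeatability":   0.020,  # from replicate analyses
    "reproducibility": 0.030,  # from between-day / between-lab data
    "calibration":     0.010,
    "volume":          0.005,
}

# Combined standard uncertainty: root sum of squares of independent components
u_c = math.sqrt(sum(u**2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (approx. 95% confidence)
U = 2 * u_c
print(f"u_c = {u_c:.3f}, U (k=2) = {U:.3f}")
```

Note how the largest component (here, reproducibility) dominates the combined value, which is why repeatability and reproducibility data are emphasized in the practical example.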

  11. Uncertainty information in climate data records from Earth observation

    Science.gov (United States)

    Merchant, C. J.

    2017-12-01

    How to derive and present uncertainty in climate data records (CDRs) has been debated within the European Space Agency Climate Change Initiative, in search of common principles applicable across a range of essential climate variables. Various points of consensus have been reached, including the importance of improving provision of uncertainty information and the benefit of adopting international norms of metrology for language around the distinct concepts of uncertainty and error. Providing an estimate of standard uncertainty per datum (or the means to readily calculate it) emerged as baseline good practice, and should be highly relevant to users of CDRs when the uncertainty in data is variable (the usual case). Given this baseline, the role of quality flags is clarified as being complementary to and not repetitive of uncertainty information. Data with high uncertainty are not poor quality if a valid estimate of the uncertainty is available. For CDRs and their applications, the error correlation properties across spatio-temporal scales present important challenges that are not fully solved. Error effects that are negligible in the uncertainty of a single pixel may dominate uncertainty in the large-scale and long-term. A further principle is that uncertainty estimates should themselves be validated. The concepts of estimating and propagating uncertainty are generally acknowledged in geophysical sciences, but less widely practised in Earth observation and development of CDRs. Uncertainty in a CDR depends in part (and usually significantly) on the error covariance of the radiances and auxiliary data used in the retrieval. Typically, error covariance information is not available in the fundamental CDR (FCDR) (i.e., with the level-1 radiances), since provision of adequate level-1 uncertainty information is not yet standard practice. Those deriving CDRs thus cannot propagate the radiance uncertainty to their geophysical products. 
The FIDUCEO project (www.fiduceo.eu) is
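The scale-dependence of correlated errors mentioned above can be illustrated with the standard result for the mean of n equicorrelated data: var(mean) = sigma^2 * ((1 - r)/n + r). A small sketch (per-datum uncertainty and correlation values are illustrative):

```python
import math

def uncertainty_of_mean(sigma, n, r):
    """Standard uncertainty of the mean of n data, each with uncertainty
    sigma, assuming a common pairwise error correlation r."""
    return sigma * math.sqrt((1.0 - r) / n + r)

sigma, n = 0.5, 10_000  # e.g. 0.5 K per pixel, 10^4 pixels in a large-area mean
print(uncertainty_of_mean(sigma, n, 0.0))   # independent errors shrink as 1/sqrt(n)
print(uncertainty_of_mean(sigma, n, 0.05))  # a small correlation dominates at large n
```

With independent errors the mean is known to 0.005 K, but a correlation of only 0.05 leaves a floor above 0.1 K: the error effect negligible per pixel dominates at large scale, exactly the challenge the abstract describes.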

  12. Procedure and results of a systematic comparative round robin estimation of the uncertainties of emission measurements in Germany and Switzerland during the summer and fall 1998

    Energy Technology Data Exchange (ETDEWEB)

    Morkowski, J.S.

    2000-08-01

    During the summer and fall of 1998, 75 sampling personnel and 20 laboratory employees from 26 institutes measuring emissions in Germany and Switzerland estimated the uncertainty of the measuring results of 5 selected VDI standard emission measuring methods. The applied method of systematic comparative estimation showed that the estimated total uncertainties U{sub T} of the investigated measuring methods range between 12% and 21%. The results of the round robin estimation for one of the methods (dust measurement, VDI 2066 Part 7), with U{sub T}=15.4%, match those of the system for simulation of emissions (HLfU Kassel), with 15.5%. This confirms the expectation of the initiators of the project that systematic comparative estimation is useful for the characterization of test methods and hence for the validation of methods. (orig.)

  13. Incorporating covariance estimation uncertainty in spatial sampling design for prediction with trans-Gaussian random fields

    Directory of Open Access Journals (Sweden)

    Gunter eSpöck

    2015-05-01

    Full Text Available Recently, Spock and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spock and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data is transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.

  14. Using cost-benefit concepts in design floods improves communication of uncertainty

    Science.gov (United States)

    Ganora, Daniele; Botto, Anna; Laio, Francesco; Claps, Pierluigi

    2017-04-01

    Flood frequency analysis, i.e. the study of the relationship between the magnitude and the rarity of high flows in a river, is the usual procedure for assessing flood hazard, preliminary to the planning/design of flood protection measures. It is grounded in fitting a probability distribution to the peak discharge values recorded at gauging stations, so the final estimates over a region are affected by uncertainty, due to the limited sample availability and to the possible alternatives in the probabilistic model and the parameter estimation methods used. In the last decade, the scientific community has dealt with this issue by developing a number of methods to quantify such uncertainty components. Usually, uncertainty is visually represented through confidence bands, which are easy to understand but have not been demonstrated to be useful for design purposes: they tend to disorient decision makers, as the design flood is no longer univocally defined, leaving the decision process undetermined. These considerations motivated the development of the uncertainty-compliant design flood estimator (UNCODE) procedure (Botto et al., 2014), which allows one to select meaningful design flood values accounting for the associated uncertainty by considering additional constraints based on cost-benefit criteria. The method provides an explicit multiplication factor that corrects the traditional (without-uncertainty) design flood estimates to incorporate the effects of uncertainty at the same safety level. Even though the UNCODE method was developed for design purposes, it is a powerful and robust tool for clarifying the effects of uncertainty in statistical estimation. As the procedure produces increased design flood estimates, this outcome demonstrates how uncertainty leads to more expensive flood protection measures, or to the insufficiency of current defenses.
Moreover, the UNCODE approach can be used to assess the "value" of data, as the costs
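The sampling uncertainty that motivates approaches like UNCODE can be sketched by fitting a Gumbel distribution to annual peaks (method of moments) and bootstrapping the 100-year quantile. The discharge record below is synthetic, and the procedure is a generic illustration of design-flood uncertainty, not the UNCODE algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def gumbel_quantile(sample, T):
    """Quantile of return period T from a Gumbel fit by method of moments."""
    m, s = sample.mean(), sample.std(ddof=1)
    beta = s * np.sqrt(6.0) / np.pi
    mu = m - 0.5772 * beta          # Euler-Mascheroni constant
    p = 1.0 - 1.0 / T
    return mu - beta * np.log(-np.log(p))

# Illustrative 40-year annual peak discharge record (m^3/s)
peaks = rng.gumbel(loc=300.0, scale=80.0, size=40)

q100 = gumbel_quantile(peaks, T=100)

# Bootstrap the estimate to quantify its sampling uncertainty
boot = np.array([gumbel_quantile(rng.choice(peaks, size=peaks.size, replace=True), 100)
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [5, 95])
print(f"Q100 = {q100:.0f} m^3/s, 90% interval [{lo:.0f}, {hi:.0f}]")
```

The wide interval around a single design value is precisely the ambiguity that confidence bands present to decision makers; a cost-benefit criterion collapses it back to one defensible number.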

  15. Understanding and reducing statistical uncertainties in nebular abundance determinations

    Science.gov (United States)

    Wesson, R.; Stock, D. J.; Scicluna, P.

    2012-06-01

    Whenever observations are compared to theories, an estimate of the uncertainties associated with the observations is vital if the comparison is to be meaningful. However, many or even most determinations of temperatures, densities and abundances in photoionized nebulae do not quote the associated uncertainty. Those that do typically propagate the uncertainties using analytical techniques which rely on assumptions that generally do not hold. Motivated by this issue, we have developed the Nebular Empirical Analysis Tool (NEAT), a new code for calculating chemical abundances in photoionized nebulae. The code carries out a standard analysis of lists of emission lines using long-established techniques to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally estimate total elemental abundances using an ionization correction scheme. NEAT uses a Monte Carlo technique to robustly propagate uncertainties from line flux measurements through to the derived abundances. We show that, for typical observational data, this approach is superior to analytic estimates of uncertainties. NEAT also accounts for the effect of upward biasing on measurements of lines with low signal-to-noise ratio, allowing us to accurately quantify the effect of this bias on abundance determinations. We find not only that the effect can result in significant overestimates of heavy element abundances derived from weak lines, but also that taking it into account reduces the uncertainty of these abundance determinations. Finally, we investigate the effect of possible uncertainties in R, the ratio of selective-to-total extinction, on abundance determinations. We find that the uncertainty due to this parameter is negligible compared to the statistical uncertainties due to typical line flux measurement uncertainties.
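A Monte Carlo propagation of line-flux uncertainties in the spirit described above can be sketched as follows. The fluxes and uncertainties are invented, and the derived "abundance" is reduced to a simple flux ratio rather than a full ionic-abundance calculation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Measured line fluxes (relative to H-beta = 100) and their 1-sigma uncertainties.
# Illustrative values only, not a real line list.
flux, err = 3.0, 1.0           # a weak line with S/N = 3
hbeta, hbeta_err = 100.0, 2.0

n = 100_000
# Monte Carlo: resample both fluxes from Gaussians and propagate the ratio
ratio = rng.normal(flux, err, n) / rng.normal(hbeta, hbeta_err, n)

med = np.median(ratio)
lo, hi = np.percentile(ratio, [16, 84])
print(f"ratio = {med:.4f} (+{hi - med:.4f} / -{med - lo:.4f})")
```

The asymmetry of the resulting percentile interval is the kind of behavior that analytic Gaussian propagation misses for low signal-to-noise lines.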

  16. Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model

    Science.gov (United States)

    Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...

  17. Space Radiation Cancer, Circulatory Disease and CNS Risks for Near Earth Asteroid and Mars Missions: Uncertainty Estimates for Never-Smokers

    Science.gov (United States)

    Cucinotta, Francis A.; Chappell, Lori J.; Wang, Minli; Kim, Myung-Hee

    2011-01-01

    The uncertainties in estimating the health risks from galactic cosmic rays (GCR) and solar particle events (SPE) are a major limitation on the length of space missions and the evaluation of potential risk mitigation approaches. NASA limits astronaut exposures to a 3% risk of exposure-induced cancer death (REID), and protects against uncertainties in risk projections using an assessment of 95% confidence intervals after propagating the error from all model factors (environment and organ exposure, risk coefficients, dose-rate modifiers, and quality factors). Because there are potentially significant late mortality risks from diseases of the circulatory system and central nervous system (CNS), which are less well defined than cancer risks, the cancer REID limit is not necessarily conservative. In this report, we discuss estimates of lifetime risks from space radiation and describe new estimates of model uncertainties. The key updates to the NASA risk projection model are: 1) revised values for low-LET risk coefficients for tissue-specific cancer incidence, with incidence rates transported to an average U.S. population to estimate the probability of risk of exposure-induced cancer (REIC) and REID; 2) an analysis of smoking-attributable cancer risks for never-smokers, which shows significantly reduced lung cancer risk, as well as overall cancer risks from radiation, compared to risks estimated for the average U.S. population; 3) derivation of track-structure-based quality functions that depend on particle fluence, charge number Z, and kinetic energy E; 4) the assignment of a smaller maximum in the quality function for leukemia than for solid cancers; 5) the finding that the ICRP tissue weights over-estimate cancer risks from SPEs by a factor of 2 or more, so that summing cancer risks for each tissue is recommended as a more accurate approach to estimating SPE cancer risks; and 6) additional considerations on circulatory and CNS disease risks. Our analysis shows that an individual s

  18. Rain cell-based identification of the vertical profile of reflectivity as observed by weather radar and its use for precipitation uncertainty estimation

    Science.gov (United States)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Uijlenhoet, R.

    2012-04-01

    The wide-scale implementation of weather radar systems over the last couple of decades has increased our understanding of spatio-temporal precipitation dynamics. However, quantitative precipitation estimation by these devices is affected by many sources of error. A very dominant source of error results from vertical variations in the hydrometeor size distribution, known as the vertical profile of reflectivity (VPR). Since the height of the measurement as well as the beam volume increases with distance from the radar, for stratiform precipitation this results in a serious underestimation (overestimation) of the surface reflectivity while sampling within the snow (bright band) region. This research presents a precipitation cell-based implementation to correct volumetric weather radar measurements for VPR effects. Using the properties of a flipping carpenter square, a contour-based identification technique was developed that is able to identify and track precipitation cells in real time, distinguishing between convective, stratiform and undefined precipitation. For the latter two types of systems, a physically plausible vertical profile of reflectivity is estimated for each individual cell using a Monte Carlo optimization method. Since the VPR can be expected to vary within a given precipitation cell, a method was developed to take the uncertainty of the VPR estimate into account. As a result, we are able to estimate the amount of precipitation uncertainty observed by weather radar due to the VPR for a given precipitation type and storm cell. We demonstrate the possibilities of this technique for a number of winter precipitation systems observed within the Belgian Ardennes. For these systems, in general, the precipitation uncertainty estimate due to vertical reflectivity profile variations varies between 10% and 40%.

  19. Uncertainty propagation in urban hydrology water quality modelling

    NARCIS (Netherlands)

    Torres Matallana, Arturo; Leopold, U.; Heuvelink, G.B.M.

    2016-01-01

    Uncertainty is often ignored in urban hydrology modelling. Engineering practice typically ignores uncertainties and uncertainty propagation. This can have large impacts, such as the wrong dimensioning of urban drainage systems and the inaccurate estimation of pollution in the environment caused

  20. Evaluating prediction uncertainty

    International Nuclear Information System (INIS)

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
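The idea of comparing the prediction distribution with conditional prediction distributions can be sketched with a toy model. This plain Monte Carlo version approximates a variance-ratio importance indicator by fixing one input at a time; it is a simplified stand-in, not the replicated Latin hypercube procedure of the report:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2, x3):
    # Toy prediction model, deliberately dominated by its first input
    return 4.0 * x1 + 1.0 * x2 + 0.1 * x3

n = 50_000
X = rng.uniform(0.0, 1.0, size=(3, n))
var_y = model(*X).var()  # unconditional prediction variance

# Importance of input i: relative drop in prediction variance when x_i is
# fixed, averaged over several fixed values of x_i
importances = []
for i in range(3):
    cond_vars = []
    for v in np.linspace(0.05, 0.95, 10):
        Z = rng.uniform(0.0, 1.0, size=(3, 2000))
        Z[i, :] = v                      # condition on x_i = v
        cond_vars.append(model(*Z).var())
    importances.append(1.0 - np.mean(cond_vars) / var_y)

print([f"{imp:.2f}" for imp in importances])
```

The indicator correctly ranks the inputs even though no linearity between output and inputs is assumed, which is the report's central point.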

  1. Model structural uncertainty quantification and hydrogeophysical data integration using airborne electromagnetic data (Invited)

    DEFF Research Database (Denmark)

    Minsley, Burke; Christensen, Nikolaj Kruse; Christensen, Steen

    of airborne electromagnetic (AEM) data to estimate large-scale model structural geometry, i.e. the spatial distribution of different lithological units based on assumed or estimated resistivity-lithology relationships, and the uncertainty in those structures given imperfect measurements. Geophysically derived...... estimates of model structural uncertainty are then combined with hydrologic observations to assess the impact of model structural error on hydrologic calibration and prediction errors. Using a synthetic numerical model, we describe a sequential hydrogeophysical approach that: (1) uses Bayesian Markov chain...... Monte Carlo (McMC) methods to produce a robust estimate of uncertainty in electrical resistivity parameter values, (2) combines geophysical parameter uncertainty estimates with borehole observations of lithology to produce probabilistic estimates of model structural uncertainty over the entire AEM...

  2. Developing and Validating Path-Dependent Uncertainty Estimates for use with the Regional Seismic Travel Time (RSTT) Model

    Science.gov (United States)

    Begnaud, M. L.; Anderson, D. N.; Phillips, W. S.; Myers, S. C.; Ballard, S.

    2016-12-01

    The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models that have the benefit of being very fast to use in standard location algorithms, but that do not account for path-dependent variations in error or for structural inadequacy of the RSTT model (e.g., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist, and a simple 1D error model does not accurately represent areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving this multivariate model directly from a unified model of RSTT embedded in a statistical random-effects model that captures distance, path, and model error effects. An initial method developed is a two-dimensional, path-distributed method using residuals. The goals for any RSTT uncertainty method are that it be readily useful for the standard RSTT user and that it improve travel time uncertainty estimates for location. We have successfully tested the new error model for Pn phases and will demonstrate the method and the validation of the error model for Sn, Pg, and Lg phases.

  3. Mama Software Features: Uncertainty Testing

    Energy Technology Data Exchange (ETDEWEB)

    Ruggiero, Christy E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Porter, Reid B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-05-30

    This document reviews how the uncertainty in the calculations is being determined with test image data. The results of this testing give an ‘initial uncertainty’ number that can be used to estimate the ‘back end’ uncertainty in digital image quantification. Statisticians are refining these numbers as part of a UQ effort.

  4. Estimation of the quantification uncertainty from flow injection and liquid chromatography transient signals in inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Laborda, Francisco; Medrano, Jesus; Castillo, Juan R.

    2004-01-01

    The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and on double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty in FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be accounted for in the model, leading to the conclusion that the instrument works as a concentration detector when used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, although double-point calibration yielded results of the same quality as multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
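Weighted linear regression with statistical weights w_i = 1/u_i^2, as used for the calibration described above, can be sketched with invented calibration data (concentrations, peak areas, and per-point uncertainties below are all hypothetical):

```python
import numpy as np

# Illustrative calibration: standard concentrations (ug/L) and peak areas
conc = np.array([0.0, 5.0, 10.0, 50.0, 100.0])
area = np.array([12.0, 260.0, 515.0, 2540.0, 5080.0])
u_area = np.array([4.0, 10.0, 18.0, 80.0, 160.0])   # per-point uncertainties

# Weighted least squares: np.polyfit expects weights of 1/sigma, i.e. sqrt(w_i)
slope, intercept = np.polyfit(conc, area, deg=1, w=1.0 / u_area)

# Quantify an unknown from its measured peak area
unknown_area = 1280.0
c_unknown = (unknown_area - intercept) / slope
print(f"slope={slope:.2f}, intercept={intercept:.1f}, c={c_unknown:.1f} ug/L")
```

Because the weights scale with the estimated peak-area uncertainties, low-concentration standards are not swamped by the large absolute residuals of the high standards, which is the point of the weighted scheme.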

  5. Asphere cross testing: an exercise in uncertainty estimation

    Science.gov (United States)

    Murphy, Paul E.

    2017-10-01

    Aspheric surfaces can provide substantial improvements to optical designs, but they can also be difficult to manufacture cost-effectively. Asphere metrology contributes significantly to this difficulty, especially for high-precision aspheric surfaces. With the advent of computer-controlled fabrication machinery, optical surface quality is chiefly limited by the ability to measure it. Consequently, understanding the uncertainty of surface measurements is of great importance for determining what optical surface quality can be achieved. We measured sample aspheres using multiple techniques: profilometry, null interferometry, and subaperture stitching. We also obtained repeatability and reproducibility (R&R) measurement data by retesting the same aspheres under various conditions. We highlight some of the details associated with the different measurement techniques, especially efforts to reduce bias in the null tests via calibration. We compare and contrast the measurement results, and obtain an empirical view of the measurement uncertainty of the different techniques. We found fair agreement in overall surface form among the methods, but meaningful differences in reproducibility and mid-spatial frequency performance.

  6. Quantile uncertainty and value-at-risk model risk.

    Science.gov (United States)

    Alexander, Carol; Sarabia, José María

    2012-08-01

    This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid 1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks. © 2012 Society for Risk Analysis.
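The sampling uncertainty surrounding a quantile estimate can be sketched by bootstrapping the empirical 1% quantile of a P&L series. The data are simulated, and this illustrates estimation uncertainty only, not the article's model-risk adjustment relative to a regulatory benchmark:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative daily P&L series (500 trading days, arbitrary units)
pnl = rng.normal(0.0, 1.0e6, size=500)

# 99% Value-at-Risk: the loss level exceeded on 1% of days
var99 = -np.percentile(pnl, 1)

# Bootstrap the quantile to expose the estimation uncertainty around it
boot = np.array([-np.percentile(rng.choice(pnl, pnl.size, replace=True), 1)
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [5, 95])
print(f"VaR99 = {var99:,.0f}; 90% interval [{lo:,.0f}, {hi:,.0f}]")
```

Even with 500 observations the interval around the 1% quantile is wide, which is why a capital add-on for quantile model risk is argued for in the article.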

  7. Uncertainty analysis techniques

    International Nuclear Information System (INIS)

    Marivoet, J.; Saltelli, A.; Cadelli, N.

    1987-01-01

    The origin of the uncertainty affecting Performance Assessments, as well as its propagation to dose and risk results, is discussed. The analysis focuses essentially on the uncertainties introduced by the input parameters, the values of which may range over several orders of magnitude and may be given as probability distribution functions. The paper briefly reviews the existing sampling techniques used for Monte Carlo simulations and the methods for characterizing the output curves and determining their convergence and confidence limits. Annual doses, expectation values of the doses, and risks are computed for the particular case of a possible repository in clay, in order to illustrate the significance of such output characteristics as the mean, the logarithmic mean and the median, as well as their ratios. The report concludes that, provisionally, owing to its better robustness, an estimator such as the 90th percentile may be substituted for the arithmetic mean in comparisons of the estimated doses with acceptance criteria. In any case, the results obtained through uncertainty analyses must be interpreted with caution as long as the input data distribution functions are not derived from experiments that reasonably reproduce the situation in a well characterized repository and site.

  8. Estimation of a Reactor Core Power Peaking Factor Using Support Vector Regression and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Bae, In Ho; Naa, Man Gyun; Lee, Yoon Joon; Park, Goon Cherl

    2009-01-01

    The monitoring of the detailed 3-dimensional (3D) reactor core power distribution is a prerequisite in the operation of nuclear power reactors to ensure that the various safety limits imposed on the LPD and DNBR are not violated during nuclear power reactor operation. The LPD and DNBR must be calculated in order to perform the two major functions of the core protection calculator system (CPCS) and the core operation limit supervisory system (COLSS). The LPD at the hottest part of a hot fuel rod, which is related to the power peaking factor (PPF, F_q), is more important than the LPD at any other position in the reactor core, and it needs to be estimated accurately to prevent nuclear fuel rods from melting. In this study, support vector regression (SVR) and uncertainty analysis have been applied to the estimation of the reactor core power peaking factor.

  9. Uncertainty of Forest Biomass Estimates in North Temperate Forests Due to Allometry: Implications for Remote Sensing

    Directory of Open Access Journals (Sweden)

    Razi Ahmed

    2013-06-01

    Full Text Available Estimates of above ground biomass density in forests are crucial for refining global climate models and understanding climate change. Although data from field studies can be aggregated to estimate carbon stocks on global scales, the sparsity of such field data, temporal heterogeneity and methodological variations introduce large errors. Remote sensing measurements from spaceborne sensors are a realistic alternative for global carbon accounting; however, the uncertainty of such measurements is not well known and remains an active area of research. This article describes an effort to collect field data at the Harvard and Howland Forest sites, set in the temperate forests of the Northeastern United States in an attempt to establish ground truth forest biomass for calibration of remote sensing measurements. We present an assessment of the quality of ground truth biomass estimates derived from three different sets of diameter-based allometric equations over the Harvard and Howland Forests to establish the contribution of errors in ground truth data to the error in biomass estimates from remote sensing measurements.
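The contribution of allometric-parameter uncertainty to a plot-level biomass estimate can be sketched with a Monte Carlo over the coefficients of a generic biomass = a * dbh^b equation. All numbers below are hypothetical, not taken from the Harvard or Howland data sets:

```python
import numpy as np

rng = np.random.default_rng(5)

# Generic diameter-based allometry: biomass (kg) = a * dbh(cm)**b.
# Parameter values and their uncertainties are illustrative only.
a, b = 0.12, 2.4
sd_a, sd_b = 0.02, 0.1

dbh = rng.uniform(10.0, 50.0, size=200)   # one hypothetical plot of 200 stems

# Monte Carlo over the allometric parameters to propagate their uncertainty
n = 5000
a_s = rng.normal(a, sd_a, n)
b_s = rng.normal(b, sd_b, n)
plot_biomass = (a_s[:, None] * dbh[None, :] ** b_s[:, None]).sum(axis=1)

mean = plot_biomass.mean()
cv = plot_biomass.std(ddof=1) / mean
print(f"plot biomass ~ {mean / 1000:.1f} Mg, CV from allometry ~ {cv:.0%}")
```

Because the exponent uncertainty acts multiplicatively through log(dbh), even modest parameter spreads produce a large coefficient of variation at plot level, illustrating why the choice among allometric equation sets matters for remote sensing ground truth.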

  10. Mitigating Provider Uncertainty in Service Provision Contracts

    Science.gov (United States)

    Smith, Chris; van Moorsel, Aad

    Uncertainty is an inherent property of open, distributed and multiparty systems. The viability of the mutually beneficial relationships which motivate these systems relies on rational decision-making by each constituent party under uncertainty. Service provision in distributed systems is one such relationship. Uncertainty is experienced by the service provider in his ability to deliver a service with selected quality level guarantees due to inherent non-determinism, such as load fluctuations and hardware failures. Statistical estimators utilized to model this non-determinism introduce additional uncertainty through sampling error. Inability of the provider to accurately model and analyze uncertainty in the quality level guarantees can result in the formation of sub-optimal service provision contracts. Emblematic consequences include loss of revenue, inefficient resource utilization and erosion of reputation and consumer trust. We propose a utility model for contract-based service provision to provide a systematic approach to optimal service provision contract formation under uncertainty. Performance prediction methods to enable the derivation of statistical estimators for quality level are introduced, with analysis of their resultant accuracy and cost.
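The sampling error of a statistical estimator for a quality-level guarantee can be illustrated with a Wilson score interval on observed SLA compliance; the monitoring counts below are hypothetical:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion (95% default)."""
    p = successes / n
    denom = 1.0 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical monitoring data: 970 of 1000 requests met the latency guarantee
lo, hi = wilson_interval(970, 1000)
print(f"estimated compliance: 97.0%, 95% CI [{lo:.3f}, {hi:.3f}]")

# A provider offering a 95% quality guarantee can check the interval's lower bound
print("safe to offer 95% guarantee:", lo > 0.95)
```

Contracting on the interval's lower bound rather than the point estimate is one simple way to keep the sampling error of the estimator from producing over-optimistic guarantees.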

  11. Impacts of Process and Prediction Uncertainties on Projected Hanford Waste Glass Amount

    Energy Technology Data Exchange (ETDEWEB)

    Gervasio, Vivianaluxa [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vienna, John D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kim, Dong-Sang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kruger, Albert A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2018-02-19

    Analyses were performed to evaluate the impacts of using the advanced glass models, constraints (Vienna et al. 2016), and uncertainty descriptions on projected Hanford glass mass. The maximum allowable waste oxide loading (WOL) was estimated for waste compositions while simultaneously satisfying all applicable glass property and composition constraints with sufficient confidence. Different components of prediction and composition/process uncertainties were systematically included in the calculations to evaluate their impacts on glass mass. The analyses estimated the production of 23,360 MT of IHLW glass when no uncertainties were taken into account. Accounting for prediction and composition/process uncertainties resulted in a 5.01 relative percent increase in estimated glass mass, to 24,531 MT. Roughly equal impacts were found for prediction uncertainties (2.58 RPD) and composition/process uncertainties (2.43 RPD). ILAW mass was predicted to be 282,350 MT without uncertainty and with waste loading “line” rules in place. Accounting for prediction and composition/process uncertainties resulted in only a 0.08 relative percent increase in estimated glass mass, to 282,562 MT. Without application of the line rules, the glass mass decreases by 10.6 relative percent (252,490 MT) for the case with no uncertainties. Adding prediction uncertainties increases glass mass by 1.32 relative percent, and adding composition/process uncertainties increases it by a further 7.73 relative percent (a 9.06 relative percent increase combined). The glass mass estimate without line rules (275,359 MT) was 2.55 relative percent lower than that with the line rules (282,562 MT), after accounting for all applicable uncertainties.
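
    The relative-percent figures quoted in this record are plain ratios of the stated masses, so they can be checked directly (values taken from the abstract):

```python
def relative_percent_increase(base, new):
    """Relative percent difference, as used throughout the abstract."""
    return (new - base) / base * 100.0

ihlw_no_unc = 23360.0    # MT, IHLW with no uncertainties
ihlw_with_unc = 24531.0  # MT, with prediction + composition/process uncertainties
rpd_ihlw = relative_percent_increase(ihlw_no_unc, ihlw_with_unc)  # ~5.01

ilaw_no_unc = 282350.0   # MT, ILAW with line rules, no uncertainties
ilaw_with_unc = 282562.0 # MT, with all uncertainties
rpd_ilaw = relative_percent_increase(ilaw_no_unc, ilaw_with_unc)  # ~0.08
```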

  12. Macro Expectations, Aggregate Uncertainty, and Expected Term Premia

    DEFF Research Database (Denmark)

    Dick, Christian D.; Schmeling, Maik; Schrimpf, Andreas

    2013-01-01

    as well as aggregate macroeconomic uncertainty at the level of individual forecasters. We find that expected term premia are (i) time-varying and reasonably persistent, (ii) strongly related to expectations about future output growth, and (iii) positively affected by uncertainty about future output growth...... and inflation rates. Expectations about real macroeconomic variables seem to matter more than expectations about nominal factors. Additional findings on term structure factors suggest that the level and slope factor capture information related to uncertainty about real and nominal macroeconomic prospects...

  13. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple packages of methodologies and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS with multiple computing resources and proper communication between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis implemented in the toolkit, with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.
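
    The pattern of dispatching many independent model evaluations to parallel workers, then summarizing the output distribution, can be sketched as below. The toy model and thread-pool setup are illustrative only; frameworks like PAPIRUS drive real simulation codes across multiple compute resources, typically with processes or MPI rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor
import random, statistics

def model(params):
    """Stand-in for an engineering simulation: y = a*x**2 + b at x = 2."""
    a, b = params
    return a * 4.0 + b

def propagate(n_samples=2000, workers=4, seed=0):
    """Monte Carlo uncertainty propagation: sample uncertain inputs,
    evaluate the model on a worker pool, summarize the output spread."""
    rng = random.Random(seed)
    samples = [(rng.gauss(1.0, 0.1), rng.gauss(0.0, 0.2))
               for _ in range(n_samples)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        outputs = list(pool.map(model, samples))
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, sd = propagate()  # mean near 4.0, spread from both input uncertainties
```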

  14. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Jaeseok, E-mail: jheo@kaeri.re.kr; Kim, Kyung Doo, E-mail: kdkim@kaeri.re.kr

    2015-10-15

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of the sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform the model calibration, uncertainty propagation, Chi-square linearity test, and sensitivity analysis for both linear and nonlinear problems. The PAPIRUS was developed by implementing multiple packages of methodologies, and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in the PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description about the PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity test, and sensitivity analysis implemented in the toolkit with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.

  15. Estimating uncertainty in resolution tests

    CSIR Research Space (South Africa)

    Goncalves, DP

    2006-05-01

    Full Text Available frequencies yields a biased estimate, and we provide an improved estimator. An application illustrates how the results derived can be incorporated into a larger uncertainty analysis. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2202914] Subject terms: resolution testing; USAF 1951 test target; resolution uncertainty. Paper 050404R received May 20, 2005; revised manuscript received Sep. 2, 2005; accepted for publication Sep. 9, 2005; published online May 10, 2006.

  16. Uncertainty Quantification in Aerodynamics Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...

  17. Basic Aspects of Uncertainty II. The Inverse Estimations

    Czech Academy of Sciences Publication Activity Database

    Vrba, Josef

    2002-01-01

    Roč. 42, - (2002), s. 1253-1269 ISSN 0232-9298 Institutional research plan: CEZ:AV0Z4072921 Keywords : membership asymmetry transfer * recycle process Subject RIV: CI - Industrial Chemistry, Chemical Engineering

  18. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
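
    One standard technique for the discretization-error component mentioned above is Richardson extrapolation, which estimates the error of a fine-grid solution from two resolutions, assuming a known convergence order. The sketch below is a generic illustration, not necessarily the estimator used in this work.

```python
def richardson_error_estimate(f_coarse, f_fine, r=2.0, p=2):
    """Estimated discretization error of the fine-grid value, assuming
    the leading error behaves as C*h**p and the grids differ by
    refinement ratio r: err_fine ~ (f_fine - f_coarse) / (r**p - 1)."""
    return (f_fine - f_coarse) / (r ** p - 1.0)

# Synthetic check: a quantity with exact value 1.0 computed on grids
# h and h/2, with a purely quadratic leading error (C = 0.4, h = 0.1).
exact_val, C, h, p = 1.0, 0.4, 0.1, 2
f_c = exact_val + C * h ** p        # coarse-grid solution
f_f = exact_val + C * (h / 2) ** p  # fine-grid solution
err = richardson_error_estimate(f_c, f_f, r=2.0, p=p)
# err recovers the true fine-grid error, exact_val - f_f
```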

  19. Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow

    Science.gov (United States)

    Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca

    2017-11-01

    The performance characterization of complex engineering systems often relies on accurate, but computationally intensive numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite the great improvement in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low-Fidelity (LF) and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
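
    In its simplest two-model form, leveraging LF/HF correlation as described above amounts to a control-variate estimator: a small paired HF/LF sample calibrates the correction, and many cheap extra LF samples sharpen the mean. The quadratic "HF" model and noisy surrogate below are synthetic stand-ins, not the solar-receiver simulation.

```python
import random, statistics

def mf_estimate(hf, lf, lf_extra):
    """Two-model control-variate (multi-fidelity MC) estimator of E[HF].
    hf[i] and lf[i] must be evaluated at the same random inputs;
    lf_extra holds additional cheap LF-only evaluations."""
    n = len(hf)
    mean_hf, mean_lf = statistics.mean(hf), statistics.mean(lf)
    cov = sum((h - mean_hf) * (l - mean_lf)
              for h, l in zip(hf, lf)) / (n - 1)
    alpha = cov / statistics.variance(lf)  # optimal CV coefficient
    return mean_hf + alpha * (statistics.mean(lf_extra) - mean_lf)

rng = random.Random(1)
xs = [rng.gauss(0, 1) for _ in range(200)]          # shared inputs (paired)
xs_extra = [rng.gauss(0, 1) for _ in range(20000)]  # LF-only inputs
hf = [x * x for x in xs]                            # "expensive" model
lf = [x * x + 0.1 * rng.gauss(0, 1) for x in xs]    # cheap correlated surrogate
lf_extra = [x * x + 0.1 * rng.gauss(0, 1) for x in xs_extra]
est = mf_estimate(hf, lf, lf_extra)  # close to E[x^2] = 1
```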

  20. Quantifying Uncertainty in Soil Volume Estimates

    International Nuclear Information System (INIS)

    Roos, A.D.; Hays, D.C.; Johnson, R.L.; Durham, L.A.; Winters, M.

    2009-01-01

    Proper planning and design for remediating contaminated environmental media require an adequate understanding of the types of contaminants and the lateral and vertical extent of contamination. In the case of contaminated soils, this generally takes the form of volume estimates that are prepared as part of a Feasibility Study for Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites and/or as part of the remedial design. These estimates are typically single values representing what is believed to be the most likely volume of contaminated soil present at the site. These single-value estimates, however, do not convey the level of confidence associated with the estimates. Unfortunately, experience has shown that pre-remediation soil volume estimates often significantly underestimate the actual volume of contaminated soils encountered during the course of remediation. This underestimation has significant implications, both technically (e.g., inappropriate remedial designs) and programmatically (e.g., establishing technically defensible budget and schedule baselines). Argonne National Laboratory (Argonne) has developed a joint Bayesian/geostatistical methodology for estimating contaminated soil volumes based on sampling results that also provides upper and lower probabilistic bounds on those volumes. This paper evaluates the performance of this method in a retrospective study that compares volume estimates derived using this technique with actual excavated soil volumes for select Formerly Utilized Sites Remedial Action Program (FUSRAP) Maywood properties that have completed remedial action by the U.S. Army Corps of Engineers (USACE) New York District. (authors)
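
    The idea of attaching probabilistic bounds to a volume estimate can be illustrated with a heavily simplified model: treat each sampled location as a Bernoulli "contaminated" draw and report posterior bounds on the contaminated fraction. This Beta-binomial sketch omits the spatial correlation that the Argonne joint Bayesian/geostatistical method models; all numbers are illustrative.

```python
import random

def volume_bounds(hits, n_samples, site_volume_m3,
                  n_sims=5000, alpha=0.10, seed=0):
    """Posterior mean and (1 - alpha) credible bounds for the
    contaminated soil volume, from a flat Beta(1, 1) prior on the
    contaminated fraction and binomial sampling results."""
    rng = random.Random(seed)
    a, b = 1 + hits, 1 + n_samples - hits  # Beta posterior parameters
    fracs = sorted(rng.betavariate(a, b) for _ in range(n_sims))
    lo = fracs[int(alpha / 2 * n_sims)] * site_volume_m3
    mid = a / (a + b) * site_volume_m3     # posterior-mean volume
    hi = fracs[int((1 - alpha / 2) * n_sims)] * site_volume_m3
    return lo, mid, hi

# 12 of 40 borings contaminated over a hypothetical 10,000 m^3 site:
lo, mid, hi = volume_bounds(hits=12, n_samples=40, site_volume_m3=10000.0)
```

A single-value estimate corresponds to reporting only `mid`; the retrospective comparison in the paper asks how often the excavated volume fell outside bounds like `lo` and `hi`.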