WorldWideScience

Sample records for ii uncertainty estimation

  1. Effects of systematic error, estimates and uncertainties in chemical mass balance apportionments: Quail Roost II revisited

    Science.gov (United States)

    Lowenthal, Douglas H.; Hanumara, R. Choudary; Rahn, Kenneth A.; Currie, Lloyd A.

    The Quail Roost II synthetic data set II was used to derive a comprehensive method of estimating uncertainties for chemical mass balance (CMB) apportionments. Collinearity-diagnostic procedures were applied to CMB apportionments of data set II to identify seriously collinear source profiles and evaluate the effects of the degree of collinearity on source-strength estimates and their uncertainties. Fractional uncertainties of CMB estimates were up to three times higher for collinear source profiles than for independent ones. A theoretical analysis of CMB results for synthetic data set II led to the following general conclusions about CMB methodology. Uncertainties for average estimated source strengths will be unrealistically low unless sources whose estimates are constrained to zero are included when calculating uncertainties. Covariance in source-strength estimates is caused by collinearity and systematic errors in source specification and composition. Propagated uncertainties may be underestimated unless covariances as well as variances of estimates are included. Apportioning the average aerosol will account for systematic errors only when the correct model is known, when measurement uncertainties in ambient and source-profile data are realistic, and when the source profiles are not collinear.
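
    To make the covariance point concrete, the following minimal sketch (not from the paper; all numbers are illustrative) shows how the off-diagonal covariance terms change the propagated uncertainty of the total apportioned mass relative to using variances alone:

```python
import numpy as np

# Hypothetical source-strength estimates (ug/m^3) for three sources and an
# illustrative covariance matrix from a CMB fit; collinear profiles typically
# produce large off-diagonal (often negative) covariances.
s = np.array([4.0, 2.5, 1.0])
cov = np.array([
    [0.40, -0.15,  0.02],
    [-0.15, 0.30, -0.05],
    [0.02, -0.05,  0.10],
])

ones = np.ones_like(s)
var_with_cov = ones @ cov @ ones   # variance of the summed contributions
var_no_cov = np.trace(cov)         # what one gets ignoring covariances

print(f"total apportioned mass : {s.sum():.2f}")
print(f"sigma with covariances : {np.sqrt(var_with_cov):.3f}")
print(f"sigma, variances only  : {np.sqrt(var_no_cov):.3f}")
# Depending on the sign of the covariances, ignoring them can either
# over- or under-state the uncertainty of the total.
```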

  2. Earthquake Loss Estimation Uncertainties

    Science.gov (United States)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander

    2013-04-01

    The paper addresses the reliability of loss assessment following strong earthquakes when worldwide systems are applied in emergency mode. Timely and correct action just after an event can yield significant benefits in terms of saved lives; in this case, information about the possible damage and the expected number of casualties is critical for decisions about search and rescue operations and the offer of humanitarian assistance. Such rough information may be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials in charge of emergency response at national and international levels often lack prompt and reliable information on the scope of the disaster. Uncertainties on the parameters used in the estimation process are numerous and large: knowledge about the physical phenomena and uncertainties on the parameters used to describe them; the overall adequacy of the modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very time of the shaking (with respect to the immediate threat: buildings or the like); knowledge about the source of the shaking, etc. One need not be a specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws which are poorly known (if not out of the reach of engineers for a large portion of the building stock); if a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. The way the population inside the buildings at the time of shaking is affected by the physical damage caused to the buildings is, likewise, not precisely known. The paper analyzes the influence of uncertainties in the determination of strong event parameters by alert seismological surveys and of the simulation models used at all stages, from estimating shaking intensity

  3. Uncertainties in Site Amplification Estimation

    Science.gov (United States)

    Cramer, C. H.; Bonilla, F.; Hartzell, S.

    2004-12-01

    Typically, geophysical profiles (layer thickness, velocity, density, Q) and dynamic soil properties (modulus and damping versus strain curves) are used with appropriate input ground motions in a soil response computer code to estimate site amplification. Uncertainties in observations can be used to generate a distribution of possible site amplifications. The biggest sources of uncertainty in site-amplification estimates are the uncertainties in (1) input ground motions, (2) shear-wave velocities (Vs), (3) dynamic soil properties, (4) the soil response code used, and (5) dynamic pore pressure effects. A study of site amplification was conducted for the 1 km thick Mississippi embayment sediments beneath Memphis, Tennessee (see USGS OFR 04-1294 on the web). In this study, the first three sources of uncertainty resulted in a combined coefficient of variation of 10 to 60 percent. The choice of soil response computer program can lead to uncertainties in median estimates of +/- 50 percent. Dynamic pore pressure effects due to the passing of seismic waves in saturated soft sediments are normally not considered in site-amplification studies and can contribute further large uncertainties in site amplification estimates. The effects may range from dilatancy and high-frequency amplification (such as observed at some sites during the 1993 Kushiro-Oki, Japan and 2001 Nisqually, Washington earthquakes) to general soil failure and deamplification of ground motions (such as observed at Treasure Island during the 1989 Loma Prieta, California earthquake). Examples of two case studies using geotechnical data for downhole arrays in Kushiro, Japan and the Wildlife Refuge, California using one dynamic code, NOAH, will be presented as examples of modeling uncertainties associated with these effects. Additionally, an example of inversion for estimates of in-situ dilatancy-related geotechnical modeling parameters will be presented for the Kushiro, Japan site.

  4. Estimating uncertainty in resolution tests

    CSIR Research Space (South Africa)

    Goncalves, DP

    2006-05-01

  5. Estimating uncertainties in complex joint inverse problems

    Science.gov (United States)

    Afonso, Juan Carlos

    2016-04-01

    Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should always be conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related

  6. Estimating discharge measurement uncertainty using the interpolated variance estimator

    Science.gov (United States)

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
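
    The abstract does not reproduce the IVE formulas, so the sketch below only illustrates the underlying idea: compare each point velocity with the value interpolated from its neighbouring verticals and turn the residuals into a variance estimate. The 2/3 scaling, the helper name and the data are assumptions (independent errors, locally linear velocity profile), not the published estimator.

```python
import numpy as np

def interpolated_variance(point_velocities):
    """Rough IVE-style estimate of the random error variance of point velocities.

    Each interior value is compared with the value linearly interpolated from
    its two neighbours.  Assuming independent errors with variance sigma^2 and
    a locally linear true profile, the residual r_i = v_i - (v_{i-1}+v_{i+1})/2
    has variance 1.5*sigma^2, so sigma^2 is estimated as (2/3)*mean(r_i^2).
    """
    v = np.asarray(point_velocities, dtype=float)
    r = v[1:-1] - 0.5 * (v[:-2] + v[2:])
    return (2.0 / 3.0) * np.mean(r ** 2)

# Point velocities (m/s) at successive verticals of a cross section (made up).
velocities = [0.42, 0.55, 0.61, 0.66, 0.63, 0.58, 0.44]
print(f"estimated velocity error variance: {interpolated_variance(velocities):.5f} (m/s)^2")
```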

  7. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Kirchner, G. [Univ. of Bremen (Germany)]; Peterson, R. [AECL, Chalk River, ON (Canada)]; and others

    1996-11-01

    Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test users' influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the

  8. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology...

  9. Estimating the uncertainty in underresolved nonlinear dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Chorin, Alexandre; Hald, Ole

    2013-06-12

    The Mori-Zwanzig formalism of statistical mechanics is used to estimate the uncertainty caused by underresolution in the solution of a nonlinear dynamical system. A general approach is outlined and applied to a simple example. The noise term that describes the uncertainty turns out to be neither Markovian nor Gaussian. It is argued that this is the general situation.

  10. Modeling Uncertainty when Estimating IT Projects Costs

    OpenAIRE

    Winter, Michel; Mirbel, Isabelle; Crescenzo, Pierre

    2014-01-01

    In the current economic context, optimizing projects' cost is an obligation for a company to remain competitive in its market. Introducing statistical uncertainty in cost estimation is a good way to tackle the risk of going too far while minimizing the project budget: it allows the company to determine the best possible trade-off between estimated cost and acceptable risk. In this paper, we present new statistical estimators derived from the way IT companies estimate the projects' costs. In t...

  11. Uncertainty Analysis in the Noise Parameters Estimation

    Directory of Open Access Journals (Sweden)

    Pawlik P.

    2012-07-01

    The new approach to uncertainty estimation in the modelling of acoustic hazards by means of interval arithmetic is presented in the paper. In noise parameter estimation, the selection of the parameters specifying acoustic wave propagation in an open space, as well as of parameters which are required in the form of average values, often constitutes a difficult problem. In such a case, it is necessary to determine the variance and, strictly related to it, the uncertainty of the model parameters. The application of the interval arithmetic formalism allows the input data uncertainties to be estimated without the need to determine their probability distributions, which is required by other methods of uncertainty assessment. A further problem in the estimation of acoustic hazards is the lack of exact knowledge of the input parameters. In connection with the above, the modelling uncertainty was analysed as a function of the inaccuracy of the model parameters. To achieve this aim the interval arithmetic formalism, which represents a value and its uncertainty as an interval, was applied. The proposed approach is illustrated by the example of the application of the Dutch RMR SRM method, recommended by European Union Directive 2002/49/EC, to railway noise modelling.
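
    A minimal sketch of the interval-arithmetic idea (not the RMR/SRM noise model itself; the class and the dB figures are invented for illustration), showing how input uncertainties propagate as intervals without assuming any probability distribution:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# Illustrative: a source level of 95 +/- 2 dB and a distance-dependent
# attenuation of 18 +/- 3 dB yield a receiver-level interval directly.
source = Interval(93.0, 97.0)
attenuation = Interval(15.0, 21.0)
receiver = source - attenuation
print(f"receiver level in [{receiver.lo:.1f}, {receiver.hi:.1f}] dB")
```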

  12. Uncertainty Measures of Regional Flood Frequency Estimators

    DEFF Research Database (Denmark)

    Rosbjerg, Dan; Madsen, Henrik

    1995-01-01

    Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...

  13. Estimation of Modal Parameters and their Uncertainties

    DEFF Research Database (Denmark)

    Andersen, P.; Brincker, Rune

    1999-01-01

    In this paper it is shown how to estimate the modal parameters as well as their uncertainties using the prediction error method of a dynamic system on the basis of output measurements only. The estimation scheme is assessed by means of a simulation study. As a part of the introduction, an example...

  15. DPRESS: Localizing estimates of predictive uncertainty

    Directory of Open Access Journals (Sweden)

    Clark Robert D

    2009-07-01

    Background: The need to have a quantitative estimate of the uncertainty of prediction for QSAR models is steadily increasing, in part because such predictions are being widely distributed as tabulated values disconnected from the models used to generate them. Classical statistical theory assumes that the error in the population being modeled is independent and identically distributed (IID), but this is often not actually the case. Such inhomogeneous error (heteroskedasticity) can be addressed by providing an individualized estimate of predictive uncertainty for each particular new object u: the standard error of prediction s_u can be estimated as the non-cross-validated error s_t* for the closest object t* in the training set adjusted for its separation d from u in the descriptor space relative to the size of the training set. The predictive uncertainty factor γ_t* is obtained by distributing the internal predictive error sum of squares across objects in the training set based on the distances between them, hence the acronym: Distributed PRedictive Error Sum of Squares (DPRESS). Note that s_t* and γ_t* are characteristic of each training set compound contributing to the model of interest. Results: The method was applied to partial least-squares models built using 2D (molecular hologram) or 3D (molecular field) descriptors applied to mid-sized training sets (N = 75) drawn from a large (N = 304), well-characterized pool of cyclooxygenase inhibitors. The observed variation in predictive error for the external 229-compound test sets was compared with the uncertainty estimates from DPRESS. Good qualitative and quantitative agreement was seen between the distributions of predictive error observed and those predicted using DPRESS. Inclusion of the distance-dependent term was essential to getting good agreement between the estimated uncertainties and the observed distributions of predictive error. The uncertainty estimates derived by DPRESS were

  16. Uncertainties in the estimation of Mmax

    Indian Academy of Sciences (India)

    Girish C Joshi; Mukat Lal Sharma

    2008-11-01

    In the present paper, the parameters affecting the uncertainties in the estimation of Mmax have been investigated by exploring different methodologies being used in the analysis of seismicity catalogues and the estimation of seismicity parameters. A critical issue to be addressed before any scientific analysis is to assess the quality, consistency, and homogeneity of the data. The empirical relationships between different magnitude scales have been used for conversions for homogenization of seismicity catalogues to be used for further seismic hazard assessment studies. An endeavour has been made to quantify the uncertainties due to magnitude conversions, and the seismic hazard parameters are then estimated using different methods to consider the epistemic uncertainty in the process. The study area chosen is around Delhi. The b value and the magnitude of completeness for the four seismogenic sources considered around Delhi varied by more than 40% across the three catalogues compiled on the basis of different magnitude conversion relationships. The effect of the uncertainties has then been shown on the estimation of Mmax and the probabilities of occurrence of different magnitudes. The importance of considering and quantifying these uncertainties when carrying out seismic hazard assessment and, in turn, seismic microzonation is emphasized.

  17. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
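
    For statistics that are monotone in each datum, the bounds discussed in the report are straightforward to compute; a minimal sketch with invented interval data for the mean and median:

```python
import statistics

def mean_bounds(intervals):
    """Tight bounds on the sample mean for interval-valued data [lo, hi]."""
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

def median_bounds(intervals):
    """Tight bounds on the sample median; the median is monotone in each datum."""
    return (statistics.median(lo for lo, _ in intervals),
            statistics.median(hi for _, hi in intervals))

# Illustrative measurements, each reported only as an interval.
data = [(1.9, 2.3), (2.0, 2.1), (2.4, 2.9), (1.7, 2.0)]
print("mean lies in  ", mean_bounds(data))
print("median lies in", median_bounds(data))
```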

  18. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory]; Langenbrunner, James R [Los Alamos National Laboratory]; Hemez, Francois M [Los Alamos National Laboratory]; Ross, Timothy J [UNM]

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  19. Climate wavelet spectrum estimation under chronology uncertainties

    Science.gov (United States)

    Lenoir, G.; Crucifix, M.

    2012-04-01

    Several approaches to estimate the chronology of palaeoclimate records exist in the literature: simple interpolation between the tie points, orbital tuning, alignment on other data... These techniques generate a single estimate of the chronology. More recently, statistical generators of chronologies have appeared (e.g. OXCAL, BCHRON) allowing the construction of thousands of chronologies given the tie points and their uncertainties. These techniques are based on advanced statistical methods. They allow one to take into account the uncertainty of the timing of each climatic event recorded into the core. On the other hand, when interpreting the data, scientists often rely on time series analysis, and especially on spectral analysis. Given that paleo-data are composed of a large spectrum of frequencies, are non-stationary and are highly noisy, the continuous wavelet transform turns out to be a suitable tool to analyse them. The wavelet periodogram, in particular, is helpful to interpret visually the time-frequency behaviour of the data. Here, we combine statistical methods to generate chronologies with the power of continuous wavelet transform. Some interesting applications then come up: comparison of time-frequency patterns between two proxies (extracted from different cores), between a proxy and a statistical dynamical model, and statistical estimation of phase-lag between two filtered signals. All these applications consider explicitly the uncertainty in the chronology. The poster presents mathematical developments on the wavelet spectrum estimation under chronology uncertainties as well as some applications to Quaternary data based on marine and ice cores.

  20. Uncertainty estimation by convolution using spatial statistics.

    Science.gov (United States)

    Sanchez-Brea, Luis Miguel; Bernabeu, Eusebio

    2006-10-01

    Kriging has proven to be a useful tool in image processing since it behaves, under regular sampling, as a convolution. Convolution kernels obtained with kriging allow noise filtering and include the effects of the random fluctuations of the experimental data and the resolution of the measuring devices. The uncertainty at each location of the image can also be determined using kriging. However, this procedure is slow since, currently, only matrix methods are available. In this work, we compare the way kriging performs the uncertainty estimation with the standard statistical technique for magnitudes without spatial dependence. As a result, we propose a much faster technique, based on the variogram, to determine the uncertainty using a convolutional procedure. We check the validity of this approach by applying it to one-dimensional images obtained in diffractometry and two-dimensional images obtained by shadow moiré.

  1. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    Science.gov (United States)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about nine times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which indicates better forecasting accuracy of the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than being uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
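
    A compact sketch of the GLUE workflow that ɛ-NSGAII is intended to accelerate, here driven by a plain Latin hypercube sample and a toy exponential-recession model rather than the XAJ model; the threshold, weighting and all data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, bounds):
    """Plain Latin hypercube sample over box-shaped parameter bounds."""
    d = len(bounds)
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n   # one draw per stratum
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])                  # decouple the columns
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

def toy_model(theta, t):
    """Stand-in 'hydrological' model: exponential recession (illustrative only)."""
    a, b = theta
    return a * np.exp(-b * t)

# Synthetic observations generated from known parameters plus noise.
t = np.linspace(0.0, 10.0, 50)
obs = toy_model((5.0, 0.4), t) + rng.normal(0.0, 0.2, t.size)

# GLUE: sample parameter sets, score each with a likelihood measure (here the
# Nash-Sutcliffe efficiency), keep "behavioral" sets above a threshold, and
# build likelihood-weighted prediction bounds.
theta = latin_hypercube(2000, [(1.0, 10.0), (0.05, 1.0)])
sims = np.array([toy_model(th, t) for th in theta])
nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
keep = nse > 0.7
weights = (nse[keep] - 0.7) / np.sum(nse[keep] - 0.7)

# Weighted 5-95% band at the last time step as an example.
s = sims[keep][:, -1]
order = np.argsort(s)
cdf = np.cumsum(weights[order])
lo, hi = s[order][np.searchsorted(cdf, 0.05)], s[order][np.searchsorted(cdf, 0.95)]
print(f"behavioral sets: {keep.sum()},  90% band at t=10: [{lo:.3f}, {hi:.3f}]")
```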

  2. Estimation of uncertainty for fatigue growth rate at cryogenic temperatures

    Science.gov (United States)

    Nyilas, Arman; Weiss, Klaus P.; Urbach, Elisabeth; Marcinek, Dawid J.

    2014-01-01

    Fatigue crack growth rate (FCGR) measurement data for high strength austenitic alloys in cryogenic environments suffer in general from a high degree of scatter, in particular in the ΔK regime below 25 MPa√m. Standard mathematical smoothing techniques ultimately force a linear relationship in the stage II regime (crack propagation rate versus ΔK) on a double-logarithmic scale, known as the Paris law; however, the bandwidth of uncertainty then relies somewhat arbitrarily upon the researcher's interpretation. The present paper deals with the application of the uncertainty concept to FCGR data as given by the GUM (Guide to the Expression of Uncertainty in Measurement), which since 1993 has been the recommended procedure for avoiding subjective estimation of error bands. Within this context, in the absence of a true value, the best estimate is evaluated by a statistical method using the crack propagation law as the mathematical measurement model equation and identifying all input parameters. Each parameter entering the measurement was processed using the Gaussian distribution law, with partial differentiation of the terms to estimate the sensitivity coefficients. The combined standard uncertainty, determined from each term with its computed sensitivity coefficient, finally yields the measurement uncertainty of the FCGR test result. The described uncertainty procedure has been applied within the framework of ITER to a recent FCGR measurement on a high strength and high toughness Type 316LN material tested at 7 K using a standard ASTM proportional compact tension specimen. The determined values of the Paris law constants, such as C0 and the exponent m, given as best estimates along with their uncertainties, may serve as a realistic basis for the life expectancy of cyclically loaded members.
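
    A minimal sketch of the GUM-style propagation described, applied to the Paris law da/dN = C·(ΔK)^m with invented input values (not the 316LN results); the usually strong correlation between C and m is neglected here for brevity:

```python
import numpy as np

def paris_rate(C, m, dK):
    """Paris law crack growth rate da/dN = C * (dK)**m."""
    return C * dK ** m

def combined_uncertainty(C, uC, m, um, dK, udK):
    """GUM-style combined standard uncertainty from sensitivity coefficients.

    A full evaluation would add the covariance term 2*c_C*c_m*u(C, m);
    it is omitted here to keep the sketch short.
    """
    f = paris_rate(C, m, dK)
    c_C = dK ** m                      # df/dC
    c_m = f * np.log(dK)               # df/dm
    c_dK = C * m * dK ** (m - 1)       # df/d(dK)
    return np.sqrt((c_C * uC) ** 2 + (c_m * um) ** 2 + (c_dK * udK) ** 2)

# Illustrative values only.
C, uC = 2.0e-12, 0.4e-12       # m/cycle per (MPa sqrt(m))^m
m_exp, um = 3.2, 0.15
dK, udK = 20.0, 0.5            # MPa sqrt(m)

rate = paris_rate(C, m_exp, dK)
u = combined_uncertainty(C, uC, m_exp, um, dK, udK)
print(f"da/dN = {rate:.3e} +/- {u:.3e} m/cycle (k=1)")
```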

  3. Estimating Uncertainties in Statistics Computed from DNS

    Science.gov (United States)

    Malaya, Nicholas; Oliver, Todd; Ulerich, Rhys; Moser, Robert

    2013-11-01

    Rigorous assessment of uncertainty is crucial to the utility of DNS results. Uncertainties in the computed statistics arise from two sources: finite sampling and the discretization of the Navier-Stokes equations. Due to the presence of non-trivial sampling error, standard techniques for estimating discretization error (such as Richardson Extrapolation) fail or are unreliable. This talk provides a systematic and unified approach for estimating these errors. First, a sampling error estimator that accounts for correlation in the input data is developed. Then, this sampling error estimate is used as an input to a probabilistic extension of Richardson extrapolation in order to characterize the discretization error. These techniques are used to investigate the sampling and discretization errors in the DNS of a wall-bounded turbulent flow at Reτ = 180. We will show a well-resolved DNS simulation which, for the centerline velocity, possesses 0.02% sampling error and discretization errors of 0.003%. These results imply that standard resolution heuristics for DNS accurately predict required grid sizes. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].
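
    A small sketch of a correlation-aware sampling-error estimate of the kind described; the truncation heuristic and the AR(1) test signal are illustrative assumptions, not the authors' estimator:

```python
import numpy as np

def sampling_error_of_mean(x, max_lag=None):
    """Standard error of the sample mean accounting for serial correlation.

    Uses var(mean) ~ (var(x)/N) * (1 + 2*sum_k rho_k), truncating the
    autocorrelation sum at the first non-positive lag (a common heuristic).
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    var = xc @ xc / n
    max_lag = max_lag or n // 4
    corr_sum = 0.0
    for k in range(1, max_lag):
        rho = (xc[:-k] @ xc[k:]) / (n * var)
        if rho <= 0.0:
            break
        corr_sum += rho
    return np.sqrt(var / n * (1.0 + 2.0 * corr_sum))

# Correlated synthetic samples (AR(1) process), illustrative only.
rng = np.random.default_rng(1)
z = np.empty(10000)
z[0] = 0.0
for i in range(1, z.size):
    z[i] = 0.9 * z[i - 1] + rng.normal()
print("naive s.e.    :", z.std(ddof=1) / np.sqrt(z.size))
print("corrected s.e.:", sampling_error_of_mean(z))
```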

  4. Uncertainty relation based on unbiased parameter estimations

    Science.gov (United States)

    Sun, Liang-Liang; Song, Yong-Shun; Qiao, Cong-Feng; Yu, Sixia; Chen, Zeng-Bing

    2017-02-01

    Heisenberg's uncertainty relation has been extensively studied in the spirit of its well-known original form, in which the inaccuracy measures used exhibit some controversial properties and do not conform with quantum metrology, where the measurement precision is well defined in terms of estimation theory. In this paper, we treat the joint measurement of incompatible observables as a parameter estimation problem, i.e., estimating the parameters characterizing the statistics of the incompatible observables. Our crucial observation is that, in a sequential measurement scenario, the bias induced by the first unbiased measurement in the subsequent measurement can be eradicated by the information acquired, allowing one to extract unbiased information of the second measurement of an incompatible observable. In terms of Fisher information we propose a kind of information comparison measure and explore various types of trade-offs between the information gains and measurement precisions, which interpret the uncertainty relation as a surplus variance trade-off over individual perfect measurements instead of a constraint on extracting complete information of incompatible observables.

  5. Uncertainty estimation in finite fault inversion

    Science.gov (United States)

    Dettmer, Jan; Cummins, Phil R.; Benavente, Roberto

    2016-04-01

    This work considers uncertainty estimation for kinematic rupture models in finite fault inversion by Bayesian sampling. Since the general problem of slip estimation on an unknown fault from incomplete and noisy data is highly non-linear and currently intractable, assumptions are typically made to simplify the problem. These almost always include linearization of the time dependence of rupture by considering multiple discrete time windows, and a tessellation of the fault surface into a set of 'subfaults' whose dimensions are fixed below what is subjectively thought to be resolvable by the data. Even non-linear parameterizations are based on a fixed discretization. This results in over-parametrized models which include more parameters than resolvable by the data and require regularization criteria that stabilize the inversion. While it is increasingly common to consider slip uncertainties arising from observational error, the effects of the assumptions implicit in parameterization choices are rarely if ever considered. Here, we show that linearization and discretization assumptions can strongly affect both slip and uncertainty estimates and that therefore the selection of parametrizations should be included in the inference process. We apply Bayesian model selection to study the effect of parametrization choice on inversion results. The Bayesian sampling method which produces inversion results is based on a trans-dimensional rupture discretization which adapts the spatial and temporal parametrization complexity based on data information and does not require regularization. Slip magnitude, direction and rupture velocity are unknowns across the fault and causal first rupture times are obtained by solving the Eikonal equation for a spatially variable rupture-velocity field. The method provides automated local adaptation of rupture complexity based on data information and does not assume globally constant resolution. This is an important quality since seismic data do not

  6. Estimating real-time predictive hydrological uncertainty

    NARCIS (Netherlands)

    Verkade, J.S.

    2015-01-01

    Flood early warning systems provide a potentially highly effective flood risk reduction measure. The effectiveness of early warning, however, is affected by forecasting uncertainty: the impossibility of knowing, in advance, the exact future state of hydrological systems. Early warning systems

  7. Generalized likelihood uncertainty estimation (GLUE) using adaptive Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik

    2008-01-01

    estimate of the associated uncertainty. This uncertainty arises from incomplete process representation, uncertainty in initial conditions, input, output and parameter error. The generalized likelihood uncertainty estimation (GLUE) framework was one of the first attempts to represent prediction uncertainty...

  8. Estimated Frequency Domain Model Uncertainties used in Robust Controller Design

    DEFF Research Database (Denmark)

    Tøffner-Clausen, S.; Andersen, Palle; Stoustrup, Jakob

    1994-01-01

    This paper deals with the combination of system identification and robust controller design. Recent results on estimation of frequency domain model uncertainty are...

  9. Risk, unexpected uncertainty, and estimation uncertainty: Bayesian learning in unstable settings.

    Directory of Open Access Journals (Sweden)

    Elise Payzan-LeNestour

    Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.
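
    A minimal Beta-Bernoulli sketch (illustrative, not the six-arm restless-bandit task) separating estimation uncertainty, which shrinks with data, from irreducible risk, which does not; capturing unexpected uncertainty would additionally require allowing for jumps in the payoff probability:

```python
import numpy as np

# Bayesian updating of a single arm's payoff probability with a Beta(a, b)
# posterior.  "Estimation uncertainty" (ambiguity) is the posterior spread
# over p; "risk" is the outcome standard deviation sqrt(p*(1-p)) that remains
# even if p were known exactly.  Illustrative sketch only.
rng = np.random.default_rng(2)
a, b = 1.0, 1.0                        # uniform prior
true_p = 0.35

for trial in range(1, 201):
    reward = rng.random() < true_p     # Bernoulli outcome
    a, b = a + reward, b + (1 - reward)
    if trial in (10, 50, 200):
        p_hat = a / (a + b)
        estimation_sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
        risk_sd = np.sqrt(p_hat * (1 - p_hat))
        print(f"n={trial:3d}  p_hat={p_hat:.3f}  "
              f"estimation sd={estimation_sd:.3f}  risk sd={risk_sd:.3f}")
```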

  10. Traceability and uncertainty estimation in coordinate metrology

    DEFF Research Database (Denmark)

    Hansen, Hans Nørgaard; Savio, Enrico; De Chiffre, Leonardo

    2001-01-01

    National and international standards have defined performance verification procedures for coordinate measuring machines (CMMs) that typically involve their ability to measure calibrated lengths and to a certain extent form. It is recognised that, without further analysis or testing, these results are insufficient to determine the task specific uncertainty of most measurements. Therefore, performance verification methods defined in current standards do not guarantee traceability of measurements performed with a CMM for all measurement tasks, and procedures for the assessment of task-related uncertainties are required. Depending on the requirements for uncertainty level, different approaches may be adopted to achieve traceability. Especially in the case of complex measurement situations and workpieces the procedures are not trivial. This paper discusses the establishment of traceability in coordinate metrology...

  12. Uncertainty in Forest Net Present Value Estimations

    Directory of Open Access Journals (Sweden)

    Ilona Pietilä

    2010-09-01

    Uncertainty related to inventory data, growth models and timber price fluctuation was investigated in the assessment of forest property net present value (NPV). The degree of uncertainty associated with inventory data was obtained from previous area-based airborne laser scanning (ALS) inventory studies. The study was performed, applying the Monte Carlo simulation, using stand-level growth and yield projection models and three alternative rates of interest (3, 4 and 5%). Timber price fluctuation was portrayed with geometric mean-reverting (GMR) price models. The analysis was conducted for four alternative forest properties having varying compartment structures: (A) a property having an even development class distribution, (B) sapling stands, (C) young thinning stands, and (D) mature stands. Simulations resulted in predicted yield value (predicted NPV) distributions at both stand and property levels. Our results showed that ALS inventory errors were the most prominent source of uncertainty, leading to a 5.1–7.5% relative deviation of property-level NPV when an interest rate of 3% was applied. Interestingly, ALS inventory led to significant biases at the property level, ranging from 8.9% to 14.1% (3% interest rate). ALS inventory-based bias was the most significant in mature stand properties. Errors related to the growth predictions led to a relative standard deviation in NPV varying from 1.5% to 4.1%. Growth model-related uncertainty was most significant in sapling stand properties. Timber price fluctuation caused relative standard deviations ranging from 3.4% to 6.4% (3% interest rate). The combined relative variation caused by inventory errors, growth model errors and timber price fluctuation varied, depending on the property type and applied rates of interest, from 6.4% to 12.6%. By applying the methodology described here, one may take into account the effects of various uncertainty factors in the prediction of forest yield value and to supply the
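
    A toy Monte Carlo sketch of the propagation idea; the relative-error models, function names and all numbers are invented, and the study's ALS error structure, growth models and autocorrelated GMR price process are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

def npv_once(volumes, prices, years, rate, vol_cv=0.07, growth_cv=0.03, price_sd=0.05):
    """One Monte Carlo realisation of a property-level NPV.

    Perturbs inventory volumes, growth and prices with simple relative-error
    models; a real GMR price process would be autocorrelated over time.
    """
    v = volumes * (1 + rng.normal(0, vol_cv, volumes.size))    # inventory error
    v *= (1 + rng.normal(0, growth_cv, volumes.size))          # growth-model error
    p = prices * (1 + rng.normal(0, price_sd, prices.size))    # price fluctuation
    return np.sum(v * p / (1 + rate) ** years)

# Hypothetical harvests: volumes (m^3) realised in given years at given prices.
volumes = np.array([120.0, 300.0, 450.0])
prices = np.array([55.0, 58.0, 60.0])      # EUR/m^3
years = np.array([5, 15, 30])

draws = np.array([npv_once(volumes, prices, years, 0.03) for _ in range(5000)])
print(f"NPV = {draws.mean():,.0f} EUR, relative sd = {100 * draws.std() / draws.mean():.1f}%")
```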

  13. Uncertainty Estimates for Theoretical Atomic and Molecular Data

    CERN Document Server

    Chung, H -K; Bartschat, K; Csaszar, A G; Drake, G W F; Kirchner, T; Kokoouline, V; Tennyson, J

    2016-01-01

    Sources of uncertainty are reviewed for calculated atomic and molecular data that are important for plasma modeling: atomic and molecular structure and cross sections for electron-atom, electron-molecule, and heavy particle collisions. We concentrate on model uncertainties due to approximations to the fundamental many-body quantum mechanical equations and we aim to provide guidelines to estimate uncertainties as a routine part of computations of data for structure and scattering.

  14. Assessing Uncertainty of Interspecies Correlation Estimation Models for Aromatic Compounds

    Science.gov (United States)

    We developed Interspecies Correlation Estimation (ICE) models for aromatic compounds containing 1 to 4 benzene rings to assess uncertainty in toxicity extrapolation in two data compilation approaches. ICE models are mathematical relationships between surrogate and predicted test ...

  15. Validation and Implementation of Uncertainty Estimates of Calculated Transition Rates

    Directory of Open Access Journals (Sweden)

    Jörgen Ekman

    2014-05-01

    Uncertainties of calculated transition rates in LS-allowed electric dipole transitions in boron-like O IV and carbon-like Fe XXI are estimated using an approach in which differences in line strengths calculated in length and velocity gauges are utilized. Estimated uncertainties are compared and validated against several high-quality theoretical data sets in O IV, and implemented in large scale calculations in Fe XXI.

  16. Uncertainty principle estimates for vector fields

    OpenAIRE

    Pérez Moreno, Carlos; Wheeden, Richard L.

    2001-01-01

    We derive weighted norm estimates for integral operators of potential type and for their related maximal operators. These operators are generalizations of the classical fractional integrals and fractional maximal functions. The norm estimates are derived in the context of a space of homogeneous type. The conditions required of the weight functions involve generalizations of the Fefferman-Phong "r-bump" condition. The results improve some earlier ones of the same kind, and they also extend to ...

  17. On the Estimation of Systematic Uncertainties of Star Formation Histories

    Energy Technology Data Exchange (ETDEWEB)

    Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ 85734 (United States)

    2012-05-20

    In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects of this are most notable in the case of shallow photometry, with which SFH measurements rely on evolved stars.

  18. On the Estimation of Systematic Uncertainties of Star Formation Histories

    CERN Document Server

    Dolphin, Andrew E

    2012-01-01

    In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. Effects of this are most notable in the case of shallow photometry, with which SFH measurements rely on evolved stars.

  19. Triangular and Trapezoidal Fuzzy State Estimation with Uncertainty on Measurements

    Directory of Open Access Journals (Sweden)

    Mohammad Sadeghi Sarcheshmah

    2012-01-01

    In this paper, a new method for uncertainty analysis in fuzzy state estimation is proposed. The uncertainty is expressed in the measurements. Uncertainties in measurements are modelled with different fuzzy membership functions (triangular and trapezoidal). To find the fuzzy distribution of any state variable, the problem is formulated as a constrained linear programming (LP) optimization. The viability of the proposed method is verified by comparison with the results obtained from the weighted least squares (WLS) and the fuzzy state estimation (FSE) approaches in the 6-bus system and in the IEEE 14- and 30-bus systems.

  20. Estimation of measurement uncertainty arising from manual sampling of fuels.

    Science.gov (United States)

    Theodorou, Dimitrios; Liapis, Nikolaos; Zannikos, Fanourios

    2013-02-15

    Sampling is an important part of any measurement process and is therefore recognized as an important contributor to the measurement uncertainty. A reliable estimation of the uncertainty arising from sampling of fuels leads to a better control of risks associated with decisions concerning whether product specifications are met or not. The present work describes and compares the results of three empirical statistical methodologies (classical ANOVA, robust ANOVA and range statistics) using data from a balanced experimental design, which includes duplicate samples analyzed in duplicate from 104 sampling targets (petroleum retail stations). These methodologies are used for the estimation of the uncertainty arising from the manual sampling of fuel (automotive diesel) and the subsequent sulfur mass content determination. The results of the three methodologies differ statistically, with the expanded uncertainty of sampling being in the range of 0.34-0.40 mg kg(-1), while the relative expanded uncertainty lies in the range of 4.8-5.1%, depending on the methodology used. The estimate from robust ANOVA (sampling expanded uncertainty of 0.34 mg kg(-1) or 4.8% in relative terms) is considered more reliable, because of the presence of outliers within the 104 datasets used for the calculations. Robust ANOVA, in contrast to classical ANOVA and range statistics, accommodates outlying values, lessening their effects on the produced estimates. The results of this work also show that, in the case of manual sampling of fuels, the main contributor to the whole measurement uncertainty is the analytical measurement uncertainty, with the sampling uncertainty accounting for only 29% of the total measurement uncertainty.
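
    A minimal sketch of the classical-ANOVA variant of the duplicate-design calculation (the robust ANOVA preferred in the paper additionally down-weights outliers); the data are simulated, not the 104 retail-station results:

```python
import numpy as np

def sampling_and_analysis_uncertainty(data):
    """Classical-ANOVA estimates from a balanced duplicate design.

    data has shape (targets, 2 samples, 2 analyses).
    """
    data = np.asarray(data, dtype=float)
    # Analytical variance from within-sample duplicate analyses.
    d_anal = data[:, :, 0] - data[:, :, 1]
    s2_anal = np.mean(d_anal ** 2) / 2.0
    # Sampling variance from differences between the two sample means per target:
    # var(d_samp) = 2*s2_samp + s2_anal, so solve for s2_samp.
    means = data.mean(axis=2)
    d_samp = means[:, 0] - means[:, 1]
    s2_samp = max(0.0, (np.mean(d_samp ** 2) - s2_anal) / 2.0)
    return np.sqrt(s2_samp), np.sqrt(s2_anal)

# Simulated sulfur results (mg/kg): 5 targets x 2 samples x 2 analyses.
rng = np.random.default_rng(4)
truth = rng.normal(8.0, 1.0, size=(5, 1, 1))
data = truth + rng.normal(0, 0.17, size=(5, 2, 1)) + rng.normal(0, 0.28, size=(5, 2, 2))
u_samp, u_anal = sampling_and_analysis_uncertainty(data)
print(f"u(sampling) = {u_samp:.2f} mg/kg, u(analysis) = {u_anal:.2f} mg/kg, "
      f"expanded sampling uncertainty (k=2) = {2 * u_samp:.2f} mg/kg")
```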

  1. Estimation and uncertainty of reversible Markov models

    CERN Document Server

    Trendelkamp-Schroer, Benjamin; Paul, Fabian; Noé, Frank

    2015-01-01

    Reversibility is a key concept in the theory of Markov models, simplified kinetic models for the conformation dynamics of molecules. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model relies heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is therefore crucial to the successful application of the previously developed theory. In this work we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation if reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices with and without a given stationary vector, taking into account the need for a suitable prior distribution preserving the meta-stable features of the observed process during posterior inference.
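
    A minimal sketch of a reversible estimator obtained by simply symmetrizing the count matrix; this shortcut enforces detailed balance by construction but is not the constrained maximum-likelihood or Bayesian estimators developed in the paper:

```python
import numpy as np

def reversible_transition_matrix(counts):
    """Simple reversible estimator from transition counts via symmetrization."""
    C = np.asarray(counts, dtype=float)
    S = C + C.T                            # symmetrized counts
    T = S / S.sum(axis=1, keepdims=True)   # row-normalized transition matrix
    pi = S.sum(axis=1) / S.sum()           # its stationary distribution
    return T, pi

counts = np.array([[90, 10,  0],
                   [ 8, 80, 12],
                   [ 0, 15, 85]])
T, pi = reversible_transition_matrix(counts)
# Detailed balance check: pi_i * T_ij == pi_j * T_ji for all i, j.
assert np.allclose(pi[:, None] * T, (pi[:, None] * T).T)
print(T)
```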

  2. Optimizing step gauge measurements and uncertainties estimation

    Science.gov (United States)

    Hennebelle, F.; Coorevits, T.; Vincent, R.

    2017-02-01

    According to the standard ISO 10360-2 (Geometrical product specifications (GPS) - acceptance and reverification tests for coordinate measuring machines (CMM) - part 2: CMMs used for measuring size; ISO 10360-2:2001), the coordinate measuring machine (CMM) performance is verified against the manufacturer's specification. There are many types of gauges used for the calibration and verification of CMMs. The step gauges with parallel faces (KOBA, MITUTOYO) are well-known gauges for performing this test. Often with these gauges, only the unidirectional measurements are considered, which avoids having to deal with a residual error that affects the tip radius compensation. However, the ISO 10360-2 standard imposes the use of a bidirectional measurement. Thus, the bidirectional measurements must be corrected for the residual constant probe offset. In this paper, we optimize the step gauge measurement and a method is given to mathematically avoid the problem of the constant offset of the tip radius. This method involves measuring the step gauge once and then measuring it a second time with a shift of one slot in order to obtain a new set of equations. Uncertainties are also presented.

  3. Estimates of bias and uncertainty in recorded external dose

    Energy Technology Data Exchange (ETDEWEB)

    Fix, J.J.; Gilbert, E.S.; Baumgartner, W.V.

    1994-10-01

    A study is underway to develop an approach to quantify bias and uncertainty in recorded dose estimates for workers at the Hanford Site based on personnel dosimeter results. This paper focuses on selected experimental studies conducted to better define response characteristics of Hanford dosimeters. The study is more extensive than the experimental studies presented in this paper and includes detailed consideration and evaluation of other sources of bias and uncertainty. Hanford worker dose estimates are used in epidemiologic studies of nuclear workers. A major objective of these studies is to provide a direct assessment of the carcinogenic risk of exposure to ionizing radiation at low doses and dose rates. Considerations of bias and uncertainty in the recorded dose estimates are important in the conduct of this work. The method developed for use with Hanford workers can be considered an elaboration of the approach used to quantify bias and uncertainty in estimated doses for personnel exposed to radiation as a result of atmospheric testing of nuclear weapons between 1945 and 1962. This approach was first developed by a National Research Council (NRC) committee examining uncertainty in recorded film badge doses during atmospheric tests (NRC 1989). It involved quantifying both bias and uncertainty from three sources (i.e., laboratory, radiological, and environmental) and then combining them to obtain an overall assessment. Sources of uncertainty have been evaluated for each of three specific Hanford dosimetry systems (i.e., the Hanford two-element film dosimeter, 1944-1956; the Hanford multi-element film dosimeter, 1957-1971; and the Hanford multi-element TLD, 1972-1993) used to estimate personnel dose throughout the history of Hanford operations. Laboratory, radiological, and environmental sources of bias and uncertainty have been estimated based on historical documentation and, for angular response, on selected laboratory measurements.
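
    A sketch of the general NRC-style combination alluded to above, using multiplicative bias factors and lognormal (geometric standard deviation) uncertainties for the laboratory, radiological and environmental sources; all numerical values are illustrative, not Hanford results:

```python
import numpy as np

def corrected_dose(recorded, bias_factors, gsds):
    """Combine multiplicative bias factors and lognormal uncertainties.

    The recorded dose is divided by the product of bias factors, and the
    geometric standard deviations of independent sources are combined in
    quadrature on the log scale to give an approximate 95% interval.
    """
    best = recorded / np.prod(bias_factors)
    gsd_combined = np.exp(np.sqrt(np.sum(np.log(np.asarray(gsds)) ** 2)))
    lo, hi = best / gsd_combined ** 1.96, best * gsd_combined ** 1.96
    return best, (lo, hi)

best, ci = corrected_dose(
    recorded=10.0,                       # mSv, recorded dosimeter result
    bias_factors=[1.10, 1.05, 0.95],     # laboratory, radiological, environmental
    gsds=[1.15, 1.25, 1.10],
)
print(f"best estimate {best:.2f} mSv, 95% interval {ci[0]:.2f}-{ci[1]:.2f} mSv")
```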

  4. Using dynamical uncertainty models estimating uncertainty bounds on power plant performance prediction

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Mataji, B.

    2007-01-01

    Predicting the performance of large scale plants can be difficult due to model uncertainties etc., meaning that one can be almost certain that the prediction will diverge from the plant performance with time. In this paper output multiplicative uncertainty models are used as dynamical models of the prediction error. These proposed dynamical uncertainty models result in an upper and lower bound on the predicted performance of the plant. The dynamical uncertainty models are used to estimate the uncertainty of the predicted performance of a coal-fired power plant. The proposed scheme, which uses dynamical models, is applied to two different sets of measured plant data. The computed uncertainty bounds cover the measured plant output, while the nominal prediction is outside these uncertainty bounds for some samples in these examples.

  5. Estimation of sedimentary proxy records together with associated uncertainty

    Directory of Open Access Journals (Sweden)

    B. Goswami

    2014-06-01

    Full Text Available Sedimentary proxy records constitute a significant portion of the recorded evidence that allows us to investigate paleoclimatic conditions and variability. However, uncertainties in the dating of proxy archives limit our ability to fix the timing of past events and interpret proxy record inter-comparisons. While there are various age-modeling approaches to improve the estimation of the age-depth relations of archives, relatively little focus has been given to the propagation of the age (and radiocarbon calibration) uncertainties into the final proxy record. We present a generic Bayesian framework to estimate proxy records along with their associated uncertainty, starting from the radiometric age-depth and proxy-depth measurements, and a radiometric calibration curve if required. We provide analytical expressions for the posterior proxy probability distributions at any given calendar age, from which the expected proxy values and their uncertainty can be estimated. We illustrate our method using two synthetic datasets and then use it to construct the proxy records for groundwater inflow and surface erosion from Lonar lake in central India. Our analysis reveals interrelations between the uncertainty of the proxy record over time and the variance of the proxy along the depth of the archive. For the Lonar lake proxies, we show that, rather than the age uncertainties, it is the proxy variance combined with calibration uncertainty that accounts for most of the final uncertainty. We represent the proxy records as probability distributions on a precise, error-free time scale that makes further time series analyses and inter-comparison of proxies relatively simpler and clearer. Our approach provides a coherent understanding of age uncertainties within sedimentary proxy records that involve radiometric dating. It can potentially be used within existing age modeling structures to bring forth a reliable and consistent framework for proxy record estimation.

  6. Estimating the measurement uncertainty in forensic blood alcohol analysis.

    Science.gov (United States)

    Gullberg, Rod G

    2012-04-01

    For many reasons, forensic toxicologists are being asked to determine and report their measurement uncertainty in blood alcohol analysis. While understood conceptually, the elements and computations involved in determining measurement uncertainty are generally foreign to most forensic toxicologists. Several established and well-documented methods are available to determine and report the uncertainty in blood alcohol measurement. A straightforward bottom-up approach is presented that includes: (1) specifying the measurand, (2) identifying the major components of uncertainty, (3) quantifying the components, (4) statistically combining the components and (5) reporting the results. A hypothetical example is presented that employs reasonable estimates for forensic blood alcohol analysis assuming headspace gas chromatography. These computations are easily employed in spreadsheet programs as well. Determining and reporting measurement uncertainty is an important element in establishing fitness-for-purpose. Indeed, the demand for such computations and information from the forensic toxicologist will continue to increase.
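    As a rough illustration of the bottom-up approach described above, the sketch below combines a few hypothetical relative uncertainty components for a blood alcohol result by root-sum-of-squares and reports an expanded uncertainty with a coverage factor of 2. The component names and values are invented for the example, not taken from the record.

```python
import math

# Hypothetical relative standard uncertainties (as fractions of the result)
# for a blood alcohol measurement by headspace GC; values are illustrative only.
components = {
    "calibrator_concentration": 0.010,
    "method_precision":         0.015,
    "dilution_volumetrics":     0.008,
    "analytical_bias":          0.012,
}

measured_bac = 0.112  # g/100 mL, hypothetical result

# Combine independent components by root-sum-of-squares (GUM-style)
u_rel_combined = math.sqrt(sum(u**2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (approximately 95 % coverage)
k = 2
U = k * u_rel_combined * measured_bac

print(f"Combined relative standard uncertainty: {u_rel_combined:.4f}")
print(f"Result: {measured_bac:.3f} g/100 mL +/- {U:.3f} g/100 mL (k = {k})")
```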

  7. Imprecise probabilistic estimation of design floods with epistemic uncertainties

    Science.gov (United States)

    Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng

    2016-06-01

    An imprecise probabilistic framework for design flood estimation is proposed on the basis of the Dempster-Shafer theory to handle different epistemic uncertainties from data, probability distribution functions, and probability distribution parameters. These uncertainties are incorporated in cost-benefit analysis to generate the lower and upper bounds of the total cost for flood control, thus presenting improved information for decision making on design floods. Within the total cost bounds, a new robustness criterion is proposed to select a design flood that can tolerate higher levels of uncertainty. A variance decomposition approach is used to quantify individual and interactive impacts of the uncertainty sources on total cost. Results from three case studies, with 127, 104, and 54 year flood data sets, respectively, show that the imprecise probabilistic approach effectively combines aleatory and epistemic uncertainties from the various sources and provides upper and lower bounds of the total cost. Between the total cost and the robustness of design floods, a clear trade-off which is beyond the information that can be provided by the conventional minimum cost criterion is identified. The interactions among data, distributions, and parameters have a much higher contribution than parameters to the estimate of the total cost. It is found that the contributions of the various uncertainty sources and their interactions vary with different flood magnitude, but remain roughly the same with different return periods. This study demonstrates that the proposed methodology can effectively incorporate epistemic uncertainties in cost-benefit analysis of design floods.

  8. Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Habte, A.; Sengupta, M.; Reda, I.; Andreas, A.; Konings, J.

    2014-11-01

    Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.

  9. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement, so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in the absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates, which can provide actionable decision-making information for investing in energy conservation measures.
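    The following sketch illustrates, under simple assumptions, how cross-validation can yield an uncertainty for baseline energy predictions: a toy temperature-driven baseline model is fit on k−1 folds and evaluated on the held-out fold, and the spread of the out-of-sample residuals serves as the prediction uncertainty. The data, model form, and fold count are illustrative and are not those of the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly data: energy use driven by outdoor temperature plus noise
n = 2000
temp = rng.uniform(-5, 35, n)
energy = 50 + 2.2 * np.maximum(temp - 18, 0) + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), np.maximum(temp - 18, 0)])  # simple change-point regressor

# k-fold cross-validation: fit on k-1 folds, predict the held-out fold
k = 10
folds = np.array_split(rng.permutation(n), k)
cv_residuals = []
for fold in folds:
    train = np.setdiff1d(np.arange(n), fold)
    beta, *_ = np.linalg.lstsq(X[train], energy[train], rcond=None)
    cv_residuals.append(energy[fold] - X[fold] @ beta)
cv_residuals = np.concatenate(cv_residuals)

# Out-of-sample residual spread -> uncertainty of baseline predictions
u_pred = np.std(cv_residuals, ddof=1)
print(f"Cross-validated baseline prediction uncertainty: {u_pred:.2f} (1 sigma, energy units)")
```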

  10. Uncertainty estimation of continuous in-situ greenhouse gas observation

    Science.gov (United States)

    Karion, A.; Verhulst, K. R.; Kim, J.; Sloop, C.; Salameh, P.; Ghosh, S.

    2016-12-01

    Global trends in urbanization have focused community interest in urban greenhouse gas (GHG) emissions, leading to many recent studies on GHG emissions from cities. Many efforts to quantify urban GHG emissions have focused on establishing long-term, relatively dense networks of tower or roof-based continuous in-situ GHG observations. Here we introduce in-situ measurements from a network of tower sites in the Washington DC and Baltimore urban regions (NorthEast Corridor), designed specifically for use in atmospheric inversions to determine fossil-fuel emissions of carbon dioxide (Lopez-Coto et al, in review). Such flux estimation techniques rely on an understanding of the uncertainty associated with each observation, however, and how this uncertainty changes with concentration or site conditions. We have developed an uncertainty estimation method for continuous measurements made using the Earth Networks, Inc. GHG observing system, based on Picarro CRDS analyzers and GCWerks processing software. We find that the largest uncertainty component is due to the extrapolation of the calibration based on one reference gas standard, and that this uncertainty component is linearly dependent on the measured mole fraction. The uncertainty estimation has been developed for and applied to the LA Megacities project (Verhulst et al., in prep) and the NorthEast Corridor measurements, but can also be applied at other sites across the US. Establishing robust uncertainty estimates for these GHG observations relative to the WMO scales will allow these data to be incorporated in atmospheric inversion models along with other continental and global observations. *Certain commercial equipment is identified in this work in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by NIST, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.

  11. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    Science.gov (United States)

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
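    A minimal sketch of approach (ii) above: the laboratory's proficiency-test results are compared with the participant consensus means, and the mean and scatter of the relative differences give a bias and a standard uncertainty. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical proficiency-test history for blood alcohol: the laboratory's
# reported value and the all-participant consensus mean for each test (g/100 mL).
lab_results = np.array([0.081, 0.152, 0.102, 0.249, 0.078, 0.161, 0.119, 0.203])
consensus   = np.array([0.080, 0.150, 0.100, 0.250, 0.080, 0.160, 0.120, 0.200])

# Relative differences between the laboratory and the consensus values
rel_diff = (lab_results - consensus) / consensus

# Bias (mean difference) and standard uncertainty from the scatter of differences
bias = rel_diff.mean()
u_rel = rel_diff.std(ddof=1)

print(f"Relative bias vs. consensus: {bias:+.3%}")
print(f"Relative standard uncertainty from PT data: {u_rel:.3%}")
print(f"Expanded relative uncertainty (k=2): {2 * u_rel:.3%}")
```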

  12. Adaptive Sliding Mode Control Based on Uncertainty and Disturbance Estimator

    Directory of Open Access Journals (Sweden)

    Yue Zhu

    2014-01-01

    Full Text Available This paper presents an original adaptive sliding mode control strategy for a class of nonlinear systems based on an uncertainty and disturbance estimator. The nonlinear systems may have parametric uncertainties as well as unmatched uncertainties and external disturbances. The novel adaptive sliding mode control has several advantages over the traditional sliding mode control method. Firstly, no discontinuous sign function appears in the proposed adaptive sliding mode controller, nor is it replaced by a saturation function or a similar approximation. Therefore, chattering is avoided in essence, and the chattering avoidance does not come at the cost of reducing the robustness of the closed-loop systems. Secondly, the uncertainties do not need to satisfy the matching condition, and the bounds of the uncertainties are not required to be known. Thirdly, it is proved that the closed-loop systems are robust to parameter uncertainties as well as unmatched model uncertainties and external disturbances. The robust stability is analyzed progressively, from a second-order linear time-invariant system to a nonlinear system. Simulation of a pendulum system with motor dynamics verifies the effectiveness of the proposed method.

  13. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum-mean-squared-error (LMMSE) estimator, when the elements of x are statistically white.
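    The BDU iteration itself is not reproduced here, but the following sketch illustrates the underlying point that regularizing an ill-conditioned least-squares problem can reduce estimation error at low signal-to-noise ratio. The regularization parameter is chosen by a simple oracle grid search (comparing against the known truth) purely for illustration, which is not available in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: x is white, the observation matrix is ill-conditioned, SNR is low
n, m = 20, 10
x_true = rng.standard_normal(m)
A = rng.standard_normal((n, m)) @ np.diag(np.logspace(0, -2, m))  # poorly conditioned
y = A @ x_true + rng.normal(0, 1.0, n)

# Ordinary least squares
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# Regularized least squares; the regularization parameter is picked by an
# oracle grid search here, standing in for an iterative selection rule
best = None
for lam in np.logspace(-4, 2, 60):
    x_reg = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)
    err = np.linalg.norm(x_reg - x_true)  # oracle error, for illustration only
    if best is None or err < best[1]:
        best = (lam, err, x_reg)

print(f"LS error:           {np.linalg.norm(x_ls - x_true):.3f}")
print(f"Regularized error:  {best[1]:.3f}  (lambda = {best[0]:.2e})")
```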

  14. Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens

    ... the results of uncertainty analysis to predict the uncertainties in process design. For parameter estimation, large data sets of experimentally measured property values for a wide range of pure compounds are taken from the CAPEC database. A classical frequentist approach, i.e., the least-squares method, is adopted ... parameter, octanol/water partition coefficient, aqueous solubility, acentric factor, and liquid molar volume at 298 K. The performance of the property models for these properties with the revised set of model parameters is highlighted through a set of compounds not considered in the regression step ... sensitive properties for each unit operation are also identified. This analysis can be used to reduce the uncertainties in property estimates for the properties of critical importance (by performing additional experiments to obtain better experimental data and better model parameter values). Thus ...

  15. Methods for estimating uncertainty in factor analytic solutions

    Directory of Open Access Journals (Sweden)

    P. Paatero

    2013-08-01

    Full Text Available EPA PMF version 5.0 and the underlying multilinear engine executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS, displacement of factor elements (DISP, and bootstrap enhanced by displacement of factor elements (BS-DISP. The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.

  16. A bootstrap method for estimating uncertainty of water quality trends

    Science.gov (United States)

    Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura

    2015-01-01

    Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS-estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R-package.

  17. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same data set, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein for estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking the selection procedure into account could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
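    For context, the James-Stein shrinkage referred to above can be sketched as follows for p simultaneous normal means with known variance. This is the classical positive-part estimator shrinking toward zero, not the model-averaging scheme of the record.

```python
import numpy as np

rng = np.random.default_rng(2)

p = 10                       # number of means (p >= 3 required)
theta = rng.normal(0, 1, p)  # true means
sigma = 1.0                  # known observation standard deviation
x = rng.normal(theta, sigma) # one observation per mean

# Positive-part James-Stein shrinkage toward zero
shrink = max(0.0, 1.0 - (p - 2) * sigma**2 / np.sum(x**2))
theta_js = shrink * x

mse_mle = np.mean((x - theta) ** 2)
mse_js = np.mean((theta_js - theta) ** 2)
print(f"MSE of raw observations: {mse_mle:.3f}")
print(f"MSE of James-Stein:      {mse_js:.3f}")
```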

  18. Preliminary results on uncertainties in rainfall interception estimation

    Energy Technology Data Exchange (ETDEWEB)

    Muzylo, A.; Llorens, P.; Domingo, F.; Valente, Fe.; Beven, K.; Gallart, F.

    2009-07-01

    This work deals with some aspects of rainfall interception estimation uncertainty in a deciduous forest. The importance of interception loss measurement error is stressed. Confidence limits of the original and sparse Rutter interception model parameters, obtained from regressions for the leafed and leafless periods, are presented, as well as the variability of the free throughfall coefficient with event weather conditions. (Author) 8 refs.

  19. Uncertainty of Areal Rainfall Estimation Using Point Measurements

    Science.gov (United States)

    McCarthy, D.; Dotto, C. B. S.; Sun, S.; Bertrand-Krajewski, J. L.; Deletic, A.

    2014-12-01

    The spatial variability of precipitation has a great influence on the quantity and quality of runoff water generated from hydrological processes. In practice, point rainfall measurements (e.g., rain gauges) are often used to represent areal rainfall in catchments. The spatial rainfall variability is difficult to capture precisely, even with many rain gauges. Thus the rainfall uncertainty due to spatial variability should be taken into account in order to provide reliable rainfall-driven process modelling results. This study investigates the uncertainty of areal rainfall estimation due to rainfall spatial variability if point measurements are applied. The areal rainfall is usually estimated as a weighted sum of data from available point measurements. The expected error of areal rainfall estimates is zero if the estimator is unbiased. The variance of the error between the real and estimated areal rainfall is evaluated to indicate the uncertainty of areal rainfall estimates. This error variance can be expressed as a function of the variogram, which was originally applied in geostatistics to characterize a spatial variable. The variogram can be evaluated using measurements from a dense rain gauge network. The areal rainfall errors are evaluated in two areas with distinct climate regimes and rainfall patterns: the Greater Lyon area in France and the Melbourne area in Australia. The variograms of the two areas are derived based on 6-minute rainfall time series data from 2010 to 2013 and are then used to estimate uncertainties of areal rainfall represented by different numbers of point measurements in synthetic catchments of various sizes. The error variance of areal rainfall using one point measurement in the centre of a 1-km² catchment is 0.22 (mm/h)² in Lyon. When the point measurement is placed at one corner of the same-size catchment, the error variance becomes 0.82 (mm/h)², also in Lyon. Results for Melbourne were similar but presented larger uncertainty. Results ...
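    A sketch of the variogram-based error variance described above, assuming a hypothetical exponential variogram and a single gauge at the centre of a 1 km² catchment: the estimation variance of the areal mean from one point value is 2·γ̄(x0, A) − γ̄(A, A), evaluated here by discretizing the catchment on a grid. The variogram parameters are invented for the example and are not those derived in the study.

```python
import numpy as np

def variogram(h, sill=0.9, range_km=8.0, nugget=0.05):
    """Hypothetical exponential variogram of rainfall intensity, in (mm/h)^2."""
    h = np.asarray(h, dtype=float)
    return np.where(h > 0, nugget + sill * (1.0 - np.exp(-h / range_km)), 0.0)

# Discretize a square catchment (side length in km) on a regular grid
side = 1.0
grid = np.linspace(0, side, 21)
gx, gy = np.meshgrid(grid, grid)
pts = np.column_stack([gx.ravel(), gy.ravel()])

gauge = np.array([side / 2, side / 2])  # single gauge at the catchment centre

# Mean variogram between the gauge and the catchment, and within the catchment
d_gauge = np.linalg.norm(pts - gauge, axis=1)
g_point_area = variogram(d_gauge).mean()
d_within = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
g_area_area = variogram(d_within).mean()

# Estimation variance of the areal mean from one point value:
# sigma_E^2 = 2 * gamma_bar(x0, A) - gamma_bar(A, A)
sigma2_E = 2 * g_point_area - g_area_area
print(f"Areal rainfall error variance: {sigma2_E:.3f} (mm/h)^2")
```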

  20. DISTRIBUTED MONITORING SYSTEM RELIABILITY ESTIMATION WITH CONSIDERATION OF STATISTICAL UNCERTAINTY

    Institute of Scientific and Technical Information of China (English)

    Yi Pengxing; Yang Shuzi; Du Runsheng; Wu Bo; Liu Shiyuan

    2005-01-01

    Taking into account the whole system structure and the component reliability estimation uncertainty, a system reliability estimation method based on probability and statistical theory for distributed monitoring systems is presented. The variance and confidence intervals of the system reliability estimate are obtained by expressing system reliability as a linear sum of products of higher-order moments of component reliability estimates when the number of component or system survivals obeys a binomial distribution. The eigenfunction of the binomial distribution is used to determine the moments of the component reliability estimates, and a symbolic matrix that facilitates the search for explicit system reliability estimates is proposed. Furthermore, an application case is used to illustrate the procedure, and with the help of this example various issues, such as the applicability of this estimation model and measures to improve the reliability of monitoring systems, are discussed.
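    As a simplified illustration of expressing a system reliability estimate through moments of component estimates, the sketch below treats a two-component series system with independent binomial component estimates; the exact symbolic-matrix machinery of the record is not reproduced, and the component reliabilities and sample sizes are hypothetical.

```python
import numpy as np

def second_moment(p, n):
    """E[p_hat^2] for a binomial proportion estimate p_hat = X/n."""
    return p * (1 - p) / n + p**2

# Hypothetical series system of two monitored components: true reliabilities and
# the numbers of observed trials used to estimate them.
p1, n1 = 0.95, 200
p2, n2 = 0.90, 150

# System reliability estimate R_hat = p1_hat * p2_hat (independent estimates)
mean_R = p1 * p2
var_R = second_moment(p1, n1) * second_moment(p2, n2) - mean_R**2
sd_R = np.sqrt(var_R)

# Approximate 95 % confidence interval (normal approximation)
print(f"System reliability estimate: {mean_R:.4f} +/- {1.96 * sd_R:.4f}")

# Monte Carlo check of the analytical variance
rng = np.random.default_rng(3)
sims = (rng.binomial(n1, p1, 100_000) / n1) * (rng.binomial(n2, p2, 100_000) / n2)
print(f"Analytical sd: {sd_R:.5f}   Monte Carlo sd: {sims.std(ddof=1):.5f}")
```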

  1. Estimation of the uncertainty of analyte concentration from the measurement uncertainty.

    Science.gov (United States)

    Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F

    2015-09-01

    Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
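    A small numerical-simulation sketch in the spirit of the record: responses are generated from an assumed four-parameter logistic (4PL) standard curve with a fixed relative response error, back-calculated to concentrations, and the coefficient of variation of the estimate is evaluated across the concentration range. The curve parameters and error level are hypothetical, chosen only to show that the CV grows at the extremes of the range.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical four-parameter logistic (4PL) standard curve:
#   y = d + (a - d) / (1 + (x / c)**b)
a, b, c, d = 2.0, 1.3, 50.0, 0.05   # a: response at x=0, d: response at x->inf

def inverse_4pl(y):
    """Concentration corresponding to response y (requires d < y < a)."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

def cv_of_concentration(x_true, cv_response=0.05, n_sim=20_000):
    """Monte Carlo CV of the back-calculated concentration for a given true x."""
    y_true = d + (a - d) / (1 + (x_true / c) ** b)
    y_obs = y_true * (1 + cv_response * rng.standard_normal(n_sim))
    y_obs = np.clip(y_obs, d + 1e-6, a - 1e-6)       # keep responses invertible
    x_est = inverse_4pl(y_obs)
    return x_est.std(ddof=1) / x_est.mean()

for x in [1, 5, 20, 50, 200, 800]:
    print(f"x = {x:5.0f}   CV of estimated concentration = {cv_of_concentration(x):6.1%}")
```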

  2. Uncertainty in techno-economic estimates of cellulosic ethanol production due to experimental measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Vicari Kristin J

    2012-04-01

    Full Text Available Abstract Background Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE) model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS in the biomass slurries. Conclusions We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements. This result sets a lower bound on the error bars of the TE model outputs.

  3. Uncertainties in the Sunspot Numbers: Estimation and Implications

    CERN Document Server

    de Wit, Thierry Dudok; Clette, Frédéric

    2016-01-01

    Sunspot number series are subject to various uncertainties, which are still poorly known. The need for their better understanding was recently highlighted by the major makeover of the international Sunspot Number [Clette et al., Space Science Reviews, 2014]. We present the first thorough estimation of these uncertainties, which behave as Poisson-like random variables with a multiplicative coefficient that is time- and observatory-dependent. We provide a simple expression for these uncertainties, and reveal how their evolution in time coincides with changes in the observations, and processing of the data. Knowing their value is essential for properly building composites out of multiple observations, and for preserving the stability of the composites in time.

  4. On the Estimation of Random Uncertainties of Star Formation Histories

    CERN Document Server

    Dolphin, Andrew E

    2013-01-01

    The standard technique for measurement of random uncertainties of star formation histories (SFHs) is the bootstrap Monte Carlo, in which the color-magnitude diagram (CMD) is repeatedly resampled. The variation in SFHs measured from the resampled CMDs is assumed to represent the random uncertainty in the SFH measured from the original data. However, this technique systematically and significantly underestimates the uncertainties for times in which the measured star formation rate is low or zero, leading to overly (and incorrectly) high confidence in that measurement. This study proposes an alternative technique, the Markov Chain Monte Carlo (MCMC), which samples the probability distribution of the parameters used in the original solution to directly estimate confidence intervals. While the most commonly used MCMC algorithms are incapable of adequately sampling a probability distribution that can involve thousands of highly correlated dimensions, the Hybrid Monte Carlo algorithm is shown to be extremely effective...

  5. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Indian Academy of Sciences (India)

    Diego Rivera; Yessica Rivas; Alex Godoy

    2015-02-01

    Hydrological models are simplified representations of natural processes and are subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model architecture uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, which is known as equifinality. We assessed the outputs of a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%), using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE (Generalized Likelihood Uncertainty Estimation) methodology were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. We then analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits equifinality at a first stage. However, it was possible to narrow the parameter ranges and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³ s⁻¹ after fixing the parameter controlling the areal precipitation over the watershed. This decrease is equivalent to reducing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite criticisms of the GLUE methodology, such as its lack of statistical formality, it proved a useful tool for assisting the modeller in identifying critical parameters.
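    A compact sketch of the GLUE procedure on a toy single-parameter rainfall-runoff model: parameters are sampled by Monte Carlo, the Nash-Sutcliffe efficiency serves as the informal likelihood, sets above a behavioural threshold are retained, and likelihood-weighted quantiles give uncertainty bounds. Everything here (model, data, threshold) is synthetic and only illustrates the general method, not the Chillan River study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy rainfall-runoff model: a single linear reservoir with outflow Q[t] = k * S[t]
def simulate(rain, k, s0=10.0):
    s, q = s0, []
    for p in rain:
        s = s * (1 - k) + p
        q.append(k * s)
    return np.array(q)

# Synthetic "observations" generated with a known parameter plus noise
rain = rng.gamma(0.6, 4.0, 365)
q_obs = simulate(rain, k=0.25) + rng.normal(0, 0.4, 365)

# GLUE: Monte Carlo sampling of the parameter, Nash-Sutcliffe efficiency (NSE)
# as the informal likelihood, and a behavioural threshold
n_samples, threshold = 2000, 0.6
ks = rng.uniform(0.05, 0.6, n_samples)
sims = np.array([simulate(rain, k) for k in ks])
nse = 1 - ((sims - q_obs) ** 2).sum(axis=1) / ((q_obs - q_obs.mean()) ** 2).sum()

behavioural = nse > threshold
beh_sims = sims[behavioural]
weights = nse[behavioural] - threshold
weights /= weights.sum()

def weighted_quantile(values, w, q):
    """Quantile of `values` under weights `w` (both 1-D, w summing to 1)."""
    idx = np.argsort(values)
    return values[idx][np.searchsorted(np.cumsum(w[idx]), q)]

# Likelihood-weighted 5-95 % uncertainty bounds on the simulated discharge
lower = np.array([weighted_quantile(beh_sims[:, t], weights, 0.05) for t in range(365)])
upper = np.array([weighted_quantile(beh_sims[:, t], weights, 0.95) for t in range(365)])

print(f"Behavioural parameter sets: {behavioural.sum()} of {n_samples}")
print(f"Observations inside the 5-95 % GLUE bounds: {((q_obs >= lower) & (q_obs <= upper)).mean():.2f}")
```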

  6. Quantifying the uncertainties of chemical evolution studies. II. Stellar yields

    CERN Document Server

    Romano, D; Tosi, M; Matteucci, F

    2010-01-01

    This is the second paper of a series which aims at quantifying the uncertainties in chemical evolution model predictions related to the underlying model assumptions. Specifically, it deals with the uncertainties due to the choice of the stellar yields. We adopt a widely used model for the chemical evolution of the Galaxy and test the effects of changing the stellar nucleosynthesis prescriptions on the predicted evolution of several chemical species. We find that, except for a handful of elements whose nucleosynthesis in stars is well understood by now, large uncertainties still affect the model predictions. This is especially true for the majority of the iron-peak elements, but also for much more abundant species such as carbon and nitrogen. The main causes of the mismatch we find among the outputs of different models assuming different stellar yields and among model predictions and observations are: (i) the adopted location of the mass cut in models of type II supernova explosions; (ii) the adopted strength ...

  7. The unbearable uncertainty of Bayesian divergence time estimation

    Institute of Scientific and Technical Information of China (English)

    Mario DOS REIS; Ziheng YANG

    2013-01-01

    Divergence time estimation using molecular sequence data relying on uncertain fossil calibrations is an unconventional statistical estimation problem. As the sequence data provide information about the distances only, estimation of absolute times and rates has to rely on information in the prior, so that the model is only semi-identifiable. In this paper, we use a combination of mathematical analysis, computer simulation, and real data analysis to examine the uncertainty in posterior time estimates when the amount of sequence data increases. The analysis extends the infinite-sites theory of Yang and Rannala, which predicts the posterior distribution of divergence times and rate when the amount of data approaches infinity. We found that the posterior credibility interval in general decreases and reaches a non-zero limit when the data size increases. However, for the node with the most precise fossil calibration (as measured by the interval width divided by the mid value), sequence data do not really make the time estimate any more precise. We propose a finite-sites theory which predicts that the square of the posterior interval width approaches its infinite-data limit at the rate 1/n, where n is the sequence length. We suggest a procedure to partition the uncertainty of posterior time estimates into that due to uncertainties in fossil calibrations and that due to sampling errors in the sequence data. We evaluate the impact of conflicting fossil calibrations on posterior time estimation and point out that narrow credibility intervals or overly precise time estimates can be produced by conflicting or erroneous fossil calibrations.

  8. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    Science.gov (United States)

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

    Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.

  9. A novel workflow for seismic net pay estimation with uncertainty

    CERN Document Server

    Glinsky, Michael E; Unaldi, Muhlis; Nagassar, Vishal

    2016-01-01

    This paper presents a novel workflow for seismic net pay estimation with uncertainty. It is demonstrated on the Cassra/Iris Field. The theory for the stochastic wavelet derivation (which estimates the seismic noise level along with the wavelet, time-to-depth mapping, and their uncertainties), the stochastic sparse spike inversion, and the net pay estimation (using secant areas) along with its uncertainty; will be outlined. This includes benchmarking of this methodology on a synthetic model. A critical part of this process is the calibration of the secant areas. This is done in a two step process. First, a preliminary calibration is done with the stochastic reflection response modeling using rock physics relationships derived from the well logs. Second, a refinement is made to the calibration to account for the encountered net pay at the wells. Finally, a variogram structure is estimated from the extracted secant area map, then used to build in the lateral correlation to the ensemble of net pay maps while matc...

  10. Estimation uncertainty of direct monetary flood damage to buildings

    Directory of Open Access Journals (Sweden)

    B. Merz

    2004-01-01

    Full Text Available Traditional flood design methods are increasingly supplemented or replaced by risk-oriented methods which are based on comprehensive risk analyses. Besides meteorological, hydrological and hydraulic investigations such analyses require the estimation of flood impacts. Flood impact assessments mainly focus on direct economic losses using damage functions which relate property damage to damage-causing factors. Although the flood damage of a building is influenced by many factors, usually only inundation depth and building use are considered as damage-causing factors. In this paper a data set of approximately 4000 damage records is analysed. Each record represents the direct monetary damage to an inundated building. The data set covers nine flood events in Germany from 1978 to 1994. It is shown that the damage data follow a Lognormal distribution with a large variability, even when stratified according to the building use and to water depth categories. Absolute depth-damage functions which relate the total damage to the water depth are not very helpful in explaining the variability of the damage data, because damage is determined by various parameters besides the water depth. Because of this limitation it has to be expected that flood damage assessments are associated with large uncertainties. It is shown that the uncertainty of damage estimates depends on the number of flooded buildings and on the distribution of building use within the flooded area. The results are exemplified by a damage assessment for a rural area in southwest Germany, for which damage estimates and uncertainty bounds are quantified for a 100-year flood event. The estimates are compared to reported flood damages of a severe flood in 1993. Given the enormous uncertainty of flood damage estimates the refinement of flood damage data collection and modelling are major issues for further empirical and methodological improvements.

  11. Handling uncertainty in quantitative estimates in integrated resource planning

    Energy Technology Data Exchange (ETDEWEB)

    Tonn, B.E. [Oak Ridge National Lab., TN (United States); Wagner, C.G. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Mathematics

    1995-01-01

    This report addresses uncertainty in Integrated Resource Planning (IRP). IRP is a planning and decisionmaking process employed by utilities, usually at the behest of Public Utility Commissions (PUCs), to develop plans to ensure that utilities have resources necessary to meet consumer demand at reasonable cost. IRP has been used to assist utilities in developing plans that include not only traditional electricity supply options but also demand-side management (DSM) options. Uncertainty is a major issue for IRP. Future values for numerous important variables (e.g., future fuel prices, future electricity demand, stringency of future environmental regulations) cannot ever be known with certainty. Many economically significant decisions are so unique that statistically-based probabilities cannot even be calculated. The entire utility strategic planning process, including IRP, encompasses different types of decisions that are made with different time horizons and at different points in time. Because of fundamental pressures for change in the industry, including competition in generation, gone is the time when utilities could easily predict increases in demand, enjoy long lead times to bring on new capacity, and bank on steady profits. The purpose of this report is to address in detail one aspect of uncertainty in IRP: Dealing with Uncertainty in Quantitative Estimates, such as the future demand for electricity or the cost to produce a mega-watt (MW) of power. A theme which runs throughout the report is that every effort must be made to honestly represent what is known about a variable that can be used to estimate its value, what cannot be known, and what is not known due to operational constraints. Applying this philosophy to the representation of uncertainty in quantitative estimates, it is argued that imprecise probabilities are superior to classical probabilities for IRP.

  12. [Estimation of uncertainty of measurement in clinical biochemistry].

    Science.gov (United States)

    Enea, Maria; Hristodorescu, Cristina; Schiriac, Corina; Morariu, Dana; Mutiu, Tr; Dumitriu, Irina; Gurzu, B

    2009-01-01

    The uncertainty of measurement (UM), or measurement uncertainty, is the parameter associated with the result of a measurement. Repeated measurements usually give slightly different results for the same analyte, sometimes a little higher and sometimes a little lower, because the result of a measurement depends not only on the analyte itself but also on a number of error factors that cast doubt on the estimate. The uncertainty of measurement is the quantitative, mathematical expression of this doubt. UM is a range of measured values which is likely to enclose the true value of the measurand. The calculation of UM for all types of laboratories is regulated by the ISO Guide to the Expression of Uncertainty in Measurement (abbreviated GUM) and SR ENV 13005:2003 (both recognized by European Accreditation). Even though the GUM rules for UM estimation are very strict, reporting the result together with its UM will increase the confidence of customers (patients or physicians). In this study the authors present possible approaches to UM assessment in laboratories in our country, using data obtained during method validation and internal and external quality control.

  13. Estimating abundance in the presence of species uncertainty

    Science.gov (United States)

    Chambert, Thierry A; Hossack, Blake R.; Fishback, LeeAnn; Davenport, Jon M.

    2016-01-01

    1. N-mixture models have become a popular method for estimating abundance of free-ranging animals that are not marked or identified individually. These models have been used on count data for single species that can be identified with certainty. However, co-occurring species often look similar during one or more life stages, making it difficult to assign species for all recorded captures. This uncertainty creates problems for estimating species-specific abundance and it can often limit the life stages about which we can make inference. 2. We present a new extension of N-mixture models that accounts for species uncertainty. In addition to estimating site-specific abundances and detection probabilities, this model allows estimating the probability of correctly assigning species identity. We implement this hierarchical model in a Bayesian framework and provide all code for running the model in BUGS-language programs. 3. We present an application of the model on count data from two sympatric freshwater fishes, the brook stickleback (Culaea inconstans) and the ninespine stickleback (Pungitius pungitius), and illustrate the implementation of covariate effects (habitat characteristics). In addition, we used a simulation study to validate the model and illustrate potential sample size issues. We also compared, for both real and simulated data, estimates provided by our model to those obtained by a simple N-mixture model when captures of unknown species identification were discarded. In the latter case, abundance estimates appeared highly biased and very imprecise, while our new model provided unbiased estimates with higher precision. 4. This extension of the N-mixture model should be useful for a wide variety of studies and taxa, as species uncertainty is a common issue. It should notably help improve investigation of abundance and vital rate characteristics of organisms’ early life stages, which are sometimes more difficult to identify than adults.

  14. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  15. Estimating Storm Discharge and Water Quality Data Uncertainty: A Software Tool for Monitoring and Modeling Applications

    Science.gov (United States)

    Uncertainty inherent in hydrologic and water quality data has numerous economic, societal, and environmental implications; therefore, scientists can no longer ignore measurement uncertainty when collecting and presenting these data. Reporting uncertainty estimates with measured hydrologic and water...

  16. Magnitude Uncertainties Impact Seismic Rate Estimates, Forecasts and Predictability Experiments

    CERN Document Server

    Werner, M J

    2007-01-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) aims to prospectively test time-dependent earthquake probability forecasts on their consistency with observations. To compete, time-dependent seismicity models are calibrated on earthquake catalog data. But catalogs contain much observational uncertainty. We study the impact of magnitude uncertainties on rate estimates in clustering models, on their forecasts and on their evaluation by CSEP's consistency tests. First, we quantify magnitude uncertainties. We find that magnitude uncertainty is more heavy-tailed than a Gaussian, such as a double-sided exponential distribution, with scale parameter ν_c = 0.1–0.3. Second, we study the impact of such noise on the forecasts of a simple clustering model which captures the main ingredients of popular short term models. We prove that the deviations of noisy forecasts from an exact forecast are power law distributed in the tail with exponent α = 1/(a·ν_c), where a is the exponent of the productivity...

  17. Accounting for haplotype phase uncertainty in linkage disequilibrium estimation.

    Science.gov (United States)

    Kulle, B; Frigessi, A; Edvardsen, H; Kristensen, V; Wojnowski, L

    2008-02-01

    The characterization of linkage disequilibrium (LD) is applied in a variety of studies including the identification of molecular determinants of the local recombination rate, the migration and population history of populations, and the role of positive selection in adaptation. LD suffers from the phase uncertainty of the haplotypes used in its calculation, which reflects limitations of the algorithms used for haplotype estimation. We introduce a LD calculation method, which deals with phase uncertainty by weighting all possible haplotype pairs according to their estimated probabilities as evaluated by PHASE. In contrast to the expectation-maximization (EM) algorithm as implemented in the HAPLOVIEW and GENETICS packages, our method considers haplotypes based on the entire genetic information available for the candidate region. We tested the method using simulated and real genotyping data. The results show that, for all practical purposes, the new method is advantageous in comparison with algorithms that calculate LD using only the most probable haplotype or bilocus haplotypes based on the EM algorithm. The new method deals especially well with low LD regions, which contribute strongly to phase uncertainty. Altogether, the method is an attractive alternative to standard LD calculation procedures, including those based on the EM algorithm. We implemented the method in the software suite R, together with an interface to the popular haplotype calculation package PHASE.
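    The sketch below illustrates the weighting idea in a stripped-down form: each individual's possible two-locus haplotype configurations are weighted by their phasing probabilities when accumulating haplotype frequencies, from which D and r² follow. The genotype configurations and probabilities are invented, and this is not the PHASE-based R implementation described in the record.

```python
from collections import defaultdict

# For each (hypothetical) individual: possible two-locus haplotype-pair
# configurations and their phasing probabilities, e.g. as reported by a
# phasing program. Haplotypes are strings over alleles {A,a} x {B,b}.
individuals = [
    [(("AB", "ab"), 0.7), (("Ab", "aB"), 0.3)],   # ambiguous double heterozygote
    [(("AB", "AB"), 1.0)],
    [(("AB", "aB"), 1.0)],
    [(("ab", "ab"), 1.0)],
    [(("AB", "ab"), 0.6), (("Ab", "aB"), 0.4)],
]

# Expected haplotype counts, weighting every configuration by its probability
counts = defaultdict(float)
for configs in individuals:
    for (h1, h2), prob in configs:
        counts[h1] += prob
        counts[h2] += prob

total = sum(counts.values())
freq = {h: c / total for h, c in counts.items()}

# Allele frequencies and the usual LD statistics D and r^2
pA = sum(f for h, f in freq.items() if h[0] == "A")
pB = sum(f for h, f in freq.items() if h[1] == "B")
pAB = freq.get("AB", 0.0)
D = pAB - pA * pB
r2 = D**2 / (pA * (1 - pA) * pB * (1 - pB))
print(f"D = {D:.3f}, r^2 = {r2:.3f}")
```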

  18. A review and new insights on the estimation of the b-value and its uncertainty

    Directory of Open Access Journals (Sweden)

    L. Sandri

    2003-06-01

    Full Text Available The estimation of the b-value of the Gutenberg-Richter Law and its uncertainty is crucial in seismic hazard studies, as well as in verifying theoretical assertions, such as, for example, the universality of the Gutenberg-Richter Law. In spite of the importance of this issue, many scientific papers still adopt formulas that lead to different estimations. The aim of this paper is to review the main concepts relative to the estimation of the b-value and its uncertainty, and to provide some new analytical and numerical insights on the biases introduced by the unavoidable use of binned magnitudes, and by the measurement errors on the magnitude. It is remarked that, although corrections for binned magnitudes were suggested in the past, they are still very often neglected in the estimation of the b-value, implicitly by assuming that the magnitude is a continuous random variable. In particular, we show that: (i) the assumption of continuous magnitude can lead to strong bias in the b-value estimation, and to a significant underestimation of its uncertainty, also for binning of ΔM = 0.1; (ii) a simple correction applied to the continuous formula causes a drastic reduction of both biases; (iii) very simple formulas, until now mostly ignored, provide estimations without significant biases; (iv) the effect on the bias due to the measurement errors is negligible compared to the use of binned magnitudes.
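    The points above can be illustrated with the standard maximum-likelihood b-value estimator: on a synthetic catalogue with binned magnitudes, the continuous-magnitude formula is biased while the ΔM/2 correction recovers the true value, and the Aki and Shi & Bolt formulas give the uncertainty. The catalogue parameters below are synthetic and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic catalogue: continuous magnitudes follow a Gutenberg-Richter law with
# b = 1.0 above a completeness threshold, then get reported in 0.1-unit bins
b_true, dM, n = 1.0, 0.1, 2000
beta = b_true * np.log(10)
m_min_cont = 1.95                                  # continuous threshold
mags = m_min_cont + rng.exponential(1 / beta, n)
mags = np.round(mags / dM) * dM                    # binned (reported) magnitudes

Mc = mags.min()                                    # completeness magnitude as seen by the analyst (2.0)

# Maximum-likelihood b-value (Aki 1965): naive continuous-magnitude formula
b_naive = np.log10(np.e) / (mags.mean() - Mc)
# Correction for binned magnitudes (Utsu): replace Mc by Mc - dM/2
b_corr = np.log10(np.e) / (mags.mean() - (Mc - dM / 2))

# Uncertainty estimates: Aki's sigma_b = b / sqrt(N), and Shi & Bolt (1982)
sigma_aki = b_corr / np.sqrt(n)
sigma_shibolt = 2.30 * b_corr**2 * np.sqrt(((mags - mags.mean())**2).sum() / (n * (n - 1)))

print(f"b without binning correction: {b_naive:.3f}")
print(f"b with binning correction:    {b_corr:.3f}   (true value {b_true})")
print(f"sigma_b (Aki):        {sigma_aki:.3f}")
print(f"sigma_b (Shi & Bolt): {sigma_shibolt:.3f}")
```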

  19. Modeling the uncertainty of estimating forest carbon stocks in China

    Directory of Open Access Journals (Sweden)

    T. X. Yue

    2015-12-01

    Full Text Available Earth surface systems are controlled by a combination of global and local factors, which cannot be understood without accounting for both the local and global components. The system dynamics cannot be recovered from the global or local controls alone. Ground forest inventory is able to accurately estimate forest carbon stocks at sample plots, but these sample plots are too sparse to support the spatial simulation of carbon stocks with required accuracy. Satellite observation is an important source of global information for the simulation of carbon stocks. Satellite remote-sensing can supply spatially continuous information about the surface of forest carbon stocks, which is impossible from ground-based investigations, but their description has considerable uncertainty. In this paper, we validated the Lund-Potsdam-Jena dynamic global vegetation model (LPJ), the Kriging method for spatial interpolation of ground sample plots and a satellite-observation-based approach as well as an approach for fusing the ground sample plots with satellite observations and an assimilation method for incorporating the ground sample plots into LPJ. The validation results indicated that both the data fusion and data assimilation approaches reduced the uncertainty of estimating carbon stocks. The data fusion had the lowest uncertainty by using an existing method for high accuracy surface modeling to fuse the ground sample plots with the satellite observations (HASM-SOA). The estimates produced with HASM-SOA were 26.1 and 28.4 % more accurate than the satellite-based approach and spatial interpolation of the sample plots, respectively. Forest carbon stocks of 7.08 Pg were estimated for China during the period from 2004 to 2008, an increase of 2.24 Pg from 1984 to 2008, using the preferred HASM-SOA method.

  20. Analysing the uncertainty of estimating forest carbon stocks in China

    Science.gov (United States)

    Yue, Tian Xiang; Wang, Yi Fu; Du, Zheng Ping; Zhao, Ming Wei; Li Zhang, Li; Zhao, Na; Lu, Ming; Larocque, Guy R.; Wilson, John P.

    2016-07-01

    Earth surface systems are controlled by a combination of global and local factors, which cannot be understood without accounting for both the local and global components. The system dynamics cannot be recovered from the global or local controls alone. Ground forest inventory is able to accurately estimate forest carbon stocks in sample plots, but these sample plots are too sparse to support the spatial simulation of carbon stocks with required accuracy. Satellite observation is an important source of global information for the simulation of carbon stocks. Satellite remote sensing can supply spatially continuous information about the surface of forest carbon stocks, which is impossible from ground-based investigations, but their description has considerable uncertainty. In this paper, we validated the Kriging method for spatial interpolation of ground sample plots and a satellite-observation-based approach as well as an approach for fusing the ground sample plots with satellite observations. The validation results indicated that the data fusion approach reduced the uncertainty of estimating carbon stocks. The data fusion had the lowest uncertainty by using an existing method for high-accuracy surface modelling to fuse the ground sample plots with the satellite observations (HASM-S). The estimates produced with HASM-S were 26.1 and 28.4 % more accurate than the satellite-based approach and spatial interpolation of the sample plots respectively. Forest carbon stocks of 7.08 Pg were estimated for China during the period from 2004 to 2008, an increase of 2.24 Pg from 1984 to 2008, using the preferred HASM-S method.

  1. Signal inference with unknown response: calibration uncertainty renormalized estimator

    CERN Document Server

    Dorn, Sebastian; Greiner, Maksim; Selig, Marco; Böhm, Vanessa

    2014-01-01

    The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of CURE is starting with an assumed calibration to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify CURE by applying it to a simplistic toy example and compare it against existent self-calibration schemes, Wiener filter solutions, and Markov Chain Monte Carlo sampling. We conclude that the...

  2. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    Science.gov (United States)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  3. Computational methods estimating uncertainties for profile reconstruction in scatterometry

    Science.gov (United States)

    Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.

    2008-04-01

    The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, and top and bottom widths, and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation, minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters depend essentially on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimates obtained from the approximate covariance matrix of the profile parameters close to the optimal solution, and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
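
    The two uncertainty estimates compared at the end of the abstract can be sketched for a generic non-linear least-squares reconstruction; the finite-element diffraction solver is replaced here by a toy forward model, so all functions and numbers are assumptions rather than the authors' implementation.

```python
# Sketch: parameter uncertainties of a nonlinear least-squares reconstruction,
# estimated (a) from the approximate covariance (J^T J)^{-1} at the optimum and
# (b) by Monte Carlo repetition of the fit with perturbed "measurements".
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

def forward(p, x):
    height, width = p                       # stand-ins for grating parameters
    return height * np.exp(-x / width)      # toy "efficiency" model

x = np.linspace(0.1, 5.0, 40)
p_true = np.array([2.0, 1.5])
sigma = 0.02                                # measurement uncertainty
y_obs = forward(p_true, x) + rng.normal(0, sigma, x.size)

def residuals(p, y):
    return (forward(p, x) - y) / sigma      # weighted residuals

fit = least_squares(residuals, x0=[1.0, 1.0], args=(y_obs,))
# (a) covariance from the Jacobian at the optimum
cov = np.linalg.inv(fit.jac.T @ fit.jac)
sd_lin = np.sqrt(np.diag(cov))

# (b) Monte Carlo: repeat the reconstruction with re-perturbed data
samples = []
for _ in range(300):
    y_k = forward(p_true, x) + rng.normal(0, sigma, x.size)
    samples.append(least_squares(residuals, x0=[1.0, 1.0], args=(y_k,)).x)
sd_mc = np.std(np.array(samples), axis=0)

print("sigma(height, width) from (J^T J)^-1 :", sd_lin)
print("sigma(height, width) from Monte Carlo:", sd_mc)
```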

  4. Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.

    Science.gov (United States)

    Proppe, Jonny; Reiher, Markus

    2017-07-11

    One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are in general infeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the ⁵⁷Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s⁻¹ and 0.04-0.05 mm s⁻¹, respectively, the latter being close to the average experimental uncertainty of 0.02 mm s⁻¹. Furthermore, we show that both model parameters and prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r², or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical M
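
    The bootstrap calibration idea can be illustrated with a minimal sketch: the reference set is resampled with replacement, the linear property model is refitted each time, and the spread of the refitted predictions measures the uncertainty contributed by the calibration data. The data are synthetic and only the parameter (not the residual) contribution to the prediction uncertainty is shown.

```python
# Sketch: bootstrapped calibration of a linear property model y = a*x + b
# (e.g. isomer shift vs. contact electron density) and the resulting
# prediction uncertainty for a new compound. Reference data are synthetic.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "reference set" (44 points, as in the abstract)
x_ref = rng.uniform(-2, 2, 44)                            # calculated descriptor
y_ref = 0.35 * x_ref + 0.10 + rng.normal(0, 0.05, 44)     # measured observable

def fit_line(x, y):
    return np.polyfit(x, y, deg=1)                        # returns (slope, intercept)

x_new = 0.8                                               # descriptor of a new compound
boot_pred = []
for _ in range(2000):
    idx = rng.integers(0, x_ref.size, x_ref.size)         # resample with replacement
    a, b = fit_line(x_ref[idx], y_ref[idx])
    boot_pred.append(a * x_new + b)

boot_pred = np.array(boot_pred)
print(f"prediction          : {boot_pred.mean():.3f}")
print(f"prediction std (1s) : {boot_pred.std():.3f}")
print("95% interval        :", np.percentile(boot_pred, [2.5, 97.5]).round(3))
```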

  5. Quantifying uncertainty in NDSHA estimates due to earthquake catalogue

    Science.gov (United States)

    Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano

    2014-05-01

    The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the spatial distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPE), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not treated statistically as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for the magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values for each model; instead, uncertainties are treated by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate

  6. Uncertainty estimation in diffusion MRI using the nonlocal bootstrap.

    Science.gov (United States)

    Yap, Pew-Thian; An, Hongyu; Chen, Yasheng; Shen, Dinggang

    2014-08-01

    In this paper, we propose a new bootstrap scheme, called the nonlocal bootstrap (NLB), for uncertainty estimation. In contrast to the residual bootstrap, which relies on a data model, or the repetition bootstrap, which requires repeated signal measurements, NLB is not restricted by the data structure imposed by a data model and obviates the need for time-consuming multiple acquisitions. NLB hinges on the observation that local imaging information recurs in an image. This self-similarity implies that imaging information coming from spatially distant (nonlocal) regions can be exploited for more effective estimation of statistics of interest. Evaluations using in silico data indicate that NLB produces distribution estimates that are in closer agreement with those generated using Monte Carlo simulations, compared with the conventional residual bootstrap. Evaluations using in vivo data demonstrate that NLB produces results that are in agreement with our knowledge of white matter architecture.

  7. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width of the prediction bands can be questioned, due to the subjective nature of the method. Moreover, the method also gives very useful information about the model and parameter behaviour...
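
    A minimal sketch of the GLUE workflow mentioned above, with a toy linear-reservoir model standing in for the urban drainage model and a Nash-Sutcliffe informal likelihood; the behavioural threshold is a subjective choice, which is exactly the point the authors raise about the width of the prediction bands. All values are illustrative.

```python
# Sketch of GLUE: sample parameter sets, score each with an informal likelihood
# (Nash-Sutcliffe efficiency), keep "behavioural" sets above a threshold, and
# derive likelihood-weighted prediction bands. Model and data are synthetic.
import numpy as np

rng = np.random.default_rng(3)

def linear_reservoir(rain, k):
    """Toy runoff model: storage S' = rain - S/k, discharge Q = S/k."""
    s, q = 0.0, []
    for r in rain:
        s += r - s / k
        q.append(s / k)
    return np.array(q)

rain = rng.gamma(0.3, 4.0, 200)                       # synthetic rainfall series
q_obs = linear_reservoir(rain, k=8.0) + rng.normal(0, 0.1, 200)

# 1) Monte Carlo sampling of the uncertain parameter
n = 2000
k_samples = rng.uniform(2.0, 20.0, n)
sims = np.array([linear_reservoir(rain, k) for k in k_samples])

# 2) Informal likelihood: Nash-Sutcliffe efficiency of each simulation
nse = 1.0 - np.sum((sims - q_obs) ** 2, axis=1) / np.sum((q_obs - q_obs.mean()) ** 2)

# 3) Keep behavioural sets and weight them by their likelihood
behavioural = nse > 0.7                               # subjective threshold
w = nse[behavioural] - 0.7
w /= w.sum()

# 4) Likelihood-weighted 5-95% prediction band at each time step
def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    return np.interp(q, cum, values[order])

band = np.array([[weighted_quantile(sims[behavioural, t], w, q) for q in (0.05, 0.95)]
                 for t in range(len(rain))])
coverage = np.mean((q_obs >= band[:, 0]) & (q_obs <= band[:, 1]))
print(f"behavioural sets: {behavioural.sum()} of {n}, coverage of obs: {coverage:.2f}")
```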

  8. Linear minimax estimation for random vectors with parametric uncertainty

    KAUST Repository

    Bitar, E

    2010-06-01

    In this paper, we take a minimax approach to the problem of computing a worst-case linear mean squared error (MSE) estimate of X given Y , where X and Y are jointly distributed random vectors with parametric uncertainty in their distribution. We consider two uncertainty models, PA and PB. Model PA represents X and Y as jointly Gaussian whose covariance matrix Λ belongs to the convex hull of a set of m known covariance matrices. Model PB characterizes X and Y as jointly distributed according to a Gaussian mixture model with m known zero-mean components, but unknown component weights. We show: (a) the linear minimax estimator computed under model PA is identical to that computed under model PB when the vertices of the uncertain covariance set in PA are the same as the component covariances in model PB, and (b) the problem of computing the linear minimax estimator under either model reduces to a semidefinite program (SDP). We also consider the dynamic situation where x(t) and y(t) evolve according to a discrete-time LTI state space model driven by white noise, the statistics of which is modeled by PA and PB as before. We derive a recursive linear minimax filter for x(t) given y(t).

  9. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    Science.gov (United States)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties that are unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  10. Sensitivity of Process Design due to Uncertainties in Property Estimates

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Jones, Mark Nicholas; Sarup, Bent;

    2012-01-01

    The objective of this paper is to present a systematic methodology for analysing the sensitivity of process design to uncertainties in property estimates. The methodology provides the following results: a) a list of properties of critical importance for the design; b) acceptable levels of accuracy for different thermo-physical property prediction models; and c) design-variable versus property relationships. The application of the methodology is illustrated through a case study of an extractive distillation process and sensitivity analysis of the designs of various unit operations found...

  11. Estimation of Model Uncertainties in Closed-loop Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    This paper describes a method for estimation of parameters or uncertainties in closed-loop systems. The method is based on an application of the dual YJBK (after Youla, Jabr, Bongiorno and Kucera) parameterization of all systems stabilized by a given controller. The dual YJBK transfer function is a measure of the variation in the system seen through the feedback controller. It is shown that it is possible to isolate a certain number of parameters or uncertain blocks in the system exactly. This is obtained by modifying the feedback controller through the YJBK transfer function together with pre...

  12. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
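
    The model-averaging step can be sketched as follows: each variogram model contributes its prediction with a weight derived from an information criterion, and the averaged variance contains both within-model and between-model terms. The model names, criterion values and predictions below are illustrative, not the report's results.

```python
# Sketch of multimodel averaging: alternative models are weighted by posterior
# model probabilities (here derived from information-criterion differences) and
# the prediction variance combines within-model and between-model contributions,
# as in standard Bayesian model averaging. Numbers are illustrative.
import numpy as np

# Per-model prediction of log permeability at one location: (mean, variance,
# information criterion); smaller criterion = better-supported model.
models = {
    "exponential variogram": (-13.2, 0.40, 251.0),
    "spherical variogram":   (-13.6, 0.35, 252.4),
    "power variogram":       (-12.9, 0.55, 254.1),
}

means = np.array([m[0] for m in models.values()])
varis = np.array([m[1] for m in models.values()])
ic = np.array([m[2] for m in models.values()])

# Posterior model probabilities from IC differences: p_k proportional to exp(-dIC_k/2)
delta = ic - ic.min()
p = np.exp(-0.5 * delta)
p /= p.sum()

mean_avg = np.sum(p * means)
# total variance = weighted within-model variance + between-model spread
var_avg = np.sum(p * varis) + np.sum(p * (means - mean_avg) ** 2)

for (name, _), pk in zip(models.items(), p):
    print(f"{name:24s} posterior probability {pk:.2f}")
print(f"model-averaged prediction: {mean_avg:.2f} +/- {np.sqrt(var_avg):.2f} (1 sigma)")
```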

  13. The Asymptotic Standard Errors of Some Estimates of Uncertainty in the Two-Way Contingency Table

    Science.gov (United States)

    Brown, Morton B.

    1975-01-01

    Estimates of conditional uncertainty, contingent uncertainty, and normed modifications of contingent uncertainty have been proposed for the two-way contingency table. The asymptotic standard errors of the estimates are derived. (Author)

  14. ON THE ESTIMATION OF RANDOM UNCERTAINTIES OF STAR FORMATION HISTORIES

    Energy Technology Data Exchange (ETDEWEB)

    Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ, 85734 (United States)

    2013-09-20

    The standard technique for measurement of random uncertainties of star formation histories (SFHs) is the bootstrap Monte Carlo, in which the color-magnitude diagram (CMD) is repeatedly resampled. The variation in SFHs measured from the resampled CMDs is assumed to represent the random uncertainty in the SFH measured from the original data. However, this technique systematically and significantly underestimates the uncertainties for times in which the measured star formation rate is low or zero, leading to overly (and incorrectly) high confidence in that measurement. This study proposes an alternative technique, the Markov Chain Monte Carlo (MCMC), which samples the probability distribution of the parameters used in the original solution to directly estimate confidence intervals. While the most commonly used MCMC algorithms are incapable of adequately sampling a probability distribution that can involve thousands of highly correlated dimensions, the Hybrid Monte Carlo algorithm is shown to be extremely effective and efficient for this particular task. Several implementation details, such as the handling of implicit priors created by parameterization of the SFH, are discussed in detail.

  15. Towards SI-traceable radio occultation excess phase processing with integrated uncertainty estimation for climate applications

    Science.gov (United States)

    Innerkofler, Josef; Pock, Christian; Kirchengast, Gottfried; Schwaerz, Marc; Jaeggi, Adrian; Schwarz, Jakob

    2016-04-01

    cross-check for this purpose, (II) integrated satellite laser-ranging validation of the estimated systematic uncertainty bounds, (III) expanded the Bernese 5.2 software for propagating random uncertainties from the GPS orbit data and LEO navigation tracking data input to the LEO data output. Preliminary excess phase results including propagated uncertainty estimates will also be shown. Except for disturbed space weather conditions, we expect a robust performance at millimeter level for the derived excess phases, which after large-scale processing of the RO data of many years can provide a new SI-traced fundamental climate data record.

  16. Using predictive distributions to estimate uncertainty in classifying landmine targets

    Science.gov (United States)

    Close, Ryan; Watford, Ken; Glenn, Taylor; Gader, Paul; Wilson, Joseph

    2011-06-01

    Typical classification models used for detection of buried landmines estimate a single discriminative output. This classification is based on a model or technique trained with a given set of training data available during system development. Regardless of how well the technique performs when classifying objects that are 'similar' to the training set, most models produce undesirable (and many times unpredictable) responses when presented with object classes different from the training data. This can cause mines or other explosive objects to be misclassified as clutter, or false alarms. Bayesian regression and classification models produce distributions as output, called the predictive distribution. This paper will discuss predictive distributions and their application to characterizing uncertainty in the classification decision, in the context of landmine detection. Specifically, experiments comparing the predictive variance produced by relevance vector machines and Gaussian processes will be described. We demonstrate that predictive variance can be used to determine the uncertainty of the model in classifying an object (i.e., the classifier will know when it is unable to reliably classify an object). The experimental results suggest that degenerate covariance models (such as the relevance vector machine) are not reliable in estimating the predictive variance. This necessitates the use of the Gaussian process in creating the predictive distribution.
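
    The behaviour exploited in the paper, a predictive variance that grows for inputs unlike the training data, can be sketched with a Gaussian process regressor; a 1-D regression example is used here instead of a classifier for brevity, and scikit-learn and the decision threshold are assumptions rather than the authors' setup.

```python
# Sketch: the predictive variance of a Gaussian process grows away from the
# training data, which can be used to flag objects the model cannot classify
# reliably. Synthetic 1-D data; scikit-learn assumed available.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

# Training data ("objects similar to the training set") on [0, 5]
X_train = rng.uniform(0, 5, 30).reshape(-1, 1)
y_train = np.sin(X_train).ravel() + rng.normal(0, 0.1, 30)

gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=1.0) + WhiteKernel(0.01),
                              normalize_y=True)
gp.fit(X_train, y_train)

# Query inside and far outside the training domain
X_query = np.array([[2.5], [4.5], [9.0], [15.0]])
mean, std = gp.predict(X_query, return_std=True)
for x, m, s in zip(X_query.ravel(), mean, std):
    flag = "familiar" if s < 0.3 else "uncertain -> do not trust the class label"
    print(f"x = {x:5.1f}: prediction {m:+.2f} +/- {s:.2f}  ({flag})")
```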

  17. Propagating Uncertainties from Source Model Estimations to Coulomb Stress Changes

    Science.gov (United States)

    Baumann, C.; Jonsson, S.; Woessner, J.

    2009-12-01

    Multiple studies have shown that static stress changes due to permanent fault displacement trigger earthquakes on the causative and on nearby faults. Calculations of static stress changes in previous studies have been based on fault parameters without considering any source model uncertainties, or with crude assumptions about fault model errors based on the different source models available. In this study, we investigate the influence of fault model parameter uncertainties on Coulomb failure stress change (ΔCFS) calculations by propagating the uncertainties from the fault estimation process to the Coulomb failure stress changes. We use 2500 sets of correlated model parameters determined for the June 2000 Mw = 5.8 Kleifarvatn earthquake, southwest Iceland, which were estimated by using a repeated optimization procedure and multiple data sets that had been modified by synthetic noise. The model parameters show that the event was predominantly a right-lateral strike-slip earthquake on a north-south striking fault. The variability of the sets of models represents the posterior probability density distribution for the Kleifarvatn source model. First, we investigate the influence of individual source model parameters on the ΔCFS calculations. We show through a correlation analysis that, for this event, changes in dip, east location, strike, width and in part north location have a stronger impact on the Coulomb failure stress changes than changes in fault length, depth, dip-slip and strike-slip. Second, we find that the accuracy of the Coulomb failure stress changes appears to increase with increasing distance from the fault. The absolute value of the standard deviation decays rapidly with distance within about 5-6 km around the fault, from about 3-3.5 MPa down to a few Pa, implying that the influence of parameter changes decreases with increasing distance. This is underlined by the coefficient of variation CV, defined as the ratio of the standard deviation of the Coulomb stress

  18. Estimation of measurement uncertainty caused by surface gradient for a white light interferometer.

    Science.gov (United States)

    Liu, Mingyu; Cheung, Chi Fai; Ren, Mingjun; Cheng, Ching-Hsiang

    2015-10-10

    Although the scanning white light interferometer can provide measurement results with subnanometer resolution, the measurement accuracy is far from perfect. The surface roughness and surface gradient have significant influence on the measurement uncertainty since the corresponding height differences within a single CCD pixel cannot be resolved. This paper presents an uncertainty estimation method for estimating the measurement uncertainty due to the surface gradient of the workpiece. The method is developed based on the mathematical expression of an uncertainty estimation model which is derived and verified through a series of experiments. The results show that there is a notable similarity between the predicted uncertainty from the uncertainty estimation model and the experimental measurement uncertainty, which demonstrates the effectiveness of the method. With the establishment of the proposed uncertainty estimation method, the uncertainty associated with the measurement result can be determined conveniently.

  19. Costs of sea dikes - regressions and uncertainty estimates

    Science.gov (United States)

    Lenk, Stephan; Rybski, Diego; Heidrich, Oliver; Dawson, Richard J.; Kropp, Jürgen P.

    2017-05-01

    Failure to consider the costs of adaptation strategies can be seen by decision makers as a barrier to implementing coastal protection measures. In order to validate adaptation strategies to sea-level rise in the form of coastal protection, a consistent and repeatable assessment of the costs is necessary. This paper significantly extends current knowledge on cost estimates by developing - and implementing using real coastal dike data - probabilistic functions of dike costs. Data from Canada and the Netherlands are analysed and related to published studies from the US, UK, and Vietnam in order to provide a reproducible estimate of typical sea dike costs and their uncertainty. We plot the costs divided by dike length as a function of height and test four different regression models. Our analysis shows that a linear function without intercept is sufficient to model the costs, i.e. fixed costs and higher-order contributions such as that due to the volume of core fill material are less significant. We also characterise the spread around the regression models which represents an uncertainty stemming from factors beyond dike length and height. Drawing an analogy with project cost overruns, we employ log-normal distributions and calculate that the range between 3x and x/3 contains 95 % of the data, where x represents the corresponding regression value. We compare our estimates with previously published unit costs for other countries. We note that the unit costs depend not only on the country and land use (urban/non-urban) of the sites where the dikes are being constructed but also on characteristics included in the costs, e.g. property acquisition, utility relocation, and project management. This paper gives decision makers an order of magnitude on the protection costs, which can help to remove potential barriers to developing adaptation strategies. Although the focus of this research is sea dikes, our approach is applicable and transferable to other adaptation measures.
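
    A minimal sketch of the cost model described above: a zero-intercept linear regression of unit cost on dike height, with a log-normal multiplicative spread around the regression; the reported 95 % range between 3x and x/3 corresponds to a log-standard deviation of roughly ln(3)/1.96. Data, units and names are synthetic assumptions.

```python
# Sketch: unit cost (cost per km of dike) regressed on dike height with a
# zero-intercept linear model, and the scatter described by a log-normal factor.
import numpy as np

rng = np.random.default_rng(5)

height = rng.uniform(1.0, 8.0, 80)                       # dike height [m]
true_slope = 2.0                                         # million EUR per km per m height
cost_per_km = true_slope * height * np.exp(rng.normal(0, np.log(3) / 1.96, 80))

# Zero-intercept least squares: slope = sum(h*c) / sum(h^2)
slope = np.sum(height * cost_per_km) / np.sum(height ** 2)

# Multiplicative (log-normal) spread of the residual factors
log_resid = np.log(cost_per_km / (slope * height))
s = log_resid.std(ddof=1)
factor95 = np.exp(1.96 * s)

print(f"estimated unit cost : {slope:.2f} million EUR per km per m of height")
print(f"95% of costs within a factor {factor95:.1f} (and 1/{factor95:.1f}) of the regression value")
```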

  20. Improving uncertainty estimation in urban hydrological modeling by statistically describing bias

    Directory of Open Access Journals (Sweden)

    D. Del Giudice

    2013-10-01

    Hydrodynamic models are useful tools for urban water management. Unfortunately, it is still challenging to obtain accurate results and plausible uncertainty estimates when using these models. In particular, with the currently applied statistical techniques, flow predictions are usually overconfident and biased. In this study, we present a flexible and relatively efficient methodology (i) to obtain more reliable hydrological simulations in terms of coverage of validation data by the uncertainty bands and (ii) to separate prediction uncertainty into its components. Our approach acknowledges that urban drainage predictions are biased. This is mostly due to input errors and structural deficits of the model. We address this issue by describing model bias in a Bayesian framework. The bias becomes an autoregressive term additional to white measurement noise, the only error type accounted for in traditional uncertainty analysis. To allow for bigger discrepancies during wet weather, we make the variance of the bias dependent on the input (rainfall) and/or output (runoff) of the system. Specifically, we present a structured approach to select, among five variants, the optimal bias description for a given urban or natural case study. We tested the methodology in a small monitored stormwater system described with a parsimonious model. Our results clearly show that flow simulations are much more reliable when bias is accounted for than when it is neglected. Furthermore, our probabilistic predictions can discriminate between three uncertainty contributions: parametric uncertainty, bias, and measurement errors. In our case study, the best performing bias description is the output-dependent bias using a log-sinh transformation of data and model results. The limitations of the framework presented are some ambiguity due to the subjective choice of priors for bias parameters and its inability to address the causes of model discrepancies. Further research should focus on
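
    The error model described above can be sketched by simulating it forward: an AR(1) bias term with rainfall-dependent variance is added to white measurement noise to produce predictive bands around a deterministic model output. The parameter values below are illustrative assumptions; in the paper they are inferred in a Bayesian framework rather than fixed.

```python
# Sketch of the bias description: total prediction uncertainty is decomposed
# into an autoregressive bias term (with input-dependent variance) plus white
# measurement noise. The error model is only simulated forward here.
import numpy as np

rng = np.random.default_rng(6)

T = 300
rain = rng.gamma(0.2, 5.0, T)                      # driving input [mm/h]
q_model = np.convolve(rain, np.exp(-np.arange(20) / 5.0), mode="full")[:T]  # deterministic model

phi = 0.9                                          # bias autocorrelation
sigma_e = 0.5                                      # measurement noise std
sigma_b = 0.3 + 0.15 * rain                        # input-dependent bias std

def simulate_total_error():
    b = np.zeros(T)
    for t in range(1, T):                          # AR(1) bias with varying variance
        b[t] = phi * b[t - 1] + rng.normal(0, sigma_b[t] * np.sqrt(1 - phi ** 2))
    return b + rng.normal(0, sigma_e, T)           # bias + white measurement noise

errors = np.array([simulate_total_error() for _ in range(500)])
lower = q_model + np.percentile(errors, 5, axis=0)
upper = q_model + np.percentile(errors, 95, axis=0)
print("mean width of the 90% predictive band:", np.mean(upper - lower).round(2))
```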

  1. Uncertainties of the KIKO3D-ATHLET calculations using the Kalinin-3 benchmark (Phase II) data

    Energy Technology Data Exchange (ETDEWEB)

    Panka, Istvan; Hegyi, Gyoergy; Maraczy, Csaba; Kereszturi, Andras [Hungarian Academy of Sciences, Centre for Energy Research, Budapest (Hungary). Reactor Analysis Dept.

    2016-09-15

    The best estimate simulation of three-dimensional phenomena in nuclear reactor cores requires coupled neutron physics and thermal-hydraulics calculations. However, these analyses should be supplemented by a survey of the corresponding uncertainties. In this paper, the uncertainties of the coupled KIKO3D-ATHLET calculations are presented for a VVER-1000 type core using the OECD NEA Kalinin-3 (Phase II) benchmark data, although only the neutronic uncertainties are considered and further simplifications are applied and discussed. Additionally, this study has been performed in conjunction with the OECD NEA UAM benchmark. In the first part of the paper, the uncertainties of the effective multiplication factor, the assembly-wise radial power distribution, the axial power distribution, the rod worth, etc. are presented at steady state. After that, some uncertainties of the transient calculations are discussed for the considered transient, the switch-off of one Main Circulation Pump (MCP).

  2. Uncertainty Estimation in SiGe HBT Small-Signal Modeling

    DEFF Research Database (Denmark)

    Masood, Syed M.; Johansen, Tom Keinicke; Vidkjær, Jens;

    2005-01-01

    An uncertainty estimation and sensitivity analysis is performed on multi-step de-embedding for SiGe HBT small-signal modeling. The uncertainty estimation, in combination with an uncertainty model for the deviation in measured S-parameters, quantifies the possible error value in the de-embedded two...

  3. Exploring uncertainties in probabilistic seismic hazard estimates for Quito

    Science.gov (United States)

    Beauval, Celine; Yepes, Hugo; Audin, Laurence; Alvarado, Alexandra; Nocquet, Jean-Mathieu

    2016-04-01

    In the present study, probabilistic seismic hazard estimates at a 475-year return period for Quito, capital city of Ecuador, show that the crustal host zone is the only source zone that determines the city's hazard levels for such a return period. Therefore, the emphasis is put on identifying the uncertainties characterizing the host zone, i.e. uncertainties in the recurrence of earthquakes expected in the zone and uncertainties in the ground motions that these earthquakes may produce. As the number of local strong ground motions is still scant, ground-motion prediction equations are imported from other regions. Exploring recurrence models for the host zone based on different observations and assumptions, and including three GMPE candidates (Akkar and Bommer 2010, Zhao et al. 2006, Boore and Atkinson 2008), we obtain a significant variability in the estimated acceleration at 475 years (site coordinates: -78.51 in longitude and -0.2 in latitude, VS30 760 m/s): 1) Considering historical earthquake catalogs, and relying on frequency-magnitude distributions where rates for magnitudes 6-7 are extrapolated from statistics of magnitudes 4.5-6.0 mostly in the 20th century, the PGA varies between 0.28g and 0.55g with a mean value around 0.4g. The results show that both the uncertainties in the GMPE choice and in the seismicity model are responsible for this variability. 2) Considering slip rates inferred from geodetic measurements across the Quito fault system, and assuming that most of the deformation occurs seismically (a conservative hypothesis), leads to a much greater range of accelerations, 0.43 to 0.73g for the PGA (with a mean of 0.55g). 3) Considering slip rates inferred from geodetic measurements, and assuming that only 50% of the deformation is released in earthquakes (partially locked fault, a model based on 15 years of GPS data), leads to a range of accelerations of 0.32g to 0.58g for the PGA, with a mean of 0.42g. These accelerations are in agreement

  4. Accounting for genotype uncertainty in the estimation of allele frequencies in autopolyploids.

    Science.gov (United States)

    Blischak, Paul D; Kubatko, Laura S; Wolfe, Andrea D

    2016-05-01

    Despite the increasing opportunity to collect large-scale data sets for population genomic analyses, the use of high-throughput sequencing to study populations of polyploids has seen little application. This is due in large part to problems associated with determining allele copy number in the genotypes of polyploid individuals (allelic dosage uncertainty-ADU), which complicates the calculation of important quantities such as allele frequencies. Here, we describe a statistical model to estimate biallelic SNP frequencies in a population of autopolyploids using high-throughput sequencing data in the form of read counts. We bridge the gap from data collection (using restriction enzyme based techniques [e.g. GBS, RADseq]) to allele frequency estimation in a unified inferential framework using a hierarchical Bayesian model to sum over genotype uncertainty. Simulated data sets were generated under various conditions for tetraploid, hexaploid and octoploid populations to evaluate the model's performance and to help guide the collection of empirical data. We also provide an implementation of our model in the R package polyfreqs and demonstrate its use with two example analyses that investigate (i) levels of expected and observed heterozygosity and (ii) model adequacy. Our simulations show that the number of individuals sampled from a population has a greater impact on estimation error than sequencing coverage. The example analyses also show that our model and software can be used to make inferences beyond the estimation of allele frequencies for autopolyploids by providing assessments of model adequacy and estimates of heterozygosity.
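
    The central idea, marginalising over the unknown genotype when estimating the allele frequency, can be sketched with a grid-based posterior for a single tetraploid SNP; this is a simplified analogue of the hierarchical model implemented in polyfreqs, with an assumed sequencing error rate and synthetic read counts.

```python
# Sketch of allele-frequency estimation that sums over genotype uncertainty:
# each individual's genotype g (copies of the reference allele, 0..ploidy) is
# unobserved; read counts are binomial and the genotype is marginalised under
# binomial genotype frequencies given the population allele frequency p.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(7)
ploidy, err = 4, 0.01

# Simulate read data for 30 individuals at one SNP (true allele frequency 0.3)
p_true = 0.3
genotypes = rng.binomial(ploidy, p_true, 30)            # allele copies per individual
depth = rng.poisson(12, 30) + 1                         # total reads per individual
p_read = genotypes / ploidy * (1 - 2 * err) + err       # prob. a read shows the allele
ref_reads = rng.binomial(depth, p_read)

# Grid posterior for the population allele frequency p (flat prior)
p_grid = np.linspace(0.001, 0.999, 999)
log_post = np.zeros_like(p_grid)
for reads, d in zip(ref_reads, depth):
    # P(reads | p) = sum_g P(reads | g) * P(g | p)
    g = np.arange(ploidy + 1)
    read_lik = binom.pmf(reads, d, g / ploidy * (1 - 2 * err) + err)     # shape (5,)
    geno_prior = binom.pmf(g[:, None], ploidy, p_grid[None, :])          # shape (5, 999)
    log_post += np.log(read_lik @ geno_prior)
post = np.exp(log_post - log_post.max())
post /= post.sum()

mean_p = np.sum(p_grid * post)
ci = p_grid[np.searchsorted(np.cumsum(post), [0.025, 0.975])]
print(f"posterior mean allele frequency: {mean_p:.3f}, 95% CI: [{ci[0]:.3f}, {ci[1]:.3f}]")
```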

  5. Bayesian Assessment of the Uncertainties of Estimates of a Conceptual Rainfall-Runoff Model Parameters

    Science.gov (United States)

    Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.

    2014-12-01

    This paper evaluates the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin in southeastern Brazil, was selected for the study. We used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic one [Normal likelihood, r ~ N(0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach is adequate for the proposed objectives, reinforcing the importance of assessing the uncertainties associated with hydrological modelling.

  6. Quantitative estimation of sampling uncertainties for mycotoxins in cereal shipments.

    Science.gov (United States)

    Bourgeois, F S; Lyman, G J

    2012-01-01

    the sampling variance. GMOs and mycotoxins appear to have a highly heterogeneous distribution in a cargo depending on how the ship was loaded (the grain may have come from more than one terminal and set of storage silos) and mycotoxin growth may have occurred in transit. This paper examines a statistical model based on random contamination that can be used to calculate the sampling uncertainty arising from primary sampling of a cargo; it deals with what is thought to be a worst-case scenario. The determination of the sampling variance is treated both analytically and by Monte Carlo simulation. The latter approach provides the entire sampling distribution and not just the sampling variance. The sampling procedure is based on rules provided by the Canadian Grain Commission (CGC) and the levels of contamination considered are those relating to allowable levels of ochratoxin A (OTA) in wheat. The results of the calculations indicate that at a loading rate of 1000 tonnes h⁻¹, primary sample increment masses of 10.6 kg, a 2000-tonne lot and a primary composite sample mass of 1900 kg, the relative standard deviation (RSD) is about 1.05 (105%) and the distribution of the mycotoxin (MT) level in the primary composite samples is highly skewed. This result applies to a mean MT level of 2 ng g⁻¹. The rate of false-negative results under these conditions is estimated to be 16.2%. The corresponding contamination is based on initial average concentrations of MT of 4000 ng g⁻¹ within average spherical volumes of 0.3 m diameter, which are then diluted by a factor of 2 each time they pass through a handling stage; four stages of handling are assumed. The Monte Carlo calculations allow for variation in the initial volume of the MT-bearing grain, the average concentration and the dilution factor. The Monte Carlo studies seek to show the effect of variation in the sampling frequency while maintaining a primary composite sample mass of 1900 kg. The overall results are presented in

  7. On the Uncertainties of Stellar Mass Estimates via Colour Measurements

    CERN Document Server

    Roediger, Joel C

    2015-01-01

    Mass-to-light versus colour relations (MLCRs), derived from stellar population synthesis models, are widely used to estimate galaxy stellar masses (M*) yet a detailed investigation of their inherent biases and limitations is still lacking. We quantify several potential sources of uncertainty, using optical and near-infrared (NIR) photometry for a representative sample of nearby galaxies from the Virgo cluster. Our method for combining multi-band photometry with MLCRs yields robust stellar masses, while errors in M* decrease as more bands are simultaneously considered. The prior assumptions in one's stellar population modelling dominate the error budget, creating a colour-dependent bias of up to 0.6 dex if NIR fluxes are used (0.3 dex otherwise). This matches the systematic errors associated with the method of spectral energy distribution (SED) fitting, indicating that MLCRs do not suffer from much additional bias. Moreover, MLCRs and SED fitting yield similar degrees of random error (~0.1-0.14 dex)...

  8. Reducing the uncertainty in wind speed estimations near the coast

    Science.gov (United States)

    Floors, Rogier; Hahmann, Andrea N.; Karagali, Ioanna; Vasiljevic, Nikola; Lea, Guillaume; Simon, Elliot; Courtney, Michael; Ahsbahs, Tobias; Bay Hasager, Charlotte; Badger, Merete; Peña, Alfredo

    2016-04-01

    Many countries plan to meet renewable energy targets by installing near-shore wind farms, because of the high offshore wind speeds and good grid connectivity. Because of the strong relation between mean wind speed and annual energy production, there is an interest in reducing the uncertainty of wind speed estimates in these coastal areas. The RUNE project aims to provide recommendations on the use of lidar systems and mesoscale model results to find the most effective (cost vs. accuracy) solution for estimating near-shore wind resources. Here we show some first results of the RUNE measuring campaign at the west coast of Jutland that started in December 2015. In this campaign, a long-range WindScanner system (a multi-lidar instrumentation) was used simultaneously with measurements from several vertical profiling lidars, a meteorological mast and an offshore buoy. These measurements result in a detailed picture of the flow in a transect across the coastline from approximately 5 km offshore up to 3 km inland. The wind speed obtained from a lidar in sector-scanning mode and from two time-synchronized lidars that were separated horizontally but focused on the same point will be compared. Furthermore, it will be shown how the resulting horizontal wind speed transects compare with the wind speed measurements from the vertical profiling lidars and the meteorological mast. The behaviour of the coastal gradient in wind speed in this area is discussed. Satellite data for the wind over the RUNE measurement area were also collected. Synthetic Aperture Radar (SAR) winds from Sentinel-1 and TerraSAR-X were retrieved at different spatial resolutions. Advanced Scatterometer (ASCAT) swath winds were obtained from both the METOP-A and B platforms. These were used for direct comparisons with the lidar in sector-scanning mode.

  9. Evaluating Prognostics Performance for Algorithms Incorporating Uncertainty Estimates

    Data.gov (United States)

    National Aeronautics and Space Administration — Uncertainty Representation and Management (URM) are an integral part of the prognostic system development. As capabilities of prediction algorithms evolve, research...

  10. Residual uncertainty estimation using instance-based learning with applications to hydrologic forecasting

    Science.gov (United States)

    Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.

    2017-08-01

    A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecast and observation. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and Brue. Forecasts in retrospect were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
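
    The kNN resampling post-processor can be sketched as follows: historical forecast errors are stored together with the conditions under which they occurred, and the empirical quantiles of the errors of the k most similar instances define the predictive interval for a new forecast. Predictors, error structure and the choice of k below are illustrative assumptions.

```python
# Sketch of kNN-based residual uncertainty estimation on synthetic data: the
# errors of the k nearest historical hydrometeorological conditions provide the
# uncertainty interval around a deterministic forecast.
import numpy as np

rng = np.random.default_rng(8)

# Historical archive: predictors (e.g. forecast flow, recent rainfall) and the
# corresponding forecast errors (observation minus deterministic forecast)
n_hist = 2000
predictors = np.column_stack([rng.uniform(0, 100, n_hist),     # forecast flow
                              rng.gamma(2.0, 3.0, n_hist)])    # rainfall index
errors = rng.normal(0, 0.05 * predictors[:, 0] + 1.0)          # heteroscedastic errors

def knn_interval(x_new, k=100, quantiles=(5, 95)):
    """Empirical error quantiles of the k nearest historical instances."""
    z = (predictors - predictors.mean(0)) / predictors.std(0)   # scale predictors
    z_new = (x_new - predictors.mean(0)) / predictors.std(0)
    idx = np.argsort(np.linalg.norm(z - z_new, axis=1))[:k]
    return np.percentile(errors[idx], quantiles)

forecast_flow, rain_index = 80.0, 6.0
lo, hi = knn_interval(np.array([forecast_flow, rain_index]))
print(f"deterministic forecast : {forecast_flow:.1f} m3/s")
print(f"90% predictive interval: [{forecast_flow + lo:.1f}, {forecast_flow + hi:.1f}] m3/s")
```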

  11. Barotropic Mechanisms of Derivative-based Uncertainty Propagation in Drake Passage Transport Estimation

    Science.gov (United States)

    Kalmikov, A.; Heimbach, P.

    2013-12-01

    We apply derivative-based uncertainty quantification (UQ) and sensitivity methods to the estimation of Drake Passage transport in a global barotropic configuration of the MIT ocean general circulation model (MITgcm). Sensitivity and uncertainty fields are evaluated via first and second derivative codes of the MITgcm, generated via algorithmic differentiation (AD). Observation uncertainties are projected to uncertainties in the control variables by inversion of the Hessian of the nonlinear least-squares misfit function. Only data-supported components of Hessian information are retained through elimination of the unconstrained uncertainty nullspace. The assimilated observation uncertainty is combined with prior control variable uncertainties to reduce their posterior uncertainty. The spatial patterns of posterior uncertainty reduction and their temporal evolution are explained in terms of barotropic dynamics. Global uncertainty teleconnection mechanisms are identified as barotropic uncertainty waves. Uncertainty coupling across different control fields is demonstrated by assimilation of sea surface height uncertainty. A second step in our UQ scheme consists in propagating prior and posterior uncertainties of the model controls onto model output variables of interest, here Drake Passage transport. Forward uncertainty propagation amounts to matrix transformation of the uncertainty covariances via the model Jacobian and its adjoint. Sources of uncertainties of the transport are revealed through analysis of the adjoint wave dynamics in the model. These adjoint (reversed) mechanisms are associated with the evolution of sensitivity fields and our method formally extends sensitivity analysis to uncertainty quantification. Inverse uncertainty propagation mechanisms can be linked to adjoint dynamics in a similar manner. The posterior correlations of controls are found to dominate the reduction of the transport uncertainty compared to the marginal uncertainty reduction of the
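
    The two steps of the scheme, inverting observation uncertainty onto the controls via the misfit Hessian and then propagating control uncertainty onto a scalar quantity of interest via the Jacobian, can be sketched in a linear-Gaussian setting; the random matrices below are stand-ins for the adjoint/AD-generated operators of the MITgcm.

```python
# Sketch of derivative-based uncertainty quantification in a linear-Gaussian setting:
# (1) posterior control covariance from the misfit Hessian combined with the prior;
# (2) forward propagation of prior and posterior covariances onto a scalar QoI
#     (e.g. a transport) via the gradient/Jacobian.
import numpy as np

rng = np.random.default_rng(9)
n_ctrl, n_obs = 50, 30

P_prior = 1.0 * np.eye(n_ctrl)                            # prior control covariance
H = rng.normal(size=(n_obs, n_ctrl)) / np.sqrt(n_ctrl)    # observation operator (Jacobian)
R = 0.1 * np.eye(n_obs)                                   # observation error covariance
g = rng.normal(size=n_ctrl) / np.sqrt(n_ctrl)             # gradient of QoI w.r.t. controls

# (1) posterior covariance: inverse of (misfit Hessian + prior inverse)
hessian = H.T @ np.linalg.inv(R) @ H
P_post = np.linalg.inv(hessian + np.linalg.inv(P_prior))

# (2) forward propagation onto the scalar QoI: var = g^T P g
var_prior = g @ P_prior @ g
var_post = g @ P_post @ g
print(f"QoI std prior    : {np.sqrt(var_prior):.3f}")
print(f"QoI std posterior: {np.sqrt(var_post):.3f}  "
      f"({100 * (1 - np.sqrt(var_post / var_prior)):.0f}% uncertainty reduction)")
```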

  12. Estimating the magnitude of prediction uncertainties for the APLE model

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  13. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However, the uncertainty of the parameters ... by a simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore...
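
    The simulation study can be sketched as follows: the sampled response of a lightly damped single-degree-of-freedom system is generated as an AR(2) process, the AR coefficients are re-estimated by least squares, the poles are converted back to an eigenfrequency and damping ratio, and the scatter over many noise realisations measures the statistical uncertainty. Values and the fitting method below are illustrative, not the paper's.

```python
# Sketch: eigenfrequency and damping ratio of a lightly damped SDOF system
# identified from an AR(2) model of its sampled random response, with the
# statistical uncertainty quantified over repeated noise realisations.
import numpy as np

rng = np.random.default_rng(10)

fn, zeta, dt, n = 2.0, 0.01, 0.05, 2000            # natural freq [Hz], damping, sampling
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta ** 2)
# AR(2) coefficients of the sampled SDOF system driven by discrete white noise
a1 = 2 * np.exp(-zeta * wn * dt) * np.cos(wd * dt)
a2 = -np.exp(-2 * zeta * wn * dt)

def identify(y):
    """Least-squares AR(2) fit and conversion of the pole to (fn, zeta)."""
    A = np.column_stack([y[1:-1], y[:-2]])
    b1, b2 = np.linalg.lstsq(A, y[2:], rcond=None)[0]
    pole = np.roots([1.0, -b1, -b2])[0]             # one of the complex conjugate poles
    lam = np.log(pole) / dt                         # continuous-time eigenvalue
    return abs(lam) / (2 * np.pi), -lam.real / abs(lam)

estimates = []
for _ in range(200):                                # Monte Carlo over noise realisations
    e = rng.normal(0, 1, n)
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + e[t]
    estimates.append(identify(y))

fn_hat, zeta_hat = np.array(estimates).T
print(f"fn   : {fn_hat.mean():.3f} Hz  (std {fn_hat.std():.4f})")
print(f"zeta : {zeta_hat.mean():.4f}   (std {zeta_hat.std():.4f})")
```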

  14. Improving uncertainty estimation in urban hydrological modeling by statistically describing bias

    Directory of Open Access Journals (Sweden)

    D. Del Giudice

    2013-04-01

    Hydrodynamic models are useful tools for urban water management. Unfortunately, it is still challenging to obtain accurate results and plausible uncertainty estimates when using these models. In particular, with the currently applied statistical techniques, flow predictions are usually overconfident and biased. In this study, we present a flexible and computationally efficient methodology (i) to obtain more reliable hydrological simulations in terms of coverage of validation data by the uncertainty bands and (ii) to separate prediction uncertainty into its components. Our approach acknowledges that urban drainage predictions are biased. This is mostly due to input errors and structural deficits of the model. We address this issue by describing model bias in a Bayesian framework. The bias becomes an autoregressive term additional to white measurement noise, the only error type accounted for in traditional uncertainty analysis in urban hydrology. To allow for bigger discrepancies during wet weather, we make the variance of the bias dependent on the input (rainfall) and/or output (runoff) of the system. Specifically, we present a structured approach to select, among five variants, the optimal bias description for a given urban or natural case study. We tested the methodology in a small monitored stormwater system described by means of a parsimonious model. Our results clearly show that flow simulations are much more reliable when bias is accounted for than when it is neglected. Furthermore, our probabilistic predictions can discriminate between three uncertainty contributions: parametric uncertainty, bias (due to input and structural errors), and measurement errors. In our case study, the best performing bias description was the output-dependent bias using a log-sinh transformation of data and model results. The limitations of the framework presented are some ambiguity due to the subjective choice of priors for bias parameters and its inability to directly

  15. Evaluation of uncertainty in field soil moisture estimations by cosmic-ray neutron sensing

    Science.gov (United States)

    Scheiffele, Lena Maria; Baroni, Gabriele; Schrön, Martin; Ingwersen, Joachim; Oswald, Sascha E.

    2017-04-01

    Cosmic-ray neutron sensing (CRNS) has developed into a valuable, indirect and non-invasive method to estimate soil moisture at the field scale. It provides continuous temporal data (hours to days), a relatively large measurement depth (10-70 cm), and intermediate spatial scale measurements (hundreds of meters), thereby overcoming some of the limitations of point measurements (e.g., TDR/FDR) and of remote sensing products. All these characteristics make CRNS a favorable approach for soil moisture estimation, especially for applications in cropped fields and agricultural water management. Various studies compare CRNS measurements to soil sensor networks and show good agreement. However, CRNS is sensitive to further characteristics of the land surface, e.g. additional hydrogen pools, soil bulk density, and biomass. Prior to calibration, the standard atmospheric corrections account for the effects of air pressure, humidity and variations in incoming neutrons. In addition, the standard calibration approach was further extended to account for hydrogen in lattice water and soil organic material. Some corrections were also proposed to account for water in biomass. Moreover, the sensitivity of the probe was found to decrease with distance, and a weighting procedure for the calibration datasets was introduced to account for the sensor's radial sensitivity. On the one hand, all the mentioned corrections were shown to improve the accuracy of the estimated soil moisture values. On the other hand, they require substantial additional monitoring effort and could inherently contribute to the overall uncertainty of the CRNS product. In this study we aim (i) to quantify the uncertainty in the field soil moisture estimated by CRNS and (ii) to understand the role of the different sources of uncertainty. To this end, two experimental sites in Germany were equipped with a CRNS probe and compared to values of a soil moisture network. The agricultural fields were cropped with winter
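
    A sketch of the final conversion step and of one of the uncertainty sources: a Desilets-type calibration function (the shape commonly quoted in the CRNS literature; the constants, N0 and bulk density below are assumptions, not values from this study) maps corrected neutron count rates to soil moisture, and the Poisson counting uncertainty of the neutron counts is propagated through it.

```python
# Sketch: converting corrected neutron count rates to volumetric soil moisture
# with a Desilets-type calibration function and propagating the Poisson counting
# uncertainty into the moisture estimate (first-order propagation).
import numpy as np

a0, a1, a2 = 0.0808, 0.372, 0.115       # standard shape parameters (assumed)
bulk_density = 1.4                      # g/cm^3, from soil sampling (assumed)
N0 = 1500.0                             # site calibration parameter [counts/h] (assumed)

def theta(N):
    """Volumetric soil moisture [m3/m3] from corrected neutron count rate N."""
    return (a0 / (N / N0 - a1) - a2) * bulk_density

def theta_uncertainty(N, integration_hours=6.0):
    """Propagate Poisson counting noise: sigma_N = sqrt(N*T)/T for T hours."""
    sigma_N = np.sqrt(N * integration_hours) / integration_hours
    dtheta_dN = theta(N + 0.5) - theta(N - 0.5)        # numerical derivative per count
    return abs(dtheta_dN) * sigma_N

for N in (900.0, 1100.0, 1300.0):       # wetter soils give lower count rates
    print(f"N = {N:6.0f} cph -> theta = {theta(N):.3f} "
          f"+/- {theta_uncertainty(N):.3f} m3/m3")
```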

  16. Estimation of measurement uncertainties in X-ray computed tomography metrology using the substitution method

    DEFF Research Database (Denmark)

    Müller, Pavel; Hiller, Jochen; Dai, Y.

    2014-01-01

    This paper presents the application of the substitution method for the estimation of measurement uncertainties using calibrated workpieces in X-ray computed tomography (CT) metrology. We have shown that this well-accepted method for uncertainty estimation using tactile coordinate measuring...

  17. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    Energy Technology Data Exchange (ETDEWEB)

    Bruschewski, Martin; Schiffer, Heinz-Peter [Technische Universitaet Darmstadt, Institute of Gas Turbines and Aerospace Propulsion, Darmstadt (Germany); Freudenhammer, Daniel [Technische Universitaet Darmstadt, Institute of Fluid Mechanics and Aerodynamics, Center of Smart Interfaces, Darmstadt (Germany); Buchenberg, Waltraud B. [University Medical Center Freiburg, Medical Physics, Department of Radiology, Freiburg (Germany); Grundmann, Sven [University of Rostock, Institute of Fluid Mechanics, Rostock (Germany)

    2016-05-15

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75% is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented. (orig.)

  18. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders

    1990-01-01

    In this paper the uncertainties of identified modal parameters such as eidenfrequencies and damping ratios are assed. From the measured response of dynamic excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the parameters...... by simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been choosen as system identification method. It is concluded that both the sampling interval and number of sampled points may play a significant role with respect to the statistical errors. Furthermore...

  19. A Method to Estimate Uncertainty in Radiometric Measurement Using the Guide to the Expression of Uncertainty in Measurement (GUM) Method; NREL (National Renewable Energy Laboratory)

    Energy Technology Data Exchange (ETDEWEB)

    Habte, A.; Sengupta, M.; Reda, I.

    2015-03-01

    Radiometric data with known and traceable uncertainty is essential for climate change studies to better understand cloud radiation interactions and the earth radiation budget. Further, adopting a known and traceable method of estimating uncertainty with respect to SI ensures that the uncertainty quoted for radiometric measurements can be compared based on documented methods of derivation. Therefore, statements about the overall measurement uncertainty can only be made on an individual basis, taking all relevant factors into account. This poster provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements from radiometers. The approach follows the Guide to the Expression of Uncertainty in Measurement (GUM).
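
    The GUM procedure can be sketched for a simple radiometric measurand: a measurement function for the responsivity, standard uncertainties for its inputs, numerically evaluated sensitivity coefficients, a combined standard uncertainty, and an expanded uncertainty with coverage factor k = 2. All input values below are illustrative and not NREL's.

```python
# Sketch of the GUM combination: the pyranometer responsivity R = (V - V_offset)/E
# is the measurement function; input standard uncertainties are combined via
# numerically evaluated sensitivity coefficients, then expanded with k = 2.
import numpy as np

def responsivity(V, V_offset, E):
    """Measurement function: responsivity in uV per (W/m^2)."""
    return (V - V_offset) / E

# Best estimates and standard uncertainties of the input quantities (illustrative)
inputs = {"V": (8000.0, 5.0),          # thermopile voltage [uV]
          "V_offset": (10.0, 2.0),     # zero offset [uV]
          "E": (1000.0, 4.0)}          # reference irradiance [W/m^2]

best = {k: v[0] for k, v in inputs.items()}
R = responsivity(**best)

# Combined standard uncertainty: u_c^2 = sum_i (df/dx_i)^2 * u(x_i)^2
uc2 = 0.0
for name, (value, u) in inputs.items():
    step = 1e-3 * max(abs(value), 1.0)                  # small step for the derivative
    shifted = dict(best)
    shifted[name] = value + step
    ci = (responsivity(**shifted) - R) / step           # sensitivity coefficient
    uc2 += (ci * u) ** 2

uc = np.sqrt(uc2)
k = 2.0                                                 # coverage factor (~95% coverage)
print(f"R = {R:.4f} uV/(W m-2), u_c = {uc:.4f}, U(k=2) = {k * uc:.4f}")
```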

  20. Uncertainty related to Environmental Data and Estimated Extreme Events

    DEFF Research Database (Denmark)

    Burcharth, H. F.

    The design loads on rubble mound breakwaters are almost entirely determined by the environmental conditions, i.e. sea state, water levels, sea bed characteristics, etc. It is the objective of sub-group B to identify the most important environmental parameters and evaluate the related uncertainties...

  1. Comparison of two different methods for the uncertainty estimation of circle diameter measurements using an optical coordinate measuring machine

    DEFF Research Database (Denmark)

    Morace, Renata Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2005-01-01

    This paper deals with the uncertainty estimation of measurements performed on optical coordinate measuring machines (CMMs). Two different methods were used to assess the uncertainty of circle diameter measurements using an optical CMM: the sensitivity analysis developing an uncertainty budget...

  2. Uncertainty estimation of the mass discharge from a contaminated site using a fully Bayesian framework

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, W.; Binning, Philip John

    2010-01-01

    for quantifying the uncertainty in the mass discharge across a multilevel control plane. The method is based on geostatistical inverse modelling and accounts for i) conceptual model uncertainty through multiple conceptual models and Bayesian model averaging, ii) heterogeneity through Bayesian geostatistics...... with an uncertain geostatistical model and iii) measurement uncertainty. The method is tested on a TCE contaminated site for which four different conceptual models were set up. The mass discharge and the associated uncertainty are hereby determined. It is discussed which of the conceptual models is most likely...

  3. Uncertainties in Steric Sea Level Change Estimation During the Satellite Altimeter Era: Concepts and Practices

    Science.gov (United States)

    MacIntosh, C. R.; Merchant, C. J.; von Schuckmann, K.

    2016-10-01

    This article presents a review of current practice in estimating steric sea level change, focussed on the treatment of uncertainty. Steric sea level change is the contribution to the change in sea level arising from the dependence of density on temperature and salinity. It is a significant component of sea level rise and a reflection of changing ocean heat content. However, tracking these steric changes still remains a significant challenge for the scientific community. We review the importance of understanding the uncertainty in estimates of steric sea level change. Relevant concepts of uncertainty are discussed and illustrated with the example of observational uncertainty propagation from a single profile of temperature and salinity measurements to steric height. We summarise and discuss the recent literature on methodologies and techniques used to estimate steric sea level in the context of the treatment of uncertainty. Our conclusions are that progress in quantifying steric sea level uncertainty will benefit from: greater clarity and transparency in published discussions of uncertainty, including exploitation of international standards for quantifying and expressing uncertainty in measurement; and the development of community "recipes" for quantifying the error covariances in observations and from sparse sampling and for estimating and propagating uncertainty across spatio-temporal scales.

  4. Uncertainties in Steric Sea Level Change Estimation During the Satellite Altimeter Era: Concepts and Practices

    Science.gov (United States)

    MacIntosh, C. R.; Merchant, C. J.; von Schuckmann, K.

    2017-01-01

    This article presents a review of current practice in estimating steric sea level change, focussed on the treatment of uncertainty. Steric sea level change is the contribution to the change in sea level arising from the dependence of density on temperature and salinity. It is a significant component of sea level rise and a reflection of changing ocean heat content. However, tracking these steric changes still remains a significant challenge for the scientific community. We review the importance of understanding the uncertainty in estimates of steric sea level change. Relevant concepts of uncertainty are discussed and illustrated with the example of observational uncertainty propagation from a single profile of temperature and salinity measurements to steric height. We summarise and discuss the recent literature on methodologies and techniques used to estimate steric sea level in the context of the treatment of uncertainty. Our conclusions are that progress in quantifying steric sea level uncertainty will benefit from: greater clarity and transparency in published discussions of uncertainty, including exploitation of international standards for quantifying and expressing uncertainty in measurement; and the development of community "recipes" for quantifying the error covariances in observations and from sparse sampling and for estimating and propagating uncertainty across spatio-temporal scales.

  5. Empirical versus modelling approaches to the estimation of measurement uncertainty caused by primary sampling.

    Science.gov (United States)

    Lyn, Jennifer A; Ramsey, Michael H; Damant, Andrew P; Wood, Roger

    2007-12-01

    Measurement uncertainty is a vital issue within analytical science. There are strong arguments that primary sampling should be considered the first and perhaps the most influential step in the measurement process. Increasingly, analytical laboratories are required to report measurement results to clients together with estimates of the uncertainty. Furthermore, these estimates can be used when pursuing regulation enforcement to decide whether a measured analyte concentration is above a threshold value. With its recognised importance in analytical measurement, the question arises of 'what is the most appropriate method to estimate the measurement uncertainty?'. Two broad methods for uncertainty estimation are identified, the modelling method and the empirical method. In modelling, the estimation of uncertainty involves the identification, quantification and summation (as variances) of each potential source of uncertainty. This approach has been applied to purely analytical systems, but becomes increasingly problematic in identifying all of such sources when it is applied to primary sampling. Applications of this methodology to sampling often utilise long-established theoretical models of sampling and adopt the assumption that a 'correct' sampling protocol will ensure a representative sample. The empirical approach to uncertainty estimation involves replicated measurements from either inter-organisational trials and/or internal method validation and quality control. A more simple method involves duplicating sampling and analysis, by one organisation, for a small proportion of the total number of samples. This has proven to be a suitable alternative to these often expensive and time-consuming trials, in routine surveillance and one-off surveys, especially where heterogeneity is the main source of uncertainty. A case study of aflatoxins in pistachio nuts is used to broadly demonstrate the strengths and weakness of the two methods of uncertainty estimation. The estimate
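
    The empirical "duplicate method" mentioned above can be illustrated with a minimal sketch: duplicated samples are used to estimate the combined sampling-plus-analysis standard deviation from paired differences. The concentration values below are invented, and the simple paired-difference estimate stands in for the robust ANOVA normally applied in practice.

```python
import numpy as np

# Hypothetical duplicate-method data: for each sampling target, two field
# duplicates (columns); rows are targets. Values are illustrative only.
dups = np.array([
    [4.1, 5.0],
    [3.2, 2.6],
    [7.8, 6.9],
    [5.5, 5.1],
    [2.9, 3.8],
])

# Paired-difference estimate of the combined sampling + analysis standard
# deviation: var(d) = 2 * s_meas^2 for independent duplicates.
diff = dups[:, 0] - dups[:, 1]
s_meas = np.sqrt(np.mean(diff**2) / 2.0)

mean_conc = dups.mean()
print(f"measurement (sampling + analysis) std dev: {s_meas:.2f}")
print(f"relative expanded uncertainty (k=2): {200 * s_meas / mean_conc:.1f} %")
```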

  6. Uncertainties in BC Estimations: the Role of Atmospheric Processes

    Science.gov (United States)

    Vignati, E.; Kloster, S.; Koch, D.; Bauer, S. E.; Dentener, F.; Bond, T.; Sun, H.

    2006-12-01

Modelling physical and chemical processes involving aerosol particles remains a large source of uncertainty. To characterize the range of uncertainty in these processes on atmospheric BC concentrations, three global models (CTM-TM5, GCM ECHAM5-HAM and GISS GCM) were run using identical BC, particulate organic matter and SO2 emission inventories provided by IIASA for the year 2000. The first two models share the same aerosol dynamics module, while TM5 is also run with a bulk aerosol scheme; the GISS model uses both a bulk aerosol scheme and a method-of-moments aerosol microphysical scheme. We can thus specifically assess the differences in predicted BC concentrations from using the bulk approach and the two microphysical schemes. By comparing the modeled concentrations with an extensive data set of observations, distinguished by measurement methodology, season and region, we will critically evaluate the benefit of using microphysical schemes to simulate the atmospheric BC cycle.

  7. Comparison of different methods to estimate the uncertainty in composition measurement by chromatography.

    Science.gov (United States)

    Ariza, Adriana Alexandra Aparicio; Ayala Blanco, Elizabeth; García Sánchez, Luis Eduardo; García Sánchez, Carlos Eduardo

    2015-06-01

    Natural gas is a mixture that contains hydrocarbons and other compounds, such as CO2 and N2. Natural gas composition is commonly measured by gas chromatography, and this measurement is important for the calculation of some thermodynamic properties that determine its commercial value. The estimation of uncertainty in chromatographic measurement is essential for an adequate presentation of the results and a necessary tool for supporting decision making. Various approaches have been proposed for the uncertainty estimation in chromatographic measurement. The present work is an evaluation of three approaches of uncertainty estimation, where two of them (guide to the expression of uncertainty in measurement method and prediction method) were compared with the Monte Carlo method, which has a wider scope of application. The aforementioned methods for uncertainty estimation were applied to gas chromatography assays of three different samples of natural gas. The results indicated that the prediction method and the guide to the expression of uncertainty in measurement method (in the simple version used) are not adequate to calculate the uncertainty in chromatography measurement, because uncertainty estimations obtained by those approaches are in general lower than those given by the Monte Carlo method.
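
    As a minimal sketch of the kind of comparison discussed above, the example below applies first-order GUM propagation and Monte Carlo propagation (in the style of GUM Supplement 1) to a deliberately simplified measurement model: a mole fraction formed from two peak areas with invented values. This is not the authors' model, only an illustration of how the two uncertainty estimates are produced and can then be compared.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy measurement model: mole fraction of one component from two peak
# areas (values and uncertainties are illustrative, not real assay data).
A1, u_A1 = 820.0, 12.0   # peak area of component 1 and its std uncertainty
A2, u_A2 = 180.0, 8.0    # peak area of component 2 and its std uncertainty

def mole_fraction(a1, a2):
    return a1 / (a1 + a2)

# GUM (first-order) propagation: u_y^2 = sum (dy/dx_i)^2 * u_i^2
dy_dA1 = A2 / (A1 + A2) ** 2
dy_dA2 = -A1 / (A1 + A2) ** 2
u_gum = np.sqrt((dy_dA1 * u_A1) ** 2 + (dy_dA2 * u_A2) ** 2)

# Monte Carlo propagation (GUM Supplement 1 style)
samples = mole_fraction(rng.normal(A1, u_A1, 100_000),
                        rng.normal(A2, u_A2, 100_000))
u_mc = samples.std(ddof=1)

print(f"x1 = {mole_fraction(A1, A2):.4f}")
print(f"GUM uncertainty:         {u_gum:.5f}")
print(f"Monte Carlo uncertainty: {u_mc:.5f}")
```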

  8. Seismic moment tensors and estimated uncertainties in southern Alaska

    Science.gov (United States)

    Silwal, Vipul; Tape, Carl

    2016-04-01

    We present a moment tensor catalog of 106 earthquakes in southern Alaska, and we perform a conceptually based uncertainty analysis for 21 of them. For each earthquake, we use both body waves and surface waves to do a grid search over double couple moment tensors and source depths in order to find the minimum of the misfit function. Our uncertainty parameter or, rather, our confidence parameter is the average value of the curve 𝒫 (V), where 𝒫 (V) is the posterior probability as a function of the fractional volume V of moment tensor space surrounding the minimum misfit moment tensor. As a supplemental means for characterizing and visualizing uncertainties, we generate moment tensor samples of the posterior probability. We perform a series of inversion tests to quantify the impact of certain decisions made within moment tensor inversions and to make comparisons with existing catalogs. For example, using an L1 norm in the misfit function provides more reliable solutions than an L2 norm, especially in cases when all available waveforms are used. Using body waves in addition to surface waves, as well as using more stations, leads to the most accurate moment tensor solutions.

  9. Comparing the effects of climate and impact model uncertainty on climate impacts estimates for grain maize

    Science.gov (United States)

    Holzkämper, Annelie; Honti, Mark; Fuhrer, Jürg

    2015-04-01

Crop models are commonly applied to estimate impacts of projected climate change and to anticipate suitable adaptation measures. Thereby, uncertainties from global climate models, regional climate models, and impact models cascade down to impact estimates. It is essential to quantify and understand uncertainties in impact assessments in order to provide informed guidance for decision making in adaptation planning. A question that has hardly been investigated in this context is how sensitive climate impact estimates are to the choice of the impact model approach. In a case study for Switzerland we compare results of three different crop modelling approaches to assess the relevance of impact model choice in relation to other uncertainty sources. The three approaches include an expert-based, a statistical and a process-based model. With each approach, impact model parameter uncertainty and climate model uncertainty (originating from climate model chain and downscaling approach) are accounted for. ANOVA-based uncertainty partitioning is performed to quantify the relative importance of different uncertainty sources. Results suggest that uncertainty in estimated yield changes originating from the choice of the crop modelling approach can be greater than uncertainty from climate model chains. The uncertainty originating from crop model parameterization is small in comparison. While estimates of yield changes are highly uncertain, the directions of estimated changes in climatic limitations are largely consistent. This leads us to the conclusion that by focusing on estimated changes in climate limitations, more meaningful information can be provided to support decision making in adaptation planning - especially in cases where yield changes are highly uncertain.

  10. Uncertainty quantification of surface-water/groundwater exchange estimates in large wetland systems using Python

    Science.gov (United States)

    Hughes, J. D.; Metz, P. A.

    2014-12-01

    Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. We will discuss
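
    Because the abstract names the Python stack explicitly, a minimal sketch of the approach is given below, using scipy.stats.qmc for the Latin Hypercube design (the pyDOE lhs() function mentioned in the abstract yields equivalent samples). All budget terms, distributions and numbers are hypothetical placeholders, not values from the Floral City watershed study.

```python
import numpy as np
from scipy.stats import norm, qmc  # pyDOE's lhs() could be used instead

n = 5000

# Latin Hypercube sample of the uncertain monthly water-budget terms
# (units: mm/month; the distributions below are purely illustrative).
sampler = qmc.LatinHypercube(d=4, seed=0)
u = sampler.random(n)  # uniform [0, 1) samples, shape (n, 4)

rain   = norm(loc=150.0, scale=10.0).ppf(u[:, 0])   # rainfall
et     = norm(loc=110.0, scale=25.0).ppf(u[:, 1])   # land-cover based ET
canal  = norm(loc=20.0,  scale=5.0).ppf(u[:, 2])    # canal discharge out
dstore = norm(loc=5.0,   scale=8.0).ppf(u[:, 3])    # change in storage

# Surface-water/groundwater exchange as the water-budget residual
swgw = dstore - (rain - et - canal)

print(f"SWGW exchange: mean = {swgw.mean():.1f} mm/month, "
      f"95% interval = [{np.percentile(swgw, 2.5):.1f}, "
      f"{np.percentile(swgw, 97.5):.1f}] mm/month")
```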

  11. Random Forests (RFs) for Estimation, Uncertainty Prediction and Interpretation of Monthly Solar Potential

    Science.gov (United States)

    Assouline, Dan; Mohajeri, Nahid; Scartezzini, Jean-Louis

    2017-04-01

Solar energy is clean, widely available, and arguably the most promising renewable energy resource. Taking full advantage of solar power, however, requires a deep understanding of its patterns and dependencies in space and time. The recent advances in Machine Learning brought powerful algorithms to estimate the spatio-temporal variations of solar irradiance (the power per unit area received from the Sun, W/m2), using local weather and terrain information. Such algorithms include Deep Learning (e.g. Artificial Neural Networks) or kernel methods (e.g. Support Vector Machines). However, most of these methods have some disadvantages, as they: (i) are complex to tune, (ii) are mainly used as a black box, offering no interpretation of the variables' contributions, and (iii) often do not provide uncertainty predictions (Assouline et al., 2016). To provide a reasonable solar mapping with good accuracy, these gaps would ideally need to be filled. We present here simple steps using one ensemble learning algorithm, namely Random Forests (Breiman, 2001), to (i) estimate monthly solar potential with good accuracy, (ii) provide information on the contribution of each feature in the estimation, and (iii) offer prediction intervals for each point estimate. We have selected Switzerland as an example. Using a Digital Elevation Model (DEM) along with monthly solar irradiance time series and weather data, we build monthly solar maps for Global Horizontal Irradiance (GHI), Diffuse Horizontal Irradiance (DHI), and Extraterrestrial Irradiance (EI). The weather data include monthly values for temperature, precipitation, sunshine duration, and cloud cover. In order to explain the impact of each feature on the solar irradiance of each point estimate, we extend the contribution method (Kuz'min et al., 2011) to a regression setting. Contribution maps for all features can then be computed for each solar map. This provides precious information on the spatial variation of the features impact all
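
    A minimal sketch of the Random Forests workflow outlined above, using scikit-learn with synthetic data: a point estimate, a simple prediction interval taken from the spread of the individual trees, and the global feature importances as a stand-in for the per-point contribution method extended in the study. Feature names, data and model settings are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the real features (elevation, temperature,
# sunshine duration, cloud cover, ...) and a monthly irradiance target.
X = rng.uniform(size=(2000, 4))
y = 150 + 80 * X[:, 0] - 40 * X[:, 1] + 10 * rng.normal(size=2000)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

X_new = rng.uniform(size=(5, 4))

# Point estimate, empirical prediction interval from the per-tree spread,
# and a global proxy for feature contributions.
point = rf.predict(X_new)
per_tree = np.stack([t.predict(X_new) for t in rf.estimators_])  # (trees, points)
lo, hi = np.percentile(per_tree, [5, 95], axis=0)
importance = rf.feature_importances_

print("point estimates:", np.round(point, 1))
print("90% prediction intervals:", np.round(np.c_[lo, hi], 1))
print("feature importances:", np.round(importance, 3))
```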

  12. A Stochastic Method for Estimating the Effect of Isotopic Uncertainties in Spent Nuclear Fuel

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.

    2001-08-24

This report describes a novel approach developed at the Oak Ridge National Laboratory (ORNL) for the estimation of the uncertainty in the prediction of the neutron multiplication factor for spent nuclear fuel. This technique focuses on burnup credit, where credit is taken in criticality safety analysis for the reduced reactivity of fuel irradiated in and discharged from a reactor. Validation methods for burnup credit have attempted to separate the uncertainty associated with isotopic prediction methods from that of criticality eigenvalue calculations. Biases and uncertainties obtained in each step are combined additively. This approach, while conservative, can be excessive because of the physical assumptions employed. This report describes a statistical approach based on Monte Carlo sampling to directly estimate the total uncertainty in eigenvalue calculations resulting from uncertainties in isotopic predictions. The results can also be used to demonstrate the relative conservatism and statistical confidence associated with the method of additively combining uncertainties. This report does not make definitive conclusions on the magnitude of biases and uncertainties associated with isotopic predictions in a burnup credit analysis. These terms will vary depending on system design and the set of isotopic measurements used as a basis for estimating isotopic variances. Instead, the report describes a method that can be applied with a given design and set of isotopic data for estimating design-specific biases and uncertainties.

  13. The estimation of lower refractivity uncertainty from radar sea clutter using the Bayesian-MCMC method

    Institute of Scientific and Technical Information of China (English)

    Sheng Zheng

    2013-01-01

The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper deals with the RFC problem in a Bayesian framework. It uses the unbiased Markov Chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters, by using an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model within a Bayesian inversion framework. In contrast to global optimization algorithms, the Bayesian-MCMC approach can obtain not only approximate solutions but also the probability distributions of the solutions, that is, uncertainty analyses of the solutions. The Bayesian-MCMC algorithm is applied to both simulated and real radar sea-clutter data. Reference data are taken to be the simulated data and refractivity profiles obtained using a helicopter. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the assumed simulation and the helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.
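
    The sketch below shows the generic Metropolis-Hastings machinery behind a Bayesian-MCMC inversion of the kind described above. The forward model is a toy exponential decay standing in for the split-step FFT parabolic-equation propagation model, and all parameter values are invented; only the sampling logic is meant to be illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in forward model mapping two "refractivity parameters" to a
# synthetic clutter-power profile (a toy function, not the PE model).
def forward(theta):
    return theta[0] * np.exp(-0.1 * np.arange(20)) + theta[1]

theta_true = np.array([10.0, 2.0])
data = forward(theta_true) + rng.normal(0, 0.5, 20)
sigma = 0.5

def log_post(theta):
    resid = data - forward(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)   # flat prior assumed

# Random-walk Metropolis-Hastings
theta, samples = np.array([5.0, 0.0]), []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, 0.2, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])               # discard burn-in

print("posterior mean:", samples.mean(axis=0))
print("posterior std: ", samples.std(axis=0))
```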

  14. Estimation of Uncertainty in Risk Assessment of Hydrogen Applications

    DEFF Research Database (Denmark)

    Markert, Frank; Krymsky, V.; Kozine, Igor

    2011-01-01

Hydrogen technologies such as hydrogen fuelled vehicles and refuelling stations are being tested in practice in a number of projects (e.g. HyFleet-Cute and Whistler project) giving valuable information on the reliability and maintenance requirements. In order to establish refuelling stations...... the permitting authorities request qualitative and quantitative risk assessments (QRA) to show the safety and acceptability in terms of failure frequencies and respective consequences. For new technologies not all statistical data might be established or are available in good quality, causing assumptions...... probability and the NUSAP concept to quantify uncertainties of new not fully qualified hydrogen technologies and implications to risk management....

  15. Using interpolation to estimate system uncertainty in gene expression experiments.

    Directory of Open Access Journals (Sweden)

    Lee J Falin

    Full Text Available The widespread use of high-throughput experimental assays designed to measure the entire complement of a cell's genes or gene products has led to vast stores of data that are extremely plentiful in terms of the number of items they can measure in a single sample, yet often sparse in the number of samples per experiment due to their high cost. This often leads to datasets where the number of treatment levels or time points sampled is limited, or where there are very small numbers of technical and/or biological replicates. Here we introduce a novel algorithm to quantify the uncertainty in the unmeasured intervals between biological measurements taken across a set of quantitative treatments. The algorithm provides a probabilistic distribution of possible gene expression values within unmeasured intervals, based on a plausible biological constraint. We show how quantification of this uncertainty can be used to guide researchers in further data collection by identifying which samples would likely add the most information to the system under study. Although the context for developing the algorithm was gene expression measurements taken over a time series, the approach can be readily applied to any set of quantitative systems biology measurements taken following quantitative (i.e. non-categorical treatments. In principle, the method could also be applied to combinations of treatments, in which case it could greatly simplify the task of exploring the large combinatorial space of future possible measurements.

  16. Uncertainty Estimation of Global Precipitation Measurement through Objective Validation Strategy

    Science.gov (United States)

    KIM, H.; Utsumi, N.; Seto, S.; Oki, T.

    2014-12-01

Since the Tropical Rainfall Measuring Mission (TRMM) was launched in 1997 as the first satellite mission dedicated to measuring precipitation, the spatiotemporal gaps in precipitation observation have been filled significantly. On February 27th, 2014, the Dual-frequency Precipitation Radar (DPR) satellite was launched as the core observatory of the Global Precipitation Measurement (GPM) mission, an international multi-satellite mission aiming to provide a global three-hourly map of rainfall and snowfall. In addition to the Ku-band, a Ka-band radar is newly equipped, and their combination is expected to deliver higher precision than the precipitation measurements of TRMM/PR. In this study, the GPM level-2 orbit products are evaluated against various precipitation observations, including TRMM/PR, in-situ data, and ground radar. In a preliminary validation over crossing orbits of DPR and TRMM, the Ku-band measurements of both satellites show very similar spatial patterns and intensities, and the DPR is capable of capturing a broader range of precipitation intensity than TRMM. Furthermore, we suggest a validation strategy based on an 'objective classification' of background atmospheric mechanisms. The Japanese 55-year Reanalysis (JRA-55) and auxiliary datasets (e.g., tropical cyclone best tracks) are used to objectively determine the types of precipitation. The uncertainty of the abovementioned precipitation products is quantified as their relative differences and characterized for the different precipitation mechanisms. It is also discussed how this uncertainty affects the synthesis of TRMM and GPM into a long-term satellite precipitation observation record that is internally consistent.

  17. The importance of accounting for the uncertainty of published prognostic model estimates.

    Science.gov (United States)

    Young, Tracey A; Thompson, Simon

    2004-01-01

Reported is the importance of properly reflecting the uncertainty associated with prognostic model estimates when calculating the survival benefit of a treatment or technology, using liver transplantation as an example. Monte Carlo simulation techniques were used to account for the uncertainty of prognostic model estimates using the standard errors of the regression coefficients and their correlations. These methods were applied to patients with primary biliary cirrhosis undergoing liver transplantation, using a prognostic model from a historic cohort who did not undergo transplantation. The survival gain over 4 years from transplantation was estimated. Ignoring the uncertainty in the prognostic model, the estimated survival benefit of liver transplantation was 16.7 months (95 percent confidence interval [CI], 13.5 to 20.1) and was statistically significant (p ...). It is important that the precision of regression coefficients is made available to users of published prognostic models. Ignoring this additional information substantially underestimates uncertainty, which can then impact misleadingly on policy decisions.
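
    A minimal sketch of the Monte Carlo technique described above: regression coefficients are drawn from a multivariate normal distribution built from their standard errors and correlations, and the draws are propagated to a downstream prediction. The model form, coefficients, standard errors and correlations below are hypothetical stand-ins, not the published prognostic model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical prognostic model: log hazard = b0 + b1*bilirubin + b2*age.
# Coefficients, standard errors and correlations are illustrative only.
beta = np.array([-4.0, 0.9, 0.03])
se   = np.array([0.5, 0.1, 0.01])
corr = np.array([[1.0, -0.3, -0.2],
                 [-0.3, 1.0,  0.1],
                 [-0.2, 0.1,  1.0]])
cov = corr * np.outer(se, se)

# Monte Carlo draws of the coefficient vector propagate the model's own
# uncertainty into any downstream quantity (e.g. a predicted log hazard).
draws = rng.multivariate_normal(beta, cov, size=10000)

x_patient = np.array([1.0, 2.5, 55.0])           # intercept, bilirubin, age
log_hazard = draws @ x_patient
print(f"predicted log-hazard: {log_hazard.mean():.2f} "
      f"(95% CI {np.percentile(log_hazard, 2.5):.2f} "
      f"to {np.percentile(log_hazard, 97.5):.2f})")
```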

  18. Information Theory for Correlation Analysis and Estimation of Uncertainty Reduction in Maps and Models

    Directory of Open Access Journals (Sweden)

    J. Florian Wellmann

    2013-04-01

Full Text Available The quantification and analysis of uncertainties is important in all cases where maps and models of uncertain properties are the basis for further decisions. Once these uncertainties are identified, the logical next step is to determine how they can be reduced. Information theory provides a framework for the analysis of spatial uncertainties when different subregions are considered as random variables. In the work presented here, joint entropy, conditional entropy, and mutual information are applied for a detailed analysis of spatial uncertainty correlations. The aim is to determine (i) which areas in a spatial analysis share information, and (ii) where, and by how much, additional information would reduce uncertainties. As an illustration, a typical geological example is evaluated: the case of a subsurface layer with uncertain depth, shape and thickness. Mutual information and multivariate conditional entropies are determined based on multiple simulated model realisations. Even for this simple case, the measures not only provide a clear picture of uncertainties and their correlations but also give detailed insights into the potential reduction of uncertainties at each position, given additional information at a different location. The methods are directly applicable to other types of spatial uncertainty evaluations, especially where multiple realisations of a model simulation are analysed. In summary, the application of information theoretic measures opens up the path to a better understanding of spatial uncertainties, and their relationship to information and prior knowledge, for cases where uncertain property distributions are spatially analysed and visualised in maps and models.
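
    A minimal sketch of the entropy and mutual-information calculations described above, applied to a synthetic ensemble of model realisations of layer depth at two locations. The data, bin choice and correlation structure are invented; only the estimation of H, H(A|B) and I(A;B) from binned realisations follows the idea of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy ensemble of model realisations: depth of a subsurface layer at two
# locations A and B, correlated across realisations (values illustrative).
n = 5000
common = rng.normal(size=n)
depth_a = 100 + 10 * common + 3 * rng.normal(size=n)
depth_b = 120 + 8 * common + 6 * rng.normal(size=n)

def entropy(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

# Discretise into bins and estimate H(A), H(B) and H(A,B) from the ensemble.
joint, _, _ = np.histogram2d(depth_a, depth_b, bins=20)
H_a = entropy(joint.sum(axis=1))
H_b = entropy(joint.sum(axis=0))
H_ab = entropy(joint.ravel())

mutual_info = H_a + H_b - H_ab          # I(A;B)
H_a_given_b = H_ab - H_b                # conditional entropy H(A|B)
print(f"H(A) = {H_a:.2f} bits, H(A|B) = {H_a_given_b:.2f} bits, "
      f"I(A;B) = {mutual_info:.2f} bits")
```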

  19. Quantifying Uncertainty for Early Life Cycle Cost Estimates

    Science.gov (United States)

    2013-04-01

  20. Modeling uncertainties in estimation of canopy LAI from hyperspectral remote sensing data - A Bayesian approach

    Science.gov (United States)

    Varvia, Petri; Rautiainen, Miina; Seppänen, Aku

    2017-04-01

    Hyperspectral remote sensing data carry information on the leaf area index (LAI) of forests, and thus in principle, LAI can be estimated based on the data by inverting a forest reflectance model. However, LAI is usually not the only unknown in a reflectance model; especially, the leaf spectral albedo and understory reflectance are also not known. If the uncertainties of these parameters are not accounted for, the inversion of a forest reflectance model can lead to biased estimates for LAI. In this paper, we study the effects of reflectance model uncertainties on LAI estimates, and further, investigate whether the LAI estimates could recover from these uncertainties with the aid of Bayesian inference. In the proposed approach, the unknown leaf albedo and understory reflectance are estimated simultaneously with LAI from hyperspectral remote sensing data. The feasibility of the approach is tested with numerical simulation studies. The results show that in the presence of unknown parameters, the Bayesian LAI estimates which account for the model uncertainties outperform the conventional estimates that are based on biased model parameters. Moreover, the results demonstrate that the Bayesian inference can also provide feasible measures for the uncertainty of the estimated LAI.

  1. Variations of China's emission estimates: response to uncertainties in energy statistics

    Science.gov (United States)

    Hong, Chaopeng; Zhang, Qiang; He, Kebin; Guan, Dabo; Li, Meng; Liu, Fei; Zheng, Bo

    2017-01-01

    The accuracy of China's energy statistics is of great concern because it contributes greatly to the uncertainties in estimates of global emissions. This study attempts to improve the understanding of uncertainties in China's energy statistics and evaluate their impacts on China's emissions during the period of 1990-2013. We employed the Multi-resolution Emission Inventory for China (MEIC) model to calculate China's emissions based on different official data sets of energy statistics using the same emission factors. We found that the apparent uncertainties (maximum discrepancy) in China's energy consumption increased from 2004 to 2012, reaching a maximum of 646 Mtce (million tons of coal equivalent) in 2011 and that coal dominated these uncertainties. The discrepancies between the national and provincial energy statistics were reduced after the three economic censuses conducted during this period, and converging uncertainties were found in 2013. The emissions calculated from the provincial energy statistics are generally higher than those calculated from the national energy statistics, and the apparent uncertainty ratio (the ratio of the maximum discrepancy to the mean value) owing to energy uncertainties in 2012 took values of 30.0, 16.4, 7.7, 9.2 and 15.6 %, for SO2, NOx, VOC, PM2.5 and CO2 emissions, respectively. SO2 emissions are most sensitive to energy uncertainties because of the high contributions from industrial coal combustion. The calculated emission trends are also greatly affected by energy uncertainties - from 1996 to 2012, CO2 and NOx emissions, respectively, increased by 191 and 197 % according to the provincial energy statistics but by only 145 and 139 % as determined from the original national energy statistics. The energy-induced emission uncertainties for some species such as SO2 and NOx are comparable to total uncertainties of emissions as estimated by previous studies, indicating variations in energy consumption could be an important source of

  2. Uncertainty in population growth rates: determining confidence intervals from point estimates of parameters.

    Directory of Open Access Journals (Sweden)

    Eleanor S Devenish Nelson

    Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
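
    A minimal sketch of the general idea described above: resample vital rates and project them through a matrix model to obtain a confidence interval on population growth. The two-stage matrix, vital-rate values and sample sizes below are invented, and binomial/Poisson resampling is used as a simple stand-in for the likelihood-based inference in the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical point estimates of vital rates with assumed sample sizes
# (both are stand-ins for field data on the red fox).
survival  = {"juvenile": (0.45, 80), "adult": (0.65, 120)}  # (estimate, n)
fecundity = (2.0, 60)                                       # cubs/female, n

growth_rates = []
for _ in range(5000):
    # Resample each rate from the sampling distribution implied by n.
    sj_est, sj_n = survival["juvenile"]
    sa_est, sa_n = survival["adult"]
    s_j = rng.binomial(sj_n, sj_est) / sj_n
    s_a = rng.binomial(sa_n, sa_est) / sa_n
    f = rng.poisson(fecundity[0] * fecundity[1]) / fecundity[1]

    # Simple two-stage projection matrix; lambda is its dominant eigenvalue.
    L = np.array([[f * s_j, f * s_a],
                  [s_j,     s_a    ]])
    growth_rates.append(np.max(np.real(np.linalg.eigvals(L))))

lo, hi = np.percentile(growth_rates, [2.5, 97.5])
print(f"lambda = {np.mean(growth_rates):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```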

  3. Uncertainties in field-line tracing in the magnetosphere. Part II: the complete internal geomagnetic field

    Directory of Open Access Journals (Sweden)

    K. S. C. Freeman

Full Text Available The discussion in the preceding paper is restricted to the uncertainties in magnetic-field-line tracing in the magnetosphere resulting from published standard errors in the spherical harmonic coefficients that define the axisymmetric part of the internal geomagnetic field (i.e. g_n^0 ± δg_n^0). Numerical estimates of these uncertainties based on an analytic equation for axisymmetric field lines are in excellent agreement with independent computational estimates based on stepwise numerical integration along magnetic field lines. This comparison confirms the accuracy of the computer program used in the present paper to estimate the uncertainties in magnetic-field-line tracing that arise from published standard errors in the full set of spherical harmonic coefficients, which define the complete (non-axisymmetric) internal geomagnetic field (i.e. g_n^m ± δg_n^m and h_n^m ± δh_n^m). An algorithm is formulated that greatly reduces the computing time required to estimate these uncertainties in magnetic-field-line tracing. The validity of this algorithm is checked numerically for both the axisymmetric part of the internal geomagnetic field in the general case (1 ≤ n ≤ 10) and the complete internal geomagnetic field in a restrictive case (0 ≤ m ≤ n, 1 ≤ n ≤ 3). On this basis it is assumed that the algorithm can be used with confidence in those cases for which the computing time would otherwise be prohibitively long. For the complete internal geomagnetic field, the maximum characteristic uncertainty in the geocentric distance of a field line that crosses the geomagnetic equator at a nominal dipolar distance of 2 R_E is typically 100 km. The corresponding characteristic uncertainty for a field line that crosses the geomagnetic equator at a nominal dipolar distance of 6 R_E is typically 500 km. Histograms and scatter plots showing the characteristic uncertainties associated with magnetic-field-line tracing in the magnetosphere are presented for a range of

  4. Debate on Uncertainty in Estimating Bathing Water Quality

    DEFF Research Database (Denmark)

    Larsen, Torben

    1992-01-01

    Estimating the bathing water quality along the shore near a planned sewage discharge requires data on the source strength of bacteria, the die-off of bacteria and the actual dilution of the sewage. Together these 3 factors give the actual concentration of bacteria on the interesting spots...

  6. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    DEFF Research Database (Denmark)

    Thomsen, Nanna Isbak; Troldborg, Mads; McKnight, Ursula S.

    2012-01-01

Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1) leave as is, (2) clean up, or (3) further investigation needed. However, mass discharge estimates are often very uncertain, which may hamper the management decisions. If option 1 is incorrectly chosen soil and water quality will decrease, threatening or destroying drinking water resources. The risk...... the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We...

  7. Entropy Evolution and Uncertainty Estimation with Dynamical Systems

    Directory of Open Access Journals (Sweden)

    X. San Liang

    2014-06-01

    Full Text Available This paper presents a comprehensive introduction and systematic derivation of the evolutionary equations for absolute entropy H and relative entropy D, some of which exist sporadically in the literature in different forms under different subjects, within the framework of dynamical systems. In general, both H and D are dissipated, and the dissipation bears a form reminiscent of the Fisher information; in the absence of stochasticity, dH/dt is connected to the rate of phase space expansion, and D stays invariant, i.e., the separation of two probability density functions is always conserved. These formulas are validated with linear systems, and put to application with the Lorenz system and a large-dimensional stochastic quasi-geostrophic flow problem. In the Lorenz case, H falls at a constant rate with time, implying that H will eventually become negative, a situation beyond the capability of the commonly used computational technique like coarse-graining and bin counting. For the stochastic flow problem, it is first reduced to a computationally tractable low-dimensional system, using a reduced model approach, and then handled through ensemble prediction. Both the Lorenz system and the stochastic flow system are examples of self-organization in the light of uncertainty reduction. The latter particularly shows that, sometimes stochasticity may actually enhance the self-organization process.

  8. Bias and robustness of uncertainty components estimates in transient climate projections

    Science.gov (United States)

    Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal

    2016-04-01

A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources actually faces different problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of a given chain and the effect of its internal variability may be difficult, if not impossible, to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely not to be very robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based on the only data available for the considered projection lead time, and a time series based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias
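
    The bias discussed above (the empirical variance of chain means overstating model uncertainty when few members are available) can be illustrated with a small synthetic ensemble; the correction subtracts the internal-variability contribution, in the spirit of a single-time ANOVA. All ensemble sizes and values below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic ensemble: G modelling chains, each with R members. The chain
# "climate responses" mu_g and the internal variability sigma_iv are
# arbitrary choices for illustration.
G, R = 8, 3
mu_g = rng.normal(1.5, 0.4, size=G)        # true chain responses (deg C)
sigma_iv = 0.5                             # internal variability (deg C)
y = mu_g[:, None] + rng.normal(0, sigma_iv, size=(G, R))

chain_means = y.mean(axis=1)
var_within = y.var(axis=1, ddof=1).mean()          # internal variability
var_between_naive = chain_means.var(ddof=1)        # naive model uncertainty

# Bias correction: the variance of chain means inflates model uncertainty
# by var_within / R, so subtract that term.
var_model_unbiased = max(var_between_naive - var_within / R, 0.0)

print(f"internal variability variance:             {var_within:.3f}")
print(f"naive model-uncertainty variance:          {var_between_naive:.3f}")
print(f"bias-corrected model-uncertainty variance: {var_model_unbiased:.3f}")
```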

  9. Achieving comparable uncertainty estimates with Kalman filters or linear smoothers for bathymetry data

    Science.gov (United States)

    Bourgeois, Brian S.; Elmore, Paul A.; Avera, William E.; Zambo, Samantha J.

    2016-07-01

    This paper examines and contrasts two estimation methods, Kalman filtering and linear smoothing, for creating interpolated data products from bathymetry measurements. Using targeted examples, we demonstrate previously obscured behavior showing the dependence of linear smoothers on the spatial arrangement of the measurements, yielding markedly different estimation results than the Kalman filter. For bathymetry data, we have modified the variance estimates from both the Kalman filter and linear smoothers to obtain comparable estimators for dense data. These comparable estimators produce uncertainty estimates that have statistically insignificant differences via hypothesis testing. Achieving comparable estimation is accomplished by applying the "propagated uncertainty" concept and a numerical realization of Tobler's principle to the measurement data prior to the computation of the estimate. We show new mathematical derivations for these modifications. In addition, we show test results with (a) synthetic data and (b) gridded bathymetry in the area of the Scripps and La Jolla Canyons. Our tenfold cross-validation for case (b) shows that the modified equations create comparable uncertainty for both gridding algorithms with null hypothesis acceptance rates of greater than 99.95% of the data points. In contrast, bilinear interpolation has 10 times the amount of rejection. We then discuss how the uncertainty estimators are, in principle, applicable to interpolate geophysical data other than bathymetry.

  10. Knowing the unknowns: uncertainties in simple estimators of dynamical masses

    CERN Document Server

    Campbell, David J R; Jenkins, Adrian; Eke, Vincent R; Navarro, Julio F; Sawala, Till; Schaller, Matthieu; Fattahi, Azadeh; Oman, Kyle A; Theuns, Tom

    2016-01-01

    The observed stellar kinematics of dispersion-supported galaxies are often used to measure dynamical masses. Recently, several analytical relationships between the stellar line-of-sight velocity dispersion, the projected (2D) or deprojected (3D) half-light radius, and the total mass enclosed within the half-light radius, relying on the spherical Jeans equation, have been proposed. Here, we make use of the APOSTLE cosmological hydrodynamical simulations of the Local Group to test the validity and accuracy of such mass estimators for both dispersion and rotation-supported galaxies, for field and satellite galaxies, and for galaxies of varying masses, shapes, and velocity dispersion anisotropies. We find that the mass estimators of Walker et al. and Wolf et al. are able to recover the masses of dispersion-dominated systems with little systematic bias, but with a one-sigma scatter of 25 and 23 percent, respectively. The error on the estimated mass is dominated by the impact of the 3D shape of the stellar mass dis...

  11. Dynamic measurements and uncertainty estimation of clinical thermometers using Monte Carlo method

    Science.gov (United States)

    Ogorevc, Jaka; Bojkovski, Jovan; Pušnik, Igor; Drnovšek, Janko

    2016-09-01

    Clinical thermometers in intensive care units are used for the continuous measurement of body temperature. This study describes a procedure for dynamic measurement uncertainty evaluation in order to examine the requirements for clinical thermometer dynamic properties in standards and recommendations. In this study thermistors were used as temperature sensors, transient temperature measurements were performed in water and air and the measurement data were processed for the investigation of thermometer dynamic properties. The thermometers were mathematically modelled. A Monte Carlo method was implemented for dynamic measurement uncertainty evaluation. The measurement uncertainty was analysed for static and dynamic conditions. Results showed that dynamic uncertainty is much larger than steady-state uncertainty. The results of dynamic uncertainty analysis were applied on an example of clinical measurements and were compared to current requirements in ISO standard for clinical thermometers. It can be concluded that there was no need for dynamic evaluation of clinical thermometers for continuous measurement, while dynamic measurement uncertainty was within the demands of target uncertainty. Whereas in the case of intermittent predictive thermometers, the thermometer dynamic properties had a significant impact on the measurement result. Estimation of dynamic uncertainty is crucial for the assurance of traceable and comparable measurements.

  12. Uncertainty in flood damage estimates and its potential effect on investment decisions

    Science.gov (United States)

    Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; de Moel, H.

    2016-01-01

This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage functions and maximum damages can have large effects on flood damage estimates. This explanation is then used to quantify the uncertainty in the damage estimates with a Monte Carlo analysis. The Monte Carlo analysis uses a damage function library with 272 functions from seven different flood damage models. The paper shows that the resulting uncertainties in estimated damages are on the order of a factor of 2 to 5. The uncertainty is typically larger for flood events with small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
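
    A minimal sketch of the Monte Carlo idea described above: each draw picks one damage function (and its maximum damage) from a library and applies it to the same flooded depths. The three-function library, depths and damage values below are invented placeholders for the 272-function library used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny stand-in damage-function library: each entry maps water depth (m)
# to a damage fraction, paired with an assumed maximum damage (EUR/m2).
damage_functions = [
    (lambda d: np.minimum(1.0, 0.30 * d),          600.0),
    (lambda d: np.minimum(1.0, 0.15 * d ** 1.5),   800.0),
    (lambda d: 1.0 - np.exp(-0.5 * d),             500.0),
]

depths = np.array([0.3, 0.8, 1.5, 2.5])   # water depths of flooded cells (m)
area = 1000.0                             # flooded area per cell (m2)

totals = []
for _ in range(10000):
    f, dmax = damage_functions[rng.integers(len(damage_functions))]
    totals.append(np.sum(f(depths) * dmax * area))
totals = np.array(totals)

spread = np.percentile(totals, 97.5) / np.percentile(totals, 2.5)
print(f"damage estimate spread: factor of {spread:.1f} "
      f"between 2.5th and 97.5th percentiles")
```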

  13. Lidar-derived estimate and uncertainty of carbon sink in successional phases of woody encroachment

    Science.gov (United States)

    Woody encroachment is a globally occurring phenomenon that is thought to contribute significantly to the global carbon (C) sink. The C contribution needs to be estimated at regional and local scales to address large uncertainties present in the global- and continental-scale estimates and guide regio...

  14. A novel method to estimate model uncertainty using machine learning techniques

    NARCIS (Netherlands)

    Solomatine, D.P.; Lal Shrestha, D.

    2009-01-01

    A novel method is presented for model uncertainty estimation using machine learning techniques and its application in rainfall runoff modeling. In this method, first, the probability distribution of the model error is estimated separately for different hydrological situations and second, the

  15. Uncertainties in estimating heart doses from 2D-tangential breast cancer radiotherapy

    DEFF Research Database (Denmark)

    Laugaard Lorenzen, Ebbe; Brink, Carsten; Taylor, Carolyn W.;

    2016-01-01

BACKGROUND AND PURPOSE: We evaluated the accuracy of three methods of estimating radiation dose to the heart from two-dimensional tangential radiotherapy for breast cancer, as used in Denmark during 1982-2002. MATERIAL AND METHODS: Three tangential radiotherapy regimens were reconstructed using CT...... heart dose estimated from individual CT-scans varied from ... to 8 Gy, and maximum dose from 5 to 50 Gy for all three regimens, so that estimates based only on regimen had substantial uncertainty. When maximum heart distance was taken into account, the uncertainty was reduced and was comparable to the uncertainty of estimates based on individual CT-scans. For right-sided breast cancer patients, mean heart dose based on individual CT-scans was always

  16. Statistical characterization of roughness uncertainty and impact on wind resource estimation

    DEFF Research Database (Denmark)

    Kelly, Mark C.; Ejsing Jørgensen, Hans

    2017-01-01

In this work we relate uncertainty in background roughness length (z0) to uncertainty in wind speeds, where the latter are predicted at a wind farm location based on wind statistics observed at a different site. Sensitivity of predicted winds to roughness is derived analytically for the industry-standard European Wind Atlas method, which is based on the geostrophic drag law. We statistically consider roughness and its corresponding uncertainty, in terms of both z0 derived from measured wind speeds as well as that chosen in practice by wind engineers. We show the combined effect of roughness uncertainty...... arising from differing wind-observation and turbine-prediction sites; this is done for the case of roughness bias as well as for the general case. For estimation of uncertainty in annual energy production (AEP), we also develop a generalized analytical turbine power curve, from which we derive a relation...

  17. Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.

    Science.gov (United States)

    Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller

    2015-01-01

An approach is proposed for estimating the uncertainty in measurement that considers the individual sources related to the different steps of the method under evaluation, as well as the uncertainties estimated from the validation data, for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS). The considered method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance. The referred method has been accredited under the ISO/IEC 17025 norm since 2010. To estimate the uncertainty in measurement, six sources of uncertainty for mercury determination in seafood by TDA AAS were considered, following the validation process: linear least squares regression, repeatability, intermediate precision, correction factor of the analytical curve, sample mass, and standard reference solution. Those that most influenced the uncertainty in measurement were sample mass, repeatability, intermediate precision and the calibration curve. The estimate of the uncertainty in measurement obtained in the present work was 13.39%, which complies with the European Regulation EC 836/2011. This figure represents a very realistic estimate for routine conditions, since it fairly encompasses the dispersion between the value attributed to the sample and the values measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on calibration curve, recovery and precision), together with the variation in sample mass, can offer a proper estimate of the uncertainty in measurement.

  18. Estimating urban flood risk - uncertainty in design criteria

    Science.gov (United States)

    Newby, M.; Franks, S. W.; White, C. J.

    2015-06-01

The design of urban stormwater infrastructure is generally performed assuming that climate is static. For engineering practitioners, stormwater infrastructure is designed using a peak flow method, such as the Rational Method as outlined in the Australian Rainfall and Runoff (AR&R) guidelines, together with estimates of design rainfall intensities. Changes to Australian rainfall intensity design criteria have been made through updated releases of the AR&R77, AR&R87 and the recent 2013 AR&R Intensity Frequency Distributions (IFDs). The primary focus of this study is to compare the three IFD sets at 51 locations Australia-wide. Since the release of the AR&R77 IFDs, the duration and number of locations for rainfall data have increased and techniques for data analysis have changed. Updated terminology coinciding with the 2013 IFD release has also resulted in a practical change to the design rainfall. For example, infrastructure designed for a 1:5 year ARI correlates with an 18.13% AEP; however, for practical purposes, hydraulic guidelines have been updated with the more intuitive 20% AEP. The evaluation of design rainfall variation across Australia has indicated that the changes are dependent upon location, recurrence interval and rainfall duration. The changes to design rainfall IFDs are due to the application of differing data analysis techniques, the length and number of data sets and the change in terminology from ARI to AEP. Such changes mean that developed infrastructure has been designed to a range of different design criteria, indicating that earlier developments are likely inadequate relative to current estimates of flood risk. In many cases, the under-design of infrastructure is greater than the expected impact of increased rainfall intensity under climate change scenarios.

  19. Stability Analysis for Li-Ion Battery Model Parameters and State of Charge Estimation by Measurement Uncertainty Consideration

    Directory of Open Access Journals (Sweden)

    Shifei Yuan

    2015-07-01

Full Text Available Accurate estimation of model parameters and state of charge (SoC) is crucial for the lithium-ion battery management system (BMS). In this paper, the stability of the model parameter and SoC estimation under measurement uncertainty is evaluated with respect to three different factors: (i) sampling periods of 1/0.5/0.1 s; (ii) current sensor precisions of ±5/±50/±500 mA; and (iii) voltage sensor precisions of ±1/±2.5/±5 mV. Firstly, a numerical model stability analysis and a parametric sensitivity analysis for the battery model parameters are conducted for sampling frequencies of 1-50 Hz. A perturbation analysis of the effect of current/voltage measurement uncertainty on model parameter variation is performed theoretically. Secondly, the impact of the three factors on the model parameter and SoC estimation is evaluated with the federal urban driving sequence (FUDS) profile. The bias correction recursive least squares (CRLS) and adaptive extended Kalman filter (AEKF) algorithms are adopted to estimate the model parameters and SoC jointly. Finally, the simulation results are compared and some insightful findings are concluded. For the given battery model and parameter estimation algorithm, the sampling period and the current/voltage sampling accuracy have a non-negligible effect on the estimation results of the model parameters. This research reveals the influence of measurement uncertainty on model parameter estimation and provides guidelines for selecting a reasonable sampling period and current/voltage sensor sampling precisions in engineering applications.
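
    A minimal sketch of the recursive least squares recursion underlying parameter estimation of the kind described above, applied to a toy linear battery model (V = OCV - R0*I) with invented values and sensor noise. The bias-correction CRLS and AEKF algorithms of the paper are more elaborate; only the basic RLS update is shown.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear-in-parameters battery model: V = OCV - R0 * I + noise.
# True values and noise levels are illustrative, not from the paper.
ocv_true, r0_true = 3.7, 0.05
n = 500
current = rng.uniform(-2.0, 2.0, n)                                # A
voltage = ocv_true - r0_true * current + rng.normal(0, 0.002, n)   # ~2 mV noise

# Recursive least squares with forgetting factor.
theta = np.zeros(2)            # [OCV, R0]
P = np.eye(2) * 1e3
lam = 0.99
for k in range(n):
    phi = np.array([1.0, -current[k]])
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (voltage[k] - phi @ theta)
    P = (P - np.outer(gain, phi) @ P) / lam

print(f"estimated OCV = {theta[0]:.3f} V, R0 = {theta[1]:.4f} Ohm")
```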

  20. Impact of meteorological inflow uncertainty on tracer transport and source estimation in urban atmospheres

    Science.gov (United States)

    Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip; Baskett, Ronald L.

    2016-10-01

A computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City. The best estimate of the inflow at 50 m above ground for the selected period has a wind speed of 4.6 (+2.0/-2.5) m s^-1 and a wind direction of 158.0 (+16/-23), where the uncertainty is a 95% confidence range. The wind speed values prescribed in previous studies differ from our best estimate by two or more standard deviations. Inflow probabilities are also used to weight backward dispersion plumes and produce a spatial map of likely tracer release locations. For the Oklahoma City case, this map pinpoints the location of the known release to within 20 m. By evaluating the dispersion patterns associated with other likely release locations, we further show that inflow uncertainty can explain the differences between simulated and measured tracer concentrations.

  1. Comprehensive analysis of proton range uncertainties related to stopping-power-ratio estimation using dual-energy CT imaging

    Science.gov (United States)

    Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.

    2017-09-01

    The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar; therefore, the following results apply to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant contributor to uncertainty in the DECT approach was imaging uncertainty, followed by DECT modeling uncertainty. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
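
    The per-tissue uncertainties quoted above can be rolled up into a site-level estimate by weighting each tissue group and combining in quadrature. The sketch below shows one plausible reading of that weighting step; the tissue proportions are hypothetical and the root-sum-square combination assumes independent errors, which may differ from the paper's exact procedure.

```python
# Sketch of combining per-tissue SPR uncertainties into a site-level estimate.
# Per-tissue 1-sigma values are taken from the abstract; proportions and the
# weighted root-sum-square rule are illustrative assumptions.
import math

sigma_tissue = {"lung": 0.038, "soft": 0.012, "bone": 0.020}   # 1-sigma SPR uncertainty

# Hypothetical beam-path composition for a prostate plan (fractions sum to 1)
proportion = {"lung": 0.0, "soft": 0.9, "bone": 0.1}

# Weighted root-sum-square over tissue groups (assumes independent errors)
composite_1sigma = math.sqrt(sum((proportion[t] * sigma_tissue[t]) ** 2 for t in sigma_tissue))
print(f"composite SPR uncertainty: {composite_1sigma:.3%} (1 sigma), "
      f"{2 * composite_1sigma:.3%} (2 sigma)")
```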

  2. Uncertainty Evaluation of Weibull Estimators through Monte Carlo Simulation: Applications for Crack Initiation Testing

    Directory of Open Access Journals (Sweden)

    Jae Phil Park

    2016-06-01

    The typical experimental procedure for testing stress corrosion cracking initiation involves an interval-censored reliability test. Based on these test results, the parameters of a Weibull distribution, which is a widely accepted crack initiation model, can be estimated using maximum likelihood estimation or median rank regression. However, it is difficult to determine the appropriate number of test specimens and censoring intervals required to obtain sufficiently accurate Weibull estimators. In this study, we compare maximum likelihood estimation and median rank regression using a Monte Carlo simulation to examine the effects of the total number of specimens, test duration, censoring interval, and shape parameters of the true Weibull distribution on the estimator uncertainty. Finally, we provide the quantitative uncertainties of both Weibull estimators, compare them with the true Weibull parameters, and suggest proper experimental conditions for developing a probabilistic crack initiation model through crack initiation tests.
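
    A stripped-down version of the Monte Carlo exercise described above can be written directly against an interval-censored Weibull likelihood. The sketch below covers the maximum-likelihood branch only; the true parameters, inspection grid and specimen count are illustrative assumptions, not the study's settings.

```python
# Monte Carlo sketch of Weibull-parameter estimator scatter under interval censoring.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
shape_true, scale_true = 2.0, 1000.0              # assumed "true" Weibull parameters
n_spec, t_end, dt_insp = 20, 1500.0, 100.0        # specimens, test duration, censoring interval
edges = np.arange(0.0, t_end + dt_insp, dt_insp)  # inspection times

def neg_log_lik(theta, lo, hi, n_surv):
    k, lam = np.exp(theta)                         # log-parameterisation enforces positivity
    cdf = lambda t: 1.0 - np.exp(-(t / lam) ** k)
    ll = np.sum(np.log(np.clip(cdf(hi) - cdf(lo), 1e-12, None)))   # interval-censored failures
    ll += n_surv * np.log(max(1.0 - cdf(t_end), 1e-12))            # right-censored survivors
    return -ll

estimates = []
for _ in range(500):                               # Monte Carlo replications
    t = scale_true * rng.weibull(shape_true, n_spec)
    failed = t <= t_end
    idx = np.searchsorted(edges, t[failed], side="left")
    lo, hi = edges[idx - 1], edges[idx]            # interval in which each failure was found
    res = minimize(neg_log_lik, x0=np.log([1.5, 800.0]),
                   args=(lo, hi, int((~failed).sum())), method="Nelder-Mead")
    estimates.append(np.exp(res.x))

estimates = np.array(estimates)
print("shape: mean %.2f, sd %.2f" % (estimates[:, 0].mean(), estimates[:, 0].std()))
print("scale: mean %.0f, sd %.0f" % (estimates[:, 1].mean(), estimates[:, 1].std()))
```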

  3. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    Science.gov (United States)

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

    The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates based on a literature review, an evaluation of publicly funded projects, such as those within the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for selecting uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis

  4. Influence of parameter estimation uncertainty in Kriging: Part 1 - Theoretical Development

    Directory of Open Access Journals (Sweden)

    E. Todini

    2001-01-01

    This paper deals with a theoretical approach to assessing the effects of parameter estimation uncertainty both on Kriging estimates and on their estimated error variance. Although a comprehensive treatment of parameter estimation uncertainty is covered by full Bayesian Kriging at the cost of extensive numerical integration, the proposed approach has a wide field of application, given its relative simplicity. The approach is based upon a truncated Taylor expansion approximation and, within the limits of the proposed approximation, the conventional Kriging estimates are shown to be biased for all variograms, the bias depending upon the second order derivatives with respect to the parameters times the variance-covariance matrix of the parameter estimates. A new Maximum Likelihood (ML) estimator for semi-variogram parameters in ordinary Kriging, based upon the assumption of a multi-normal distribution of the Kriging cross-validation errors, is introduced as a means of estimating the parameter variance-covariance matrix. Keywords: Kriging, maximum likelihood, parameter estimation, uncertainty

  5. Evaluating uncertainty estimates in hydrologic models: borrowing measures from the forecast verification community

    Directory of Open Access Journals (Sweden)

    K. J. Franz

    2011-11-01

    The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply the methods to evaluate and compare model ensembles produced from two different parameter uncertainty estimation methods: the Generalized Likelihood Uncertainty Estimator (GLUE) and the Shuffled Complex Evolution Metropolis (SCEM). Model ensembles are generated for the National Weather Service Sacramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the Southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information in addition to what is already available as part of traditional model validation methodology and considers the entire ensemble or uncertainty range in the approach.
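
    One probabilistic verification measure commonly borrowed from the forecast community is the continuous ranked probability score (CRPS). The sketch below applies the standard empirical-ensemble CRPS formula to a synthetic simulation ensemble; it illustrates the kind of metric discussed above and is not the paper's metric set or data.

```python
# Empirical CRPS for a simulation ensemble, computed on synthetic placeholder data.
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS for one observation: E|X - y| - 0.5 * E|X - X'|."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(42)
n_days, n_members = 365, 50
obs = rng.gamma(shape=2.0, scale=5.0, size=n_days)                  # synthetic "observed" flows
ens = obs[:, None] * rng.lognormal(0.0, 0.3, (n_days, n_members))   # synthetic ensemble

daily_crps = np.array([crps_ensemble(ens[d], obs[d]) for d in range(n_days)])
print(f"mean CRPS over the period: {daily_crps.mean():.2f} (same units as streamflow)")
```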

  6. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    Science.gov (United States)

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8–3 m s⁻¹ were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.

  7. Characterization of disdrometer uncertainties and impacts on estimates of snowfall rate and radar reflectivity

    Directory of Open Access Journals (Sweden)

    N. B. Wood

    2013-07-01

    Estimates of snow microphysical properties obtained by analyzing collections of individual particles are often limited to short time scales and coarse time resolution. Retrievals using disdrometer observations coincident with bulk measurements such as radar reflectivity and snowfall amounts may overcome these limitations; however, retrieval techniques using such observations require uncertainty estimates not only for the bulk measurements themselves, but also for the simulated measurements modeled from the disdrometer observations. Disdrometer uncertainties arise due to sampling and analytic errors and to the discrete, potentially truncated form of the reported size distributions. Imaging disdrometers such as the Snowflake Video Imager and 2-D Video Disdrometer provide remarkably detailed representations of snow particles, but view limited projections of their three-dimensional shapes. Particle sizes determined by such instruments underestimate the true dimensions of the particles in a way that depends, in the mean, on particle shape, also contributing to uncertainties. An uncertainty model that accounts for these uncertainties is developed and used to establish their contributions to simulated radar reflectivity and snowfall rate. Viewing geometry effects are characterized by a parameter, φ, that relates disdrometer-observed particle size to the true maximum dimension of the particle. Values and uncertainties for φ are estimated using idealized ellipsoidal snow particles. The model is applied to observations from seven snow events from the Canadian CloudSat CALIPSO Validation Project (C3VP), a mid-latitude cold season cloud and precipitation field experiment. Typical total uncertainties are 4 dBZ for reflectivity and 40–60% for snowfall rate, are highly correlated, and are substantial compared to expected observational uncertainties. The dominant sources of errors are viewing geometry effects and the discrete, truncated form of the size distributions.

  8. Climate data induced uncertainty in model-based estimations of terrestrial primary productivity

    Science.gov (United States)

    Wu, Zhendong; Ahlström, Anders; Smith, Benjamin; Ardö, Jonas; Eklundh, Lars; Fensholt, Rasmus; Lehsten, Veiko

    2017-06-01

    Model-based estimations of historical fluxes and pools of the terrestrial biosphere differ substantially. These differences arise not only from differences between models but also from differences in the environmental and climatic data used as input to the models. Here we investigate the role of uncertainties in historical climate data by performing simulations of terrestrial gross primary productivity (GPP) using a process-based dynamic vegetation model (LPJ-GUESS) forced by six different climate datasets. We find that the climate-induced uncertainty, defined as the range among historical simulations in GPP when forcing the model with the different climate datasets, can be as high as 11 Pg C yr⁻¹ globally (9% of mean GPP). We also assessed a hypothetical maximum climate data induced uncertainty by combining climate variables from different datasets, which resulted in significantly larger uncertainties of 41 Pg C yr⁻¹ globally or 32% of mean GPP. The uncertainty is partitioned into components associated with the three main climatic drivers, temperature, precipitation, and shortwave radiation. Additionally, we illustrate how the uncertainty due to a given climate driver depends both on the magnitude of the forcing data uncertainty (climate data range) and the apparent sensitivity of the modeled GPP to the driver (apparent model sensitivity). We find that LPJ-GUESS overestimates GPP compared to an empirically based GPP data product in all land cover classes except for tropical forests. Tropical forests emerge as a disproportionate source of uncertainty in GPP estimation both in the simulations and empirical data products. The tropical forest uncertainty is most strongly associated with shortwave radiation and precipitation forcing, of which the climate data range contributes more to the overall uncertainty than the apparent model sensitivity to forcing. Globally, precipitation dominates the climate-induced uncertainty over nearly half of the vegetated land area, which is mainly due

  9. Inverse modeling and uncertainty analysis of potential groundwater recharge to the confined semi-fossil Ohangwena II Aquifer, Namibia

    Science.gov (United States)

    Wallner, Markus; Houben, Georg; Lohe, Christoph; Quinger, Martin; Himmelsbach, Thomas

    2017-07-01

    The identification of potential recharge areas and estimation of recharge rates to the confined semi-fossil Ohangwena II Aquifer (KOH-2) is crucial for its future sustainable use. The KOH-2 is located within the endorheic transboundary Cuvelai-Etosha-Basin (CEB), shared by Angola and Namibia. The main objective was the development of a strategy to tackle data scarcity, a well-known problem in semi-arid regions. In a first step, conceptual geological cross sections were created to illustrate the possible geological setting of the system. Furthermore, groundwater travel times were estimated by simple hydraulic calculations. A two-dimensional numerical groundwater model was set up to analyze flow patterns and potential recharge zones. The model was optimized against local observations of hydraulic heads and groundwater age. The sensitivity of the model to different boundary conditions and internal structures was tested. Parameter uncertainty and recharge rates were estimated. Results indicate that groundwater recharge to the KOH-2 mainly occurs from the Angolan Highlands in the northeastern part of the CEB. The sensitivity of the groundwater model to different internal structures is relatively small in comparison to changing boundary conditions in the form of influent or effluent streams. Uncertainty analysis underlined previous results, indicating groundwater recharge originating from the Angolan Highlands. The estimated recharge rates are less than 1% of mean yearly precipitation, which is reasonable for semi-arid regions.

  10. Quantifying and Reducing Uncertainty in Estimated Microcystin Concentrations from the ELISA Method.

    Science.gov (United States)

    Qian, Song S; Chaffin, Justin D; DuFour, Mark R; Sherman, Jessica J; Golnick, Phoenix C; Collier, Christopher D; Nummer, Stephanie A; Margida, Michaela G

    2015-12-15

    We discuss the uncertainty associated with a commonly used method for measuring the concentration of microcystin, a group of toxins associated with cyanobacterial blooms. Such uncertainty is rarely reported or accounted for in important drinking water management decisions. Using monitoring data from the Ohio Environmental Protection Agency and the City of Toledo, we document the sources of measurement uncertainty and recommend a Bayesian hierarchical modeling approach for reducing the measurement uncertainty. Our analysis suggests that (1) much of the uncertainty is a result of the highly uncertain "standard curve" developed during each test and (2) the uncertainty can be reduced by pooling raw test data from multiple tests. Based on these results, we suggest that estimation uncertainty can be effectively reduced through the effort of either (1) regional regulatory agencies by sharing and combining raw test data from regularly scheduled microcystin monitoring programs or (2) the manufacturer of the testing kit by conducting additional tests as part of an effort to improve the testing kit.

  11. Sensitivity and uncertainty analysis of estimated soil hydraulic parameters for simulating soil water content

    Science.gov (United States)

    Gupta, Manika; Garg, Naveen Kumar; Srivastava, Prashant K.

    2014-05-01

    A sensitivity and uncertainty analysis has been carried out for the scalar parameters, i.e. the soil hydraulic parameters (SHPs), which govern the simulation of soil water content in the unsaturated soil zone. The study involves field experiments conducted under real field conditions for a wheat crop in Roorkee, India, under irrigated conditions. Soil samples were taken from the 60 cm soil profile at 15 cm intervals in the experimental field to determine soil water retention curves (SWRCs). These experimentally determined SWRCs were used to estimate the SHPs by least-squares optimization under constrained conditions. The sensitivity of the SHPs estimated by various pedotransfer functions (PTFs), which relate easily measurable soil properties such as soil texture, bulk density and organic carbon content, is compared with that of the laboratory-derived parameters in simulating the respective soil water retention curves. Sensitivity analysis was carried out using Monte Carlo simulations and the one-factor-at-a-time approach. The different sets of SHPs, along with the experimentally determined saturated permeability, are then used as input parameters in a physically based root water uptake model to ascertain the uncertainties in simulating soil water content. The generalised likelihood uncertainty estimation procedure (GLUE) was subsequently used to estimate the uncertainty bounds (UB) on the model predictions. It was found that the experimentally obtained SHPs were able to simulate the soil water contents with efficiencies of 70-80% at all depths for the three irrigation treatments. The SHPs obtained from the PTFs performed with varying uncertainties in simulating the soil water contents. Keywords: Sensitivity analysis, Uncertainty estimation, Pedotransfer functions, Soil hydraulic parameters, Hydrological modelling
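
    The GLUE step referred to above follows a simple recipe: sample candidate parameter sets, score each simulation with a likelihood measure, retain the behavioural sets above a threshold, and form likelihood-weighted prediction bounds. The sketch below illustrates that recipe with a toy soil-moisture drydown model standing in for the root water uptake model; parameter names, ranges, the Nash-Sutcliffe likelihood measure and the acceptance threshold are all illustrative assumptions.

```python
# Minimal GLUE sketch: sample parameters, keep "behavioural" sets, form bounds.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 60.0)                                    # days

def soil_moisture_model(theta_sat, k_decay):
    # Toy exponential drydown standing in for the real root-water-uptake model
    return 0.1 + (theta_sat - 0.1) * np.exp(-k_decay * t)

obs = soil_moisture_model(0.42, 0.05) + rng.normal(0, 0.01, t.size)  # synthetic "observations"

n_samples = 5000
theta_sat = rng.uniform(0.30, 0.55, n_samples)            # sampled soil hydraulic parameters
k_decay = rng.uniform(0.01, 0.10, n_samples)

sims = np.array([soil_moisture_model(a, b) for a, b in zip(theta_sat, k_decay)])
nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

behavioural = nse > 0.7                                   # subjective GLUE acceptance threshold
beh_sims = sims[behavioural]
weights = nse[behavioural] / nse[behavioural].sum()

# Likelihood-weighted 5-95% uncertainty bounds at each time step
lower, upper = [], []
for j in range(t.size):
    order = np.argsort(beh_sims[:, j])
    s, w = beh_sims[order, j], np.cumsum(weights[order])
    lower.append(np.interp(0.05, w, s))
    upper.append(np.interp(0.95, w, s))

print(f"{behavioural.sum()} behavioural sets; bounds at day 30: "
      f"[{lower[30]:.3f}, {upper[30]:.3f}] vs obs {obs[30]:.3f}")
```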

  12. Parameter estimation and uncertainty for gravitational waves from binary black holes

    Science.gov (United States)

    Berry, Christopher; LIGO Scientific Collaboration; Virgo Collaboration

    2016-03-01

    Binary black holes are one of the most promising sources of gravitational waves that could be observed by Advanced LIGO. To accurately infer the parameters of an astrophysical signal, it is necessary to have a reliable model of the gravitational waveform. Uncertainty in the waveform leads to uncertainty in the measured parameters. For loud signals, this theoretical uncertainty could dominate statistical uncertainty, becoming the primary source of error in gravitational-wave astronomy. However, we expect the first candidate events will be closer to the detection threshold. We look at how parameter estimation would be influenced by the use of different waveform models for a binary black-hole signal near detection threshold, and how this can be folded into a Bayesian analysis.

  13. Uncertainty of feedback and state estimation determines the speed of motor adaptation

    Directory of Open Access Journals (Sweden)

    Kunlin Wei

    2010-05-01

    Humans can adapt their motor behaviors to deal with ongoing changes. To achieve this, the nervous system needs to estimate central variables for our movement based on past knowledge and new feedback, both of which are uncertain. In the Bayesian framework, rates of adaptation characterize how noisy feedback is in comparison to the uncertainty of the state estimate. The predictions of Bayesian models are intuitive: the nervous system should adapt more slowly when sensory feedback is noisier and faster when its state estimate is more uncertain. Here we want to quantitatively understand how uncertainty in these two factors affects motor adaptation. In a hand-reaching experiment, we measured trial-by-trial adaptation to a randomly changing visual perturbation to characterize the way the nervous system handles uncertainty in state estimation and feedback. We found both qualitative predictions of Bayesian models confirmed. Our study provides evidence that the nervous system represents and uses uncertainty in the state estimate and feedback during motor adaptation.
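
    The Bayesian prediction described above has a compact Kalman-filter form: the fraction of an observed error corrected on the next trial equals the state-estimate uncertainty divided by the total (state plus feedback) uncertainty. The sketch below illustrates that relation and a trial-by-trial simulation; all variances are illustrative placeholders, not fitted values from the experiment.

```python
# Kalman-gain view of motor adaptation rate (illustrative variances).
import numpy as np

def adaptation_rate(state_var, feedback_var):
    """Kalman gain: fraction of the observed error corrected on the next trial."""
    return state_var / (state_var + feedback_var)

# More feedback noise -> slower adaptation; more state uncertainty -> faster.
for state_var, feedback_var in [(1.0, 0.5), (1.0, 4.0), (4.0, 0.5)]:
    k = adaptation_rate(state_var, feedback_var)
    print(f"state var {state_var:.1f}, feedback var {feedback_var:.1f} -> rate {k:.2f}")

# Trial-by-trial simulation: tracking a randomly drifting visual perturbation
rng = np.random.default_rng(3)
perturbation, estimate, P = 0.0, 0.0, 1.0
q, r = 0.2, 1.0                                       # process and feedback noise variances
for trial in range(200):
    perturbation += rng.normal(0, np.sqrt(q))         # true perturbation random walk
    feedback = perturbation + rng.normal(0, np.sqrt(r))
    P += q                                            # prediction inflates state uncertainty
    k = adaptation_rate(P, r)
    estimate += k * (feedback - estimate)             # partial correction = motor adaptation
    P *= (1.0 - k)
print(f"final estimate {estimate:.2f}, true perturbation {perturbation:.2f}")
```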

  14. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    Directory of Open Access Journals (Sweden)

    Janet L. Rachlow

    2013-08-01

    United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data.

  15. Estimating U.S. Methane Emissions from the Natural Gas Supply Chain. Approaches, Uncertainties, Current Estimates, and Future Studies

    Energy Technology Data Exchange (ETDEWEB)

    Heath, Garvin [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Warner, Ethan [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Steinberg, Daniel [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Brandt, Adam [Stanford Univ., CA (United States)

    2015-08-01

    A growing number of studies have raised questions regarding uncertainties in our understanding of methane (CH4) emissions from fugitives and venting along the natural gas (NG) supply chain. In particular, a number of measurement studies have suggested that actual levels of CH4 emissions may be higher than estimated by EPA's U.S. GHG Emission Inventory. We reviewed the literature to identify these studies and assess the uncertainties they raise.

  16. Uncertainties in Estimates of the Risks of Late Effects from Space Radiation

    Science.gov (United States)

    Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P.; Dicelli, J. F.

    2002-01-01

    The health risks faced by astronauts from space radiation include cancer, cataracts, hereditary effects, and non-cancer morbidity and mortality risks related to the diseases of old age. Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which cause estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Within the linear-additivity model, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain a Maximum Likelihood estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including ISS, lunar station, deep space outpost, and Mars missions of 360, 660, and 1000 days duration. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in quantitative terms, i.e., the number of days in space without exceeding a given risk level within well-defined confidence limits.
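
    The Monte Carlo propagation described above can be illustrated by treating the risk projection as a product of factors, each carrying a subjective uncertainty distribution. The sketch below is a schematic of that idea only; the factor names, log-normal widths and nominal risk value are assumptions, not the actual distributions used in the study.

```python
# Schematic Monte Carlo propagation of multiplicative risk-projection factors.
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

point_risk = 0.03                                  # assumed nominal lifetime risk for the mission
# Multiplicative uncertainty factors (median 1); widths chosen purely for illustration
dose_factor      = rng.lognormal(0.0, 0.15, n)     # physics / transport uncertainty
quality_factor   = rng.lognormal(0.0, 0.50, n)     # radiation quality / RBE uncertainty
dose_rate_factor = rng.lognormal(0.0, 0.25, n)     # dose-rate effectiveness uncertainty
transfer_factor  = rng.lognormal(0.0, 0.20, n)     # population-transfer uncertainty

risk = point_risk * dose_factor * quality_factor * dose_rate_factor * transfer_factor
lo, med, hi = np.percentile(risk, [2.5, 50, 97.5])
print(f"median risk {med:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```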

  17. Gravimetric dilution of calibration gas mixtures (CO2, CO, and CH4 in He balance): Toward their uncertainty estimation

    Science.gov (United States)

    Budiman, Harry; Mulyana, Muhammad Rizky; Zuas, Oman

    2017-01-01

    Uncertainty estimation for the gravimetric dilution of four calibration gas mixtures [carbon dioxide (CO2), carbon monoxide (CO), and methane (CH4) in helium (He) balance] has been carried out according to the International Organization for Standardization (ISO) "Guide to the Expression of Uncertainty in Measurement". The uncertainty of the composition of the gas mixtures was evaluated to establish the quality, reliability, and comparability of the prepared calibration gas mixtures. The analytical process for the uncertainty estimation comprises four main stages: specification of the measurand, identification of the relevant uncertainty sources, quantification of those sources, and combination of the individual uncertainty contributions. In this study, important uncertainty sources, including weighing, the gas cylinder, the component gas, the certified calibration gas mixture (CCGM) added, and the purity of the He balance, were examined to estimate the final uncertainty of the composition of the diluted calibration gas mixtures. The results show that the uncertainties of the gravimetric dilution of the four calibration gas mixtures (CO2, CO, and CH4 in He balance) were in the range 5.974%–7.256%, expressed as relative expanded uncertainty at the 95% confidence level (k = 2). The major contribution to the final uncertainty arose from the certified calibration gas mixture (CCGM), i.e., the uncertainty value stated in the CCGM certificate. Verification of the calibration gas mixture compositions shows that the gravimetric values were consistent with measurements made using gas chromatography with a flame ionization detector equipped with a methanizer.
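
    The combination step follows the usual GUM pattern: convert each source to a standard uncertainty, add in quadrature, and expand with a coverage factor of k = 2. The sketch below shows that arithmetic with placeholder numbers; the individual source values are assumptions, although the dominance of the CCGM term mirrors the finding reported above.

```python
# GUM-style quadrature combination and expansion (placeholder source values).
import math

# Relative standard uncertainties (fractions of the amount fraction)
u_rel = {
    "weighing": 0.002,
    "gas_cylinder": 0.001,
    "component_gas_purity": 0.005,
    "ccgm_certificate": 0.030,   # typically the dominant term, as reported above
    "he_balance_purity": 0.003,
}

u_combined = math.sqrt(sum(u ** 2 for u in u_rel.values()))
U_expanded = 2.0 * u_combined        # coverage factor k = 2 (~95% confidence)
print(f"combined relative standard uncertainty: {u_combined:.3%}")
print(f"expanded relative uncertainty (k=2):    {U_expanded:.3%}")
```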

  18. Sensitivity of process design to uncertainties in property estimates applied to extractive distillation

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Hukkerikar, Amol; Sin, Gürkan;

    through the calculation steps to such an extent that the final design might not be feasible or lead to poor performance. Therefore, it is necessary to evaluate the sensitivity of process design to the uncertainties in property estimates obtained from thermo-physical property models. Uncertainty... and sensitivity analysis can be combined to determine which properties are of critical importance from a process design point of view and to establish an acceptable level of accuracy for the different thermo-physical property methods employed. This helps the user to determine if additional property measurements... in the laboratory are required or to find more accurate values in the literature. A tailor-made and more efficient experimentation schedule is the result. This work discusses a systematic methodology for analysing the sensitivity of process design to uncertainties in property estimates. The application...

  19. Estimating the Uncertainty of Tensile Strength Measurement for A Photocured Material Produced by Additive Manufacturing

    Directory of Open Access Journals (Sweden)

    Adamczak Stanisław

    2014-08-01

    The aim of this study was to estimate the measurement uncertainty for a material produced by additive manufacturing. The material investigated was FullCure 720 photocured resin, which was applied to fabricate tensile specimens with a Connex 350 3D printer based on PolyJet technology. The tensile strength of the specimens established through static tensile testing was used to determine the measurement uncertainty. There is a need for extensive research into the performance of model materials obtained via 3D printing as they have not been studied as thoroughly as metal alloys or plastics, the most common structural materials. In this analysis, the measurement uncertainty was estimated using a larger number of samples than usual, i.e., thirty instead of the typical ten. The results can be very useful to engineers who design models and finished products using this material. The investigations also show how wide the scatter of results is.

  20. Uncertainty of Coupled Soil-Vegetation-Atmosphere Modelling Methods for Estimating Groundwater Recharge

    Science.gov (United States)

    Xie, Y.; Cook, P. G.; Simmons, C. T.; Partington, D.; Crosbie, R.; Batelaan, O.

    2016-12-01

    Coupled soil-vegetation-atmosphere models have become increasingly popular for estimating groundwater recharge, because of the integration of carbon, energy and water balances. The carbon and energy balances act to constrain the water balance and as a result should reduce the uncertainty of groundwater recharge estimates. However, the addition of carbon and energy balances also introduces a large number of plant physiological parameters which complicates the estimation of groundwater recharge. Moreover, this method often relies on existing pedotransfer functions to derive soil water retention curve parameters and saturated hydraulic conductivity from soil attribute data. The choice of a pedotransfer function is usually subjective and several pedotransfer functions may be fit for the purpose. These different pedotransfer functions (and thus the uncertainty of soil water retention curve parameters and saturated hydraulic conductivity) are likely to increase the prediction uncertainty of recharge estimates. In this study, we aim to assess the potential uncertainty of groundwater recharge when using a coupled soil-vegetation-atmosphere modelling method. The widely used WAter Vegetation Energy and Solute (WAVES) modelling code was used to perform simulations of different water balances in order to estimate groundwater recharge in the Campaspe catchment in southeast Australia. We carefully determined the ranges of the vegetation parameters based upon a literature review. We also assessed a number of existing pedotransfer functions and selected the four most appropriate. Then the Monte Carlo analysis approach was employed to examine potential uncertainties introduced by different types of errors. Preliminary results suggest that for a mean rainfall of about 500 mm/y and annual pasture vegetation, the estimated recharge may range from 10 to 150 mm/y due to the uncertainty in vegetation parameters. This upper bound of the recharge range may double to 300 mm/y if different

  1. Certain uncertainty: using pointwise error estimates in super-resolution microscopy

    CERN Document Server

    Lindén, Martin; Amselem, Elias; Elf, Johan

    2016-01-01

    Point-wise localization of individual fluorophores is a critical step in super-resolution microscopy and single particle tracking. Although the methods are limited by the accuracy in localizing individual fluorophores, this point-wise accuracy has so far only been estimated by theoretical best-case approximations, disregarding, for example, motional blur, out-of-focus broadening of the point spread function and time-varying changes in the fluorescence background. Here, we show that pointwise localization uncertainty can be accurately estimated directly from imaging data using a Laplace approximation constrained by simple microscope properties. We further demonstrate that the estimated localization uncertainty can be used to improve downstream quantitative analysis, such as estimation of diffusion constants and detection of changes in molecular motion patterns. Most importantly, the accuracy of actual point localizations in live cell super-resolution microscopy can be improved beyond the information theoretic lo...

  2. Regional inversion of CO2 ecosystem fluxes from atmospheric measurements. Reliability of the uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l'Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell'Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)

    2013-07-01

    The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where ~50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38%. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than

  3. A super-resolution approach for uncertainty estimation of PIV measurements

    NARCIS (Netherlands)

    Sciacchitano, A.; Wieneke , B.; Scarano, F.

    2012-01-01

    A super-resolution approach is proposed for the a posteriori uncertainty estimation of PIV measurements. The measured velocity field is employed to determine the displacement of individual particle images. A disparity set is built from the residual distance between paired particle images of

  4. Revised cost savings estimate with uncertainty for enhanced sludge washing of underground storage tank waste

    Energy Technology Data Exchange (ETDEWEB)

    DeMuth, S.

    1998-09-01

    Enhanced Sludge Washing (ESW) has been selected to reduce the amount of sludge-based underground storage tank (UST) high-level waste at the Hanford site. During the past several years, studies have been conducted to determine the cost savings derived from the implementation of ESW. The tank waste inventory and ESW performance continue to be revised as characterization and development efforts advance. This study provides a new cost savings estimate based upon the most recent inventory and ESW performance revisions, and includes an estimate of the associated cost uncertainty. Whereas the author's previous cost savings estimates for ESW were compared against no sludge washing, this study assumes the baseline to be simple water washing, which more accurately reflects the retrieval activity alone. The revised ESW cost savings estimate for all UST waste at Hanford is $6.1 B ± $1.3 B within 95% confidence. This is based upon capital and operating cost savings, but does not include development costs. The development costs are assumed negligible since they should be at least an order of magnitude less than the savings. The overall cost savings uncertainty was derived from process performance uncertainties and baseline remediation cost uncertainties, as determined by the author's engineering judgment.

  5. Balancing uncertainty of context in ERP project estimation: an approach and a case study

    NARCIS (Netherlands)

    Daneva, Maia

    2010-01-01

    The increasing demand for Enterprise Resource Planning (ERP) solutions as well as the high rates of troubled ERP implementations and outright cancellations calls for developing effort estimation practices to systematically deal with uncertainties in ERP projects. This paper describes an approach -

  6. Managing Uncertainty in ERP Project Estimation Practice: An Industrial Case Study

    NARCIS (Netherlands)

    Daneva, Maia; Jedlitschka, A.; Salo, O.

    2008-01-01

    Uncertainty is a crucial element in managing projects. This paper’s aim is to shed some light into the issue of uncertain context factors when estimating the effort needed for implementing enterprise resource planning (ERP) projects. We outline a solution approach to this issue. It complementarily

  7. Estimating uncertainty and reliability of social network data using Bayesian inference.

    Science.gov (United States)

    Farine, Damien R; Strandburg-Peshkin, Ariana

    2015-09-01

    Social network analysis provides a useful lens through which to view the structure of animal societies, and as a result its use is increasingly widespread. One challenge that many studies of animal social networks face is dealing with limited sample sizes, which introduces the potential for a high level of uncertainty in estimating the rates of association or interaction between individuals. We present a method based on Bayesian inference to incorporate uncertainty into network analyses. We test the reliability of this method at capturing both local and global properties of simulated networks, and compare it to a recently suggested method based on bootstrapping. Our results suggest that Bayesian inference can provide useful information about the underlying certainty in an observed network. When networks are well sampled, observed networks approach the real underlying social structure. However, when sampling is sparse, Bayesian inferred networks can provide realistic uncertainty estimates around edge weights. We also suggest a potential method for estimating the reliability of an observed network given the amount of sampling performed. This paper highlights how relatively simple procedures can be used to estimate uncertainty and reliability in studies using animal social network analysis.
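
    The core idea of incorporating uncertainty into edge weights can be sketched with a Beta posterior over each dyad's association probability, so that sparsely sampled pairs carry wide credible intervals. The example below assumes a uniform Beta(1, 1) prior for illustration; the paper's actual model may differ.

```python
# Beta-posterior sketch of uncertainty in social-network association rates.
from scipy.stats import beta

def edge_posterior(n_together, n_observations, prior=(1.0, 1.0)):
    """Posterior over the association probability for one dyad."""
    a = prior[0] + n_together
    b = prior[1] + (n_observations - n_together)
    mean = a / (a + b)
    lo, hi = beta.ppf([0.025, 0.975], a, b)
    return mean, (lo, hi)

# Same observed rate (0.5), very different certainty depending on sampling effort
for together, total in [(2, 4), (20, 40), (200, 400)]:
    mean, (lo, hi) = edge_posterior(together, total)
    print(f"{together}/{total} joint sightings -> mean {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```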

  8. Balancing uncertainty of context in ERP project estimation: an approach and a case study

    NARCIS (Netherlands)

    Daneva, Maya

    2010-01-01

    The increasing demand for Enterprise Resource Planning (ERP) solutions as well as the high rates of troubled ERP implementations and outright cancellations calls for developing effort estimation practices to systematically deal with uncertainties in ERP projects. This paper describes an approach - a

  9. Measuring Cross-Section and Estimating Uncertainties with the fissionTPC

    Energy Technology Data Exchange (ETDEWEB)

    Bowden, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manning, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sangiorgio, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seilhan, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-30

    The purpose of this document is to outline the prescription for measuring fission cross-sections with the NIFFTE fissionTPC and estimating the associated uncertainties. As such it will serve as a work planning guide for NIFFTE collaboration members and facilitate clear communication of the procedures used to the broader community.

  10. Uncertainty in peat volume and soil carbon estimated using ground-penetrating radar and probing

    Science.gov (United States)

    Andrew D. Parsekian; Lee Slater; Dimitrios Ntarlagiannis; James Nolan; Stephen D. Sebestyen; Randall K. Kolka; Paul J. Hanson

    2012-01-01

    Estimating soil C stock in a peatland is highly dependent on accurate measurement of the peat volume. In this study, we evaluated the uncertainty in calculations of peat volume using high-resolution data to resolve the three-dimensional structure of a peat basin based on both direct (push probes) and indirect geophysical (ground-penetrating radar) measurements. We...

  11. Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty

    Science.gov (United States)

    A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...

  12. Evaluating uncertainty in 7Be-based soil erosion estimates: an experimental plot approach

    Science.gov (United States)

    Blake, Will; Taylor, Alex; Abdelli, Wahid; Gaspar, Leticia; Barri, Bashar Al; Ryken, Nick; Mabit, Lionel

    2014-05-01

    Soil erosion remains a major concern for the international community and there is a growing need to improve the sustainability of agriculture to support future food security. High resolution soil erosion data are a fundamental requirement for underpinning soil conservation and management strategies but representative data on soil erosion rates are difficult to achieve by conventional means without interfering with farming practice and hence compromising the representativeness of results. Fallout radionuclide (FRN) tracer technology offers a solution since FRN tracers are delivered to the soil surface by natural processes and, where irreversible binding can be demonstrated, redistributed in association with soil particles. While much work has demonstrated the potential of short-lived 7Be (half-life 53 days), particularly in quantification of short-term inter-rill erosion, less attention has focussed on sources of uncertainty in derived erosion measurements and sampling strategies to minimise these. This poster outlines and discusses potential sources of uncertainty in 7Be-based soil erosion estimates and the experimental design considerations taken to quantify these in the context of a plot-scale validation experiment. Traditionally, gamma counting statistics have been the main element of uncertainty propagated and reported but recent work has shown that other factors may be more important such as: (i) spatial variability in the relaxation mass depth that describes the shape of the 7Be depth distribution for an uneroded point; (ii) spatial variability in fallout (linked to rainfall patterns and shadowing) over both reference site and plot; (iii) particle size sorting effects; (iv) preferential mobility of fallout over active runoff contributing areas. To explore these aspects in more detail, a plot of 4 x 35 m was ploughed and tilled to create a bare, sloped soil surface at the beginning of winter 2013/2014 in southwest UK. The lower edge of the plot was bounded by

  13. Investigating the impact of data uncertainty on the estimation of catchment nutrient fluxes.

    Science.gov (United States)

    Lloyd, Charlotte; Freer, Jim; Collins, Adrian; Johnes, Penny; Coxon, Gemma

    2014-05-01

    Changing climate and a growing population are increasing pressures on the world's water bodies. Maintaining food security has resulted in changes in agricultural practices, leading to adverse impacts on water quality. To address this problem robust evidence is needed to determine which on-farm mitigation strategies are likely to be most effective in reducing pollutant impacts. The introduction of in-situ quasi-continuous monitoring of water quality provides the means to improve the characterisation of pollutant behaviour and gain new and more robust understanding of hydrological and biogeochemical flux behaviours in catchments. Here we analyse a suite of high temporal resolution data sets generated from in-situ sensor networks within an uncertainty framework to provide robust estimates of nutrient fluxes from catchments impacted by intensive agricultural production practices. Previous research into nutrient flux estimation has focused on assessing the uncertainty associated with the use of different load models to interpolate or extrapolate nutrient data where daily or sub-daily discharge data are generally available and used with lower resolution nutrient concentrations. In such studies examples of datasets where paired discharge and nutrient concentrations are available are used as a benchmark of 'truth' against which the other data models or sample resolutions are tested. This work illustrates that even given high temporal-resolution paired datasets, where no load model is necessary, there will still be significant uncertainties and therefore demonstrates the importance of analysing such data within an uncertainty framework to obtain robust estimates of catchment nutrient loads. This study uses 15-minute resolution paired velocity and stage height data, in order to calculate river discharge, along with high temporal resolution (15 or 30 minute) nutrient data from four field sites collected as part of the Hampshire Avon Demonstration Test Catchment project

  14. Estimation of measuring uncertainty for optical micro-coordinate measuring machine

    Institute of Scientific and Technical Information of China (English)

    Kang Song (宋康); Zhuangde Jiang (蒋庄德)

    2004-01-01

    Based on the principles used to evaluate the measuring uncertainty of a traditional coordinate measuring machine (CMM), the measuring uncertainty of an optical micro-CMM has been analysed and evaluated. The optical micro-CMM is an integrated measuring system with optical, mechanical, and electronic components, each of which may influence its measuring uncertainty. If the influence of laser speckle is taken into account, the longitudinal measuring uncertainty is 2.0 μm; otherwise it is 0.88 μm. The estimate of the combined (synthetic) uncertainty for the optical micro-CMM is shown to be correct and reliable by measuring standard reference materials and simulating the influence of the laser beam diameter. Using Heisenberg's uncertainty principle and quantum mechanics theory, a method for improving the measuring accuracy of the optical micro-CMM by adding a diaphragm at the receiving end of the light path is proposed, and the measuring results are verified by experiments.

  15. Estimating uncertainties in the newly developed multi-source land snow data assimilation system

    Science.gov (United States)

    Zhang, Yong-Fei; Yang, Zong-Liang

    2016-07-01

    The snow simulations from the recently developed multivariate land snow data assimilation system (SNODAS) for the Northern Hemisphere are assessed with regard to uncertainties in atmospheric forcing, model structure, data assimilation technique, and satellite remote sensing products. The SNODAS consists of the Data Assimilation Research Testbed (DART) and the Community Land Model version 4 (CLM4). A series of experiments are conducted to estimate each of the above uncertainty sources. The experiments include several open-loop model cases and data assimilation cases that assimilate the snow cover fraction (SCF) data from the Moderate Resolution Imaging Spectroradiometer (MODIS) and the terrestrial water storage (TWS) from the Gravity Recovery and Climate Experiment (GRACE). The atmospheric forcing uncertainty in terms of precipitation and radiation is found to be the largest among the various uncertainty sources examined, especially over the Tibetan Plateau (TP) and most of the mid- and high-latitudes. Model structure and choice of data assimilation technique are also two major sources of uncertainty in SNODAS. The uncertainty of model structure is represented by two different parameterizations of SCF. The density-based SCF scheme (as used in CLM4) generally results in better snow simulations than does the stochastic SCF scheme (as in CLM4.5) within the data assimilation framework. The choice of TWS products retrieved from GRACE has the least impact on the snow data assimilation.

  16. Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals

    Energy Technology Data Exchange (ETDEWEB)

    Rypdal, Kristin; Baritz, Rainer

    2002-07-01

    Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol, it has been a concern that uptake of carbon in forest sinks can be difficult to verify. The reasons for the large uncertainties are high temporal and spatial variability and a lack of representative estimation parameters. Additional uncertainties will be a consequence of definitions made in the Kyoto Protocol reporting. In the Nordic countries, the national forest inventories will be very useful to estimate changes in carbon stocks. The main uncertainty lies in the conversion from changes in tradable timber to changes in total carbon biomass. The uncertainties in the emissions of the non-CO2 carbon from forest soils are particularly high. On the other hand, the removals reported under the Kyoto Protocol will only be a fraction of the total uptake and are not expected to constitute a high share of the total inventory. It is also expected that the Nordic countries will be able to implement a high-tier methodology. As a consequence, total uncertainties may not be extremely high. (Author)

  17. A new evaluation of the uncertainty associated with CDIAC estimates of fossil fuel carbon dioxide emission

    Directory of Open Access Journals (Sweden)

    Robert J. Andres

    2014-07-01

    Three uncertainty assessments associated with the global total of carbon dioxide emitted from fossil fuel use and cement production are presented. Each assessment has its own strengths and weaknesses and none give a full uncertainty assessment of the emission estimates. This approach grew out of the lack of independent measurements at the spatial and temporal scales of interest. Issues of dependent and independent data are considered as well as the temporal and spatial relationships of the data. The result is a multifaceted examination of the uncertainty associated with fossil fuel carbon dioxide emission estimates. The three assessments collectively give a range that spans from 1.0 to 13% (2σ). Greatly simplified, the assessments give a global fossil fuel carbon dioxide uncertainty value of 8.4% (2σ). In the largest context presented, the determination of fossil fuel emission uncertainty is important for a better understanding of the global carbon cycle and its implications for the physical, economic and political world.

  18. Comparison of two different methods for the uncertainty estimation of circle diameter measurements using an optical coordinate measuring machine

    DEFF Research Database (Denmark)

    Morace, Renata Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2005-01-01

    This paper deals with the uncertainty estimation of measurements performed on optical coordinate measuring machines (CMMs). Two different methods were used to assess the uncertainty of circle diameter measurements using an optical CMM: the sensitivity analysis developing an uncertainty budget and...

  19. Statistical unfolding of elementary particle spectra: Empirical Bayes estimation and bias-corrected uncertainty quantification

    CERN Document Server

    Kuusela, Mikael

    2015-01-01

    We consider the high energy physics unfolding problem where the goal is to estimate the spectrum of elementary particles given observations distorted by the limited resolution of a particle detector. This important statistical inverse problem arising in data analysis at the Large Hadron Collider at CERN consists in estimating the intensity function of an indirectly observed Poisson point process. Unfolding typically proceeds in two steps: one first produces a regularized point estimate of the unknown intensity and then uses the variability of this estimator to form frequentist confidence intervals that quantify the uncertainty of the solution. In this paper, we propose forming the point estimate using empirical Bayes estimation which enables a data-driven choice of the regularization strength through marginal maximum likelihood estimation. Observing that neither Bayesian credible intervals nor standard bootstrap confidence intervals succeed in achieving good frequentist coverage in this problem due to the inh...

  20. Uncertainties estimation in surveying measurands: application to lengths, perimeters and areas

    Science.gov (United States)

    Covián, E.; Puente, V.; Casero, M.

    2017-10-01

    The present paper develops a series of methods for the estimation of uncertainty when measuring certain measurands of interest in surveying practice, such as point elevations at a given planimetric position within a triangle mesh, 2D and 3D lengths (including perimeters of enclosures), 2D areas (horizontal surfaces) and 3D areas (natural surfaces). The basis for the proposed methodology is the law of propagation of variance–covariance, which, applied to the corresponding model for each measurand, allows the resulting uncertainty to be calculated from known measurement errors. The methods are tested first in a small example, with a limited number of measurement points, and then in two real-life measurements. In addition, the proposed methods have been incorporated into commercial software used in the field of surveying engineering and focused on the creation of digital terrain models. The aim of this development is, firstly, to comply with the guidelines of the BIPM (Bureau International des Poids et Mesures), the international reference agency in the field of metrology, in relation to the determination and expression of uncertainty; and secondly, to improve the quality of the measurement by indicating the uncertainty associated with a given level of confidence. The conceptual and mathematical developments for the uncertainty estimation in the aforementioned cases were conducted by researchers from the AssIST group at the University of Oviedo, eventually resulting in several different mathematical algorithms implemented in the form of MATLAB code. Based on these prototypes, technicians incorporated the referred functionality into commercial software, developed in C++. As a result of this collaboration, in early 2016 a new version of this commercial software was made available, which will be the first, as far as the authors are aware, that incorporates the possibility of estimating the uncertainty for a given level of confidence when computing the aforementioned surveying
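
    As a rough illustration of the variance–covariance propagation step described above, the sketch below computes a 2D length between two surveyed points and its standard and expanded uncertainty. All coordinate values, standard deviations and the assumption of a diagonal covariance matrix are invented for the example and are not taken from the paper or its MATLAB/C++ implementations.

```python
import numpy as np

# Two surveyed points (x1, y1, x2, y2) in metres, with a joint covariance
# matrix of their coordinates (assumed diagonal here for simplicity).
p = np.array([100.0, 200.0, 160.0, 280.0])
cov_p = np.diag([0.01**2, 0.01**2, 0.02**2, 0.02**2])  # coordinate std devs: 1 cm, 2 cm

x1, y1, x2, y2 = p
L = np.hypot(x2 - x1, y2 - y1)

# Jacobian of the 2D length with respect to the four coordinates.
J = np.array([-(x2 - x1) / L, -(y2 - y1) / L, (x2 - x1) / L, (y2 - y1) / L])

# Law of propagation of variance-covariance: var(L) = J * Sigma * J^T
var_L = J @ cov_p @ J
u_L = np.sqrt(var_L)   # standard uncertainty
U_L = 2.0 * u_L        # expanded uncertainty, coverage factor k = 2

print(f"length = {L:.3f} m, u = {u_L:.4f} m, U(k=2) = {U_L:.4f} m")
```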

  1. A comparative experimental evaluation of uncertainty estimation methods for two-component PIV

    Science.gov (United States)

    Boomsma, Aaron; Bhattacharya, Sayantan; Troolin, Dan; Pothos, Stamatios; Vlachos, Pavlos

    2016-09-01

    Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have recently been introduced, generating interest in their applicability and utility. The present study compares and contrasts current methods across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods: primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high- and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from

  2. Estimation of the uncertainty in a multiresidue method for the determination of pesticide residues in fruit and vegetables

    DEFF Research Database (Denmark)

    Christensen, Hanne Bjerre; Poulsen, Mette Erecius; Pedersen, Mikael

    2003-01-01

    The estimation of uncertainty of an analytical result has become important in analytical chemistry. It is especially difficult to determine uncertainties for multiresidue methods, e.g. for pesticides in fruit and vegetables, as the varieties of pesticide/commodity combinations are many. In the present study, recommendations from the International Organisation for Standardisation's (ISO) Guide to the Expression of Uncertainty and the EURACHEM/CITAC guide Quantifying Uncertainty in Analytical Measurements were followed to estimate the expanded uncertainties for 153 pesticides in fruit...

  3. Modelling and Measurement Uncertainty Estimation for Integrated AFM-CMM Instrument

    DEFF Research Database (Denmark)

    Hansen, Hans Nørgaard; Bariani, Paolo; De Chiffre, Leonardo

    2005-01-01

    This paper describes modelling of an integrated AFM - CMM instrument, its calibration, and estimation of measurement uncertainty. Positioning errors were seen to limit the instrument performance. Software for off-line stitching of single AFM scans was developed and verified, which allows compensation of such errors. A geometrical model of the instrument was produced, describing the interaction between AFM and CMM systematic errors. The model parameters were quantified through calibration, and the model used for establishing an optimised measurement procedure for surface mapping. A maximum uncertainty of 0.8% was achieved for the case of surface mapping of 1.2 × 1.2 mm² consisting of 49 single AFM scanned areas.

  4. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    Science.gov (United States)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how it relates to uncertainty representation, management and the role of prognostics in decision-making. A distinction between the two interpretations of the estimated remaining-useful-life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics is used to make critical decisions.

  5. Estimating the Contribution of Impurities to the Uncertainty of Metal Fixed-Point Temperatures

    Science.gov (United States)

    Hill, K. D.

    2014-04-01

    The estimation of the uncertainty component attributable to impurities remains a central and important topic of fixed-point research. Various methods are available for this estimation, depending on the extent of the available information. The sum of individual estimates method has considerable appeal where there is adequate knowledge of the sensitivity coefficients for each of the impurity elements and sufficiently low uncertainty regarding their concentrations. The overall maximum estimate (OME) forsakes the behavior of the individual elements by assuming that the cryoscopic constant adequately represents (or is an upper bound for) the sensitivity coefficients of the individual impurities. Validation of these methods using melting and/or freezing curves is recommended to provide confidence. Recent investigations of indium, tin, and zinc fixed points are reported. Glow discharge mass spectrometry was used to determine the impurity concentrations of the metals used to fill the cells. Melting curves were analyzed to derive an experimental overall impurity concentration (assuming that all impurities have a sensitivity coefficient equivalent to that of the cryoscopic constant). The two values (chemical and experimental) for the overall impurity concentrations were then compared. Based on the data obtained, the pragmatic approach of choosing the larger of the chemical and experimentally derived quantities as the best estimate of the influence of impurities on the temperature of the freezing point is suggested rather than relying solely on the chemical analysis and the OME method to derive the uncertainty component attributable to impurities.
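
    The following sketch contrasts the two estimation routes named in the abstract: a sum-of-individual-estimates (SIE) figure built from per-impurity sensitivity coefficients and an overall maximum estimate (OME) built from the cryoscopic constant. The impurity concentrations, slopes and cryoscopic constant are placeholder values chosen only to make the script run; they do not correspond to the indium, tin or zinc cells studied.

```python
import numpy as np

# Hypothetical impurity mole fractions from a chemical assay and hypothetical
# liquidus slopes (sensitivity coefficients, K per unit mole fraction).
conc = np.array([2e-8, 5e-8, 1e-8])
slope = np.array([-300.0, -150.0, -450.0])   # illustrative; magnitudes below 1/A

A = 0.002   # first cryoscopic constant of the host metal, K^-1 (placeholder)

# Sum of individual estimates (SIE): each impurity weighted by its own slope.
dT_sie = np.sum(conc * np.abs(slope))

# Overall maximum estimate (OME): every impurity assumed to act with the
# cryoscopic-constant sensitivity 1/A, giving an upper-bound style figure.
dT_ome = np.sum(conc) / A

print(f"SIE: {dT_sie * 1e6:.1f} µK, OME: {dT_ome * 1e6:.1f} µK")
```

    The pragmatic approach suggested in the abstract then amounts to adopting the larger of the chemically derived and the melting-curve-derived figures as the impurity uncertainty component.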

  6. Soil Organic Carbon Density in Hebei Province, China:Estimates and Uncertainty

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-Cun; SHI Xue-Zheng; YU Dong-Sheng; T. F. PAGELLA; SUN Wei-Xia; XU Xiang-Hua

    2005-01-01

    In order to improve the precision of soil organic carbon (SOC) estimates, the sources of uncertainty in soil organic carbon density (SOCD) estimates and SOC stocks were examined using 363 soil profiles in Hebei Province, China, with three methods: soil profile statistics (SPS), GIS-based soil type (GST), and kriging interpolation (KI). The GST method, utilizing both pedological professional knowledge and GIS technology, was considered the most accurate of the three estimations, with the SOCD estimates for SPS 10% lower and those for KI 10% higher. The SOCD range for GST was 84% wider than that for KI, as the smoothing effect of KI narrowed the SOCD range. Nevertheless, the coefficient of variation of SOCD with KI (41.7%) was less than with GST and SPS. When comparing the lower SOCD estimates of SPS against GST, the major source of uncertainty was the conflicting areas in the proportional relations. Meanwhile, the smaller number of soil profiles and the unavoidable smoothing effect were the sources of uncertainty for KI. Moreover, for local detailed variations of SOCD, GST was more advantageous than KI in reflecting the distribution pattern.

  7. Evaluating uncertainty estimates in hydrologic models: borrowing measures from the forecast verification community

    Directory of Open Access Journals (Sweden)

    K. J. Franz

    2011-03-01

    Full Text Available The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply the methods to evaluate and compare model ensembles produced from different parameter uncertainty estimation methods. The Generalized Likelihood Uncertainty Estimation (GLUE) method, a modified version of GLUE, and the Shuffled Complex Evolution Metropolis (SCEM) algorithm are used to generate model ensembles for the National Weather Service Sacramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the Southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of parameter uncertainty, one that is commensurate with the dimension of the ensembles themselves. Application of these methods requires no information in addition to what is already available as part of traditional model validation methodology and considers the entire ensemble or uncertainty range in the approach.
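
    One simple distribution-oriented verification measure of the kind borrowed from the forecast-verification community is the rank (Talagrand) histogram. The sketch below is a generic illustration with synthetic data, not the SAC-SMA ensembles or the exact metric set used in the paper.

```python
import numpy as np

def rank_histogram(ensemble, obs):
    """Rank (Talagrand) histogram: where each observation falls within the
    sorted ensemble. A flat histogram suggests a statistically reliable ensemble.

    ensemble: array of shape (n_times, n_members); obs: array of shape (n_times,)
    """
    n_times, n_members = ensemble.shape
    ranks = np.sum(ensemble < obs[:, None], axis=1)   # rank 0..n_members
    return np.bincount(ranks, minlength=n_members + 1)

# Synthetic example: a well-calibrated 20-member streamflow ensemble.
rng = np.random.default_rng(0)
truth = rng.gamma(shape=2.0, scale=10.0, size=500)
members = truth[:, None] + rng.normal(0.0, 2.0, size=(500, 20))
observed = truth + rng.normal(0.0, 2.0, size=500)

print(rank_histogram(members, observed))
```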

  8. On the evaluation of uncertainties for state estimation with the Kalman filter

    Science.gov (United States)

    Eichstädt, S.; Makarava, N.; Elster, C.

    2016-12-01

    The Kalman filter is an established tool for the analysis of dynamic systems with normally distributed noise, and it has been successfully applied in numerous areas. It provides sequentially calculated estimates of the system states along with a corresponding covariance matrix. For nonlinear systems, the extended Kalman filter is often used. This is derived from the Kalman filter by linearization around the current estimate. A key issue in metrology is the evaluation of the uncertainty associated with the Kalman filter state estimates. The ‘Guide to the Expression of Uncertainty in Measurement’ (GUM) and its supplements serve as the de facto standard for uncertainty evaluation in metrology. We explore the relationship between the covariance matrix produced by the Kalman filter and a GUM-compliant uncertainty analysis. In addition, the results of a Bayesian analysis are considered. For the case of linear systems with known system matrices, we show that all three approaches are compatible. When the system matrices are not precisely known, however, or when the system is nonlinear, this equivalence breaks down and different results can then be reached. For precisely known nonlinear systems, though, the result of the extended Kalman filter still corresponds to the linearized uncertainty propagation of the GUM. The extended Kalman filter can suffer from linearization and convergence errors. These disadvantages can be avoided to some extent by applying Monte Carlo procedures, and we propose such a method which is GUM-compliant and can also be applied online during the estimation. We illustrate all procedures in terms of a 2D dynamic system and compare the results with those obtained by particle filtering, which has been proposed for the approximate calculation of a Bayesian solution. Finally, we give some recommendations based on our findings.
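
    A minimal scalar example of the point made for linear systems with known matrices: the Kalman filter delivers a state estimate together with a variance that can be read as a GUM-type squared standard uncertainty. The system and noise values below are invented for illustration.

```python
import numpy as np

# Scalar linear system: x_k = a*x_{k-1} + w_k,  y_k = x_k + v_k
a, q, r = 0.95, 0.1**2, 0.5**2   # known process and measurement noise variances
rng = np.random.default_rng(1)

# Simulate data
n = 200
x = np.zeros(n); y = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k-1] + rng.normal(0, np.sqrt(q))
    y[k] = x[k] + rng.normal(0, np.sqrt(r))

# Kalman filter: sequential state estimate and its variance
xf, P = 0.0, 1.0
est, var = [], []
for k in range(n):
    xf, P = a * xf, a * a * P + q        # predict
    K = P / (P + r)                      # Kalman gain
    xf = xf + K * (y[k] - xf)            # update
    P = (1 - K) * P
    est.append(xf); var.append(P)

# For a linear system with known matrices, the filter variance P plays the
# role of the squared standard uncertainty of the state estimate.
print(f"final estimate {est[-1]:.3f} +/- {np.sqrt(var[-1]):.3f}")
```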

  9. Uncertainty estimation of the velocity model for the TrigNet GPS network

    Science.gov (United States)

    Hackl, Matthias; Malservisi, Rocco; Hugentobler, Urs; Wonnacott, Richard

    2010-05-01

    Satellite based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) and noise. It has been shown that models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectra analyses and maximum likelihood estimates is quite demanding and is usually not carried out for every site; instead, the uncertainties are scaled by latitude-dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. We therefore applied to the TrigNet time series a method similar to the Allan variance, which is commonly used in the estimation of clock uncertainties and is able to account for time-dependent probability density functions (colored noise). Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Comparisons with synthetic data show that the noise can be represented quite well by a power law model in combination with a seasonal signal in agreement with previous studies.
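
    A minimal sketch of an Allan-variance-style statistic applied to a synthetic daily position series (white noise plus a weak random walk), in the spirit of the clock-stability method mentioned above. The noise magnitudes and averaging times are illustrative, not TrigNet values.

```python
import numpy as np

def allan_variance(x, dt, taus):
    """Non-overlapping Allan variance of a time series x sampled every dt.

    For each averaging time tau, the series is cut into bins of length tau,
    bin means are formed, and the Allan variance is half the mean squared
    difference of successive bin means.
    """
    out = []
    for tau in taus:
        m = int(round(tau / dt))            # samples per bin
        n_bins = len(x) // m
        if n_bins < 2:
            out.append(np.nan)
            continue
        means = x[:n_bins * m].reshape(n_bins, m).mean(axis=1)
        out.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(out)

# Synthetic daily position series: white noise plus a small random walk.
rng = np.random.default_rng(2)
n = 3650
series = rng.normal(0, 2.0, n) + np.cumsum(rng.normal(0, 0.05, n))
taus = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])   # days
print(allan_variance(series, dt=1.0, taus=taus))
```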

  10. Uncertainty estimation of simulated water levels for the Mitch flood event in Tegucigalpa

    Science.gov (United States)

    Fuentes Andino, Diana Carolina; Halldin, Sven; Keith, Beven; Chong-Yu, Xu

    2013-04-01

    Hurricane Mitch in 1998 left a devastating flood in Tegucigalpa, the capital city of Honduras. Due to the extremely large magnitude of the Mitch flood, hydrometric measurements were not taken during the event. However, post-event indirect measurements of the discharge were obtained by the U.S. Geological Survey (USGS) and post-event surveyed high water marks were obtained by the Japan International Cooperation Agency (JICA). This work proposes a methodology to simulate the water level during the Mitch event when the available data are associated with large uncertainty. The results of the two-dimensional hydrodynamic model LISFLOOD-FP will be evaluated using the Generalized Likelihood Uncertainty Estimation (GLUE) framework. The main challenge in the proposed methodology is to formulate an approach to evaluate the model results when there are large uncertainties coming from both the model parameters and the evaluation data.
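
    A minimal sketch of a GLUE loop of the kind referred to above: sample parameter sets from a prior, score each simulation with an informal likelihood, retain the behavioural sets, and form likelihood-weighted prediction bounds. The simulate, observed and prior_sampler objects are placeholders to be supplied by the user, and the Nash-Sutcliffe-type score and 0.3 threshold are arbitrary choices, not those of the study.

```python
import numpy as np

def glue(simulate, observed, prior_sampler, n_samples=5000, threshold=0.3):
    """Minimal GLUE loop: sample parameters, score each simulation with an
    informal likelihood (a Nash-Sutcliffe-type efficiency here), keep the
    behavioural sets, and form likelihood-weighted 5-95% prediction bounds."""
    params, sims, scores = [], [], []
    for _ in range(n_samples):
        theta = prior_sampler()
        sim = simulate(theta)
        nse = 1.0 - np.sum((sim - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)
        params.append(theta); sims.append(sim); scores.append(nse)
    scores = np.array(scores); sims = np.array(sims)
    keep = scores > threshold                       # behavioural parameter sets
    w = scores[keep] / scores[keep].sum()           # normalised likelihood weights
    order = np.argsort(sims[keep], axis=0)          # sort ensemble at each output point
    lower, upper = [], []
    for j in range(sims.shape[1]):
        s = sims[keep][:, j][order[:, j]]
        cw = np.cumsum(w[order[:, j]])
        lower.append(np.interp(0.05, cw, s))
        upper.append(np.interp(0.95, cw, s))
    return np.array(lower), np.array(upper), params, keep
```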

  11. Decision Support and Robust Estimation of Uncertainty in Carbon Stocks and Fluxes

    Science.gov (United States)

    Hagen, S. C.; Braswell, B. H.; Saatchi, S. S.; Woodall, C. W.; Salas, W.; Ganguly, S.; Harris, N.

    2013-12-01

    The primary goal of our project (NASA Carbon Monitoring System - Saatchi PI) is to create detailed maps of forest carbon stocks and stock changes across the US to assist with national GHG inventories and thereby support decisions associated with land management. A comprehensive and accurate assessment of uncertainty in the forest carbon stock and stock change products is critical for understanding the quantitative limits of the products and for ensuring their usefulness to the broader community. However, a rigorous estimate of uncertainty at the pixel level is challenging to produce for complex products generated from multiple sources of input data and models. Here, we put forth a roadmap for assessing uncertainty associated with the forest carbon products provided as part of this project, which are generated by combining several sources of measurements and models. We also present preliminary results.

  12. Uncertainty estimation by Bayesian approach in thermochemical conversion of walnut hull and lignite coal blends.

    Science.gov (United States)

    Buyukada, Musa

    2017-05-01

    The main purpose of the present study was to incorporate the uncertainties in the thermal behavior of walnut hull (WH), lignite coal, and their various blends using a Bayesian approach. First of all, the thermal behavior of the related materials was investigated under different temperatures, blend ratios, and heating rates. Results of ultimate and proximate analyses showed the main steps of the oxidation mechanism of the (co-)combustion process. Thermal degradation started with the (hemi-)cellulosic compounds and finished with lignin. Finally, a partial sensitivity analysis based on a Bayesian approach (Markov chain Monte Carlo simulations) was applied to the best-fitting data-driven regression model. The main purpose of the uncertainty analysis was to point out the importance of the operating conditions (explanatory variables). The other important aspect of the present work is that it is the first performance-evaluation study of various uncertainty estimation techniques in the (co-)combustion literature.

  13. Uncertainty analysis for effluent trading planning using a Bayesian estimation-based simulation-optimization modeling approach.

    Science.gov (United States)

    Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J

    2017-03-06

    In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the Soil and Water Assessment Tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, which provides the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries and the associated system risk through incorporating the concept of possibility and necessity measures. The possibility and necessity measures are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results can not only facilitate identification of optimal effluent-trading schemes, but also provide insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk would affect decision alternatives on trading scheme as well as system benefit. Compared with conventional optimization methods, it was shown that BESMA is advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties existing in nutrient transport behaviors to improve the accuracy in water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision

  14. Group-contribution+ (GC+) based estimation of properties of pure components: Improved property estimation and uncertainty analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Ten Kate, Antoon

    2012-01-01

    The aim of this work is to present revised and improved model parameters for group-contribution+ (GC+) models (combined group-contribution (GC) method and atom connectivity index (CI) method) employed for the estimation of pure component properties, together with covariance matrices to quantify the uncertainties (e.g. prediction errors in terms of 95% confidence intervals) in the estimated property values. This feature allows one to evaluate the effects of these uncertainties on product-process design, simulation and optimization calculations, contributing to better-informed and more reliable engineering solutions.
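
    A toy illustration of how a covariance matrix of group contributions translates into a confidence interval on an estimated property, assuming a model that is linear in the contributions (many GC property models take this form after a suitable transformation of the property). The occurrence vector, contribution values and covariance matrix are invented; they are not the revised GC+ parameters reported in the work.

```python
import numpy as np

# Hypothetical occurrence vector n (how often each group appears in the molecule)
# and fitted group contributions c with their covariance matrix S.
n = np.array([2.0, 1.0, 1.0])            # e.g. CH3, CH2, OH counts
c = np.array([-42.5, -20.6, -158.0])     # group contributions (property units)
S = np.diag([1.2, 0.8, 4.5]) ** 2        # covariance of the fitted contributions

f0 = -5.0                                # adjustable constant (illustrative)
prop = f0 + n @ c                        # linear GC estimate of the property
var = n @ S @ n                          # propagated variance of the estimate
ci95 = 1.96 * np.sqrt(var)               # approximate 95% confidence interval

print(f"estimated property = {prop:.1f} ± {ci95:.1f} (95% CI)")
```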

  15. Geostatistical techniques applied to mapping limnological variables and quantify the uncertainty associated with estimates

    Directory of Open Access Journals (Sweden)

    Cristiano Cigagna

    2015-12-01

    Full Text Available Aim: This study aimed to map the concentrations of limnological variables in a reservoir employing semivariogram geostatistical techniques and Kriging estimates for unsampled locations, as well as the uncertainty calculation associated with the estimates. Methods: We established twenty-seven points distributed in a regular mesh for sampling. The concentrations of chlorophyll-a, total nitrogen and total phosphorus were then determined. Subsequently, a spatial variability analysis was performed, the semivariogram function was modeled for all variables and the variographic mathematical models were established. The main geostatistical estimation technique was ordinary Kriging. The work proceeded by estimating a dense grid of points for each variable, which formed the basis of the interpolated maps. Results: Through the semivariogram analysis it was possible to identify the random component as not significant for the estimation process of chlorophyll-a, and as significant for total nitrogen and total phosphorus. Geostatistical maps were produced from the Kriging for each variable and the respective standard deviations of the estimates calculated. These measurements allowed us to map the concentrations of limnological variables throughout the reservoir. The calculation of standard deviations provided the quality of the estimates and, consequently, the reliability of the final product. Conclusions: The use of the Kriging technique to estimate a dense mesh of points, together with the error dispersion (standard deviation of the estimate), made it possible to produce quality and reliable maps of the estimated variables. Concentrations of limnological variables in general were higher in the lacustrine zone and decreased towards the riverine zone. Chlorophyll-a and total nitrogen were correlated when comparing the grids generated by Kriging. Although the use of Kriging is more laborious compared to other interpolation methods, this
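
    A compact ordinary-kriging sketch showing how the same linear system yields both the estimate and the kriging (estimation) variance used to map the reliability of the interpolation. The variogram model, its parameters and the three sample points are hypothetical, not the reservoir data of the study.

```python
import numpy as np

def variogram(h, nugget=0.0, sill=1.0, rng_=500.0):
    """Exponential variogram model (parameters illustrative)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_))

def ordinary_kriging(xy, z, xy0, **vparams):
    """Ordinary kriging estimate and kriging variance at one location xy0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)   # pairwise distances
    A = np.ones((n + 1, n + 1)); A[-1, -1] = 0.0
    A[:n, :n] = variogram(d, **vparams)
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - xy0, axis=1), **vparams)
    w = np.linalg.solve(A, b)              # kriging weights plus Lagrange multiplier
    estimate = w[:n] @ z
    krig_var = w @ b                       # kriging (estimation) variance
    return estimate, krig_var

# Three hypothetical chlorophyll-a samples (x, y in metres; z in µg/L)
xy = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 300.0]])
z = np.array([12.0, 8.0, 15.0])
est, kv = ordinary_kriging(xy, z, np.array([150.0, 100.0]), sill=9.0, rng_=600.0)
print(f"estimate = {est:.2f} µg/L, kriging std = {np.sqrt(kv):.2f} µg/L")
```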

  16. Carrying-over toxicokinetic model uncertainty into cancer risk estimates. The TCDD example

    Energy Technology Data Exchange (ETDEWEB)

    Edler, L. [Division of Biostatistics, German Cancer Research Center, Heidelberg (Germany); Heinzl, H.; Mittlboeck, M. [Medical Univ. of Vienna (Austria). Dept. of Medical Computer Sciences

    2004-09-15

    Estimation of human cancer risks depends on the assessment of exposure to the investigated hazardous compound as well as on its toxicokinetics and toxicodynamics in the body. Modeling these processes constitutes a basic prerequisite for any quantitative risk assessment, including assessment of the uncertainty of risk estimates. Obviously, the modeling process itself is part of the risk assessment task, and it affects the development of valid risk estimates. Due to the wealth of information available on exposure and effects in humans and animals, 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) provides an excellent example to elaborate methods which allow a quantitative analysis of the uncertainty of TCDD risk estimates, and which show how toxicokinetic model uncertainty carries over to risk estimate uncertainty and uncertainty of the dose-response relationship. Cancer is usually considered a slowly evolving disease. An increase in TCDD dose may not result in an increase of the observable cancer response until some latency period has elapsed. This fact needs careful consideration when a dose-response relationship is to be established. Toxicokinetic models are capable of reconstructing TCDD exposure concentrations during a lifetime such that time-dependent TCDD dose metrics like the area under the concentration-time curve (AUC) can be constructed for each individual cohort member. Two potentially crucial model assumptions for estimating the exposure of a person are the assumption of lifetime constancy of total lipid volume (TLV) of the human body and the assumption of a simple linear kinetic of TCDD elimination. In 1995 a modified Michaelis-Menten kinetic (also known as Carrier kinetic) was suggested to link the TCDD elimination rate to the available TCDD amount in the body. That is, TCDD elimination would be faster, of nearly the same rate, or slower under this kinetic than under a simple linear kinetic when the individual would be highly, moderately, or slightly

  17. Estimating uncertainty of emissions inventories: What has been done/what needs to be done

    Energy Technology Data Exchange (ETDEWEB)

    Benkovitz, C.M.

    1998-10-01

    Developing scientifically defensible quantitative estimates of the uncertainty of atmospheric emissions inventories has been a gleam in researchers' eyes since atmospheric chemical transport and transformation models (CTMs) started to be used to study air pollution. Originally, the compilation of these inventories was done as part of the development and application of the models by researchers whose expertise usually did not include the art of emissions estimation. In general, the smaller the effort spent on compiling the inventories, the more effort could be placed on the model development, application and analysis. Yet model results are intimately tied to the accuracy of the emissions data; no model, however accurately the atmospheric physical and chemical processes are represented, will give a reliable representation of air concentrations if the emissions data are flawed. The author briefly summarizes some of the work done to develop quantitative estimates of the uncertainty of emissions inventories. The author then presents what is needed to develop scientifically defensible quantitative estimates of the uncertainties of emissions data.

  18. Estimated Uncertainties in the Idaho National Laboratory Matched-Index-of-Refraction Lower Plenum Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Donald M. McEligot; Hugh M. McIlroy, Jr.; Ryan C. Johnson

    2007-11-01

    The purpose of the fluid dynamics experiments in the MIR (Matched-Index-of-Refraction) flow system at Idaho National Laboratory (INL) is to develop benchmark databases for the assessment of Computational Fluid Dynamics (CFD) solutions of the momentum equations, scalar mixing, and turbulence models for typical Very High Temperature Reactor (VHTR) plenum geometries in the limiting case of negligible buoyancy and constant fluid properties. The experiments use optical techniques, primarily particle image velocimetry (PIV), in the INL MIR flow system. The benefit of the MIR technique is that it permits optical measurements, to determine flow characteristics in passages and around objects, to be obtained without locating a disturbing transducer in the flow field and without distortion of the optical paths. The objective of the present report is to develop understanding of the magnitudes of experimental uncertainties in the results to be obtained in such experiments. Unheated MIR experiments are first steps when the geometry is complicated. One does not want to use a computational technique that will not even handle constant properties properly. This report addresses the general background, requirements for benchmark databases, estimation of experimental uncertainties in mean velocities and turbulence quantities, the MIR experiment, PIV uncertainties, positioning uncertainties, and other contributing measurement uncertainties.

  19. Uncertainty estimation of water levels for the Mitch flood event in Tegucigalpa

    Science.gov (United States)

    Fuentes Andino, D. C.; Halldin, S.; Lundin, L.; Xu, C.

    2012-12-01

    Hurricane Mitch in 1998 left a devastating flood in Tegucigalpa, the capital city of Honduras. Simulation of elevated water surfaces provides a good way to understand the hydraulic mechanism of large flood events. In this study the one-dimensional HEC-RAS model for steady flow conditions and the two-dimensional LISFLOOD-FP model were used to estimate the water level for the Mitch event in the river reaches at Tegucigalpa. Parameter uncertainty of the models was investigated using the generalized likelihood uncertainty estimation (GLUE) framework. Because of the extremely large magnitude of the Mitch flood, no hydrometric measurements were taken during the event. However, post-event indirect measurements of discharge and observed water levels were obtained in previous works by JICA and USGS. To overcome the problem of lacking direct hydrometric measurement data, uncertainty in the discharge was estimated. Both models could define the channel roughness value well, though more dispersion resulted for the floodplain value. Analysis of the data interaction showed that there was a tradeoff between discharge at the outlet and floodplain roughness for the 1D model. The estimated discharge range at the outlet of the study area encompassed the value indirectly estimated by JICA; however, the indirect method used by the USGS overestimated the value. If behavioral parameter sets can reproduce water surface levels well for past events such as Mitch, more reliable predictions for future events can be expected. The results acquired in this research will provide guidelines to deal with the problem of modeling past floods when no direct data were measured during the event, and to predict future large events taking uncertainty into account. The obtained range of the uncertain flood extent will be a useful outcome for decision makers.

  20. Bayesian dose-response analysis for epidemiological studies with complex uncertainty in dose estimation.

    Science.gov (United States)

    Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L

    2016-02-10

    Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure.
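
    The sketch below illustrates the two-dimensional Monte Carlo idea mentioned in the abstract: each realization combines one shared error draw (common to all cohort members) with independent unshared draws per member, producing many plausible dose vectors rather than a single best estimate per person. The central doses and geometric standard deviations are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_realizations = 2376, 5000

# Hypothetical central dose estimates for the cohort (Gy); illustrative only.
central_dose = rng.lognormal(mean=np.log(0.1), sigma=1.0, size=n_subjects)

# Two-dimensional Monte Carlo: each realization applies one shared multiplicative
# error (identical for all subjects, e.g. a common model parameter) and an
# unshared error drawn independently per subject (e.g. individual intake).
shared_gsd, unshared_gsd = 1.5, 2.0
dose_vectors = np.empty((n_realizations, n_subjects))
for r in range(n_realizations):
    shared = rng.lognormal(0.0, np.log(shared_gsd))                  # one draw per realization
    unshared = rng.lognormal(0.0, np.log(unshared_gsd), n_subjects)  # one draw per subject
    dose_vectors[r] = central_dose * shared * unshared

# Each row is one plausible dose vector; the Bayesian model-averaging step in
# the abstract would fit the dose-response to many such vectors rather than to
# a single best-estimate dose per person.
print(dose_vectors.shape)
```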

  1. ESTIMATION OF UNCERTAINTY AND VALIDATION OF ANALYTICAL PROCEDURES AS A QUALITY CONTROL TOOL: THE EVALUATION OF UNCERTAINTY FOR AMINO ACID ANALYSIS WITH ION-EXCHANGE CHROMATOGRAPHY – CASE STUDY

    Directory of Open Access Journals (Sweden)

    Barbara Mickowska

    2013-02-01

    Full Text Available The aim of this study was to assess the importance of validation and uncertainty estimation related to the results of amino acid analysis using the ion-exchange chromatography with post-column derivatization technique. The method was validated and the components of standard uncertainty were identified and quantified to recognize the major contributions to the uncertainty of analysis. The estimated relative expanded uncertainty (k=2, P=95%) varied in the range from 9.03% to 12.68%. Quantification of the uncertainty components indicates that the contribution of the calibration concentration uncertainty is the largest and it plays the most important role in the overall uncertainty in amino acid analysis. It is followed by the uncertainty of the area of the chromatographic peaks and the sample weighing procedure. The uncertainty of the sample volume and calibration peak area may be negligible. The comparison of CV% with the estimated relative uncertainty indicates that interpretation of research results can be misleading without uncertainty estimation.
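
    A small numerical sketch of how such an uncertainty budget is typically combined: relative standard uncertainties of the components are added in quadrature and multiplied by the coverage factor k = 2. The component values are hypothetical stand-ins, not the amino-acid budget of the paper, but they are chosen so the calibration term dominates, as reported.

```python
import numpy as np

# Hypothetical relative standard uncertainties (as fractions) for one analyte,
# mirroring the components named in the abstract; numbers are illustrative.
components = {
    "calibration standard concentration": 0.040,
    "chromatographic peak area":          0.025,
    "sample weighing":                    0.010,
    "sample volume":                      0.003,
    "calibration peak area":              0.002,
}

# Combined relative standard uncertainty: root sum of squares of the components.
u_rel = np.sqrt(sum(u**2 for u in components.values()))
U_rel = 2.0 * u_rel   # expanded uncertainty, k = 2 (approx. 95% coverage)

for name, u in sorted(components.items(), key=lambda kv: -kv[1]):
    print(f"{name:38s} contributes {100 * u**2 / u_rel**2:5.1f}% of the variance")
print(f"relative expanded uncertainty U (k=2) = {100 * U_rel:.1f}%")
```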

  2. A Monte Carlo approach for estimating measurement uncertainty using standard spreadsheet software.

    Science.gov (United States)

    Chew, Gina; Walczyk, Thomas

    2012-03-01

    Despite the importance of stating the measurement uncertainty in chemical analysis, concepts are still not widely applied by the broader scientific community. The Guide to the expression of uncertainty in measurement approves the use of both the partial derivative approach and the Monte Carlo approach. There are two limitations to the partial derivative approach. Firstly, it involves the computation of first-order derivatives of each component of the output quantity. This requires some mathematical skills and can be tedious if the mathematical model is complex. Secondly, it is not able to predict the probability distribution of the output quantity accurately if the input quantities are not normally distributed. Knowledge of the probability distribution is essential to determine the coverage interval. The Monte Carlo approach performs random sampling from probability distributions of the input quantities; hence, there is no need to compute first-order derivatives. In addition, it gives the probability density function of the output quantity as the end result, from which the coverage interval can be determined. Here we demonstrate how the Monte Carlo approach can be easily implemented to estimate measurement uncertainty using a standard spreadsheet software program such as Microsoft Excel. It is our aim to provide the analytical community with a tool to estimate measurement uncertainty using software that is already widely available and that is so simple to apply that it can even be used by students with basic computer skills and minimal mathematical knowledge.
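
    The paper implements the Monte Carlo approach in a spreadsheet; the same idea in Python (shown here only as an illustration, with an invented measurement model and input distributions) makes the three steps explicit: draw the inputs, evaluate the model, and read the standard uncertainty and coverage interval off the resulting output distribution.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000   # number of Monte Carlo trials

# Measurement model y = (m_sample / m_total) * c_stock with input distributions
# (all values hypothetical; a spreadsheet version fills columns with the same
# random draws and evaluates the model row by row).
m_sample = rng.normal(0.5000, 0.0005, n)    # g, normally distributed
m_total = rng.normal(10.000, 0.010, n)      # g
c_stock = rng.uniform(99.0, 101.0, n)       # mg/kg, rectangular distribution

y = (m_sample / m_total) * c_stock

best = np.mean(y)
u = np.std(y, ddof=1)                       # standard uncertainty
lo, hi = np.percentile(y, [2.5, 97.5])      # 95% coverage interval from the PDF

print(f"y = {best:.4f} mg/kg, u = {u:.4f}, 95% interval = [{lo:.4f}, {hi:.4f}]")
```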

  3. Nonlinear approximation with dictionaries. II. Inverse estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    In this paper we study inverse estimates of the Bernstein type for nonlinear approximation with structured redundant dictionaries in a Banach space. The main results are for separated decomposable dictionaries in Hilbert spaces, which generalize the notion of joint block-diagonal mutually...

  4. Nonlinear approximation with dictionaries. II. Inverse Estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2006-01-01

    In this paper, which is the sequel to [16], we study inverse estimates of the Bernstein type for nonlinear approximation with structured redundant dictionaries in a Banach space. The main results are for blockwise incoherent dictionaries in Hilbert spaces, which generalize the notion of joint block-diagonal...

  5. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    Science.gov (United States)

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matthew; Thurber, Clifford H.; Tung, Sui

    2016-04-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.

  6. Cramér-Rao analysis of orientation estimation: influence of target model uncertainties

    Science.gov (United States)

    Gerwe, David R.; Hill, Jennifer L.; Idell, Paul S.

    2003-05-01

    We explore the use of Cramér-Rao bound calculations for predicting fundamental limits on the accuracy with which target characteristics can be determined by using imaging sensors. In particular, estimation of satellite orientation from high-resolution sensors is examined. The role that such bounds can play in the analysis supporting sensor/experiment design, operation, and upgrade is discussed. Emphasis is placed on the importance of including all relevant target/sensor uncertainties in the analysis. Computer simulations are performed that illustrate that uncertainties in target features (e.g., shape, reflectance, and relative orientation) have a significant impact on the bounds and provide considerable insight as to how details of the three-dimensional target structure may influence the estimation process. The simulations also address the impact that a priori information has on the bounds.
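
    The Cramér-Rao machinery itself is compact: invert the Fisher information of the data model to lower-bound the variance of any unbiased estimator. The sketch below applies it to a deliberately simple one-parameter signal-amplitude problem, not to the satellite-orientation imaging model of the paper.

```python
import numpy as np

# Toy illustration of the Cramér-Rao recipe: lower-bound the variance of any
# unbiased estimator by the inverse Fisher information. The parameter is the
# amplitude A of a known template s observed in Gaussian noise, a drastically
# simplified stand-in for the imaging problem in the abstract.
rng = np.random.default_rng(5)
sigma = 0.3
s = np.sin(np.linspace(0, 2 * np.pi, 100))   # known signal template
A_true = 1.7

# Fisher information for y = A*s + noise:  I(A) = (s.s) / sigma^2
fisher = s @ s / sigma**2
crb = 1.0 / fisher

# Empirical check: the least-squares estimate A_hat = (s.y)/(s.s) attains the bound.
A_hat = np.array([(s @ (A_true * s + rng.normal(0, sigma, s.size))) / (s @ s)
                  for _ in range(20_000)])
print(f"CRB = {crb:.2e}, empirical var(A_hat) = {A_hat.var(ddof=1):.2e}")
```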

  7. GPz: Non-stationary sparse Gaussian processes for heteroscedastic uncertainty estimation in photometric redshift

    CERN Document Server

    Almosallam, Ibrahim A; Roberts, Stephen J

    2016-01-01

    The next generation of cosmology experiments will be required to use photometric redshifts rather than spectroscopic redshifts. Obtaining accurate and well-characterized photometric redshift distributions is therefore critical for Euclid, the Large Synoptic Survey Telescope and the Square Kilometre Array. However, determining accurate variance predictions alongside single point estimates of photometric redshifts is crucial, as they can be used to optimize the sample of galaxies for the specific experiment (e.g. weak lensing, baryon acoustic oscillations, supernovae), trading off between completeness and reliability in the galaxy sample. The various sources of uncertainty (and noise) in measurements of the photometry and redshifts put a lower bound on the accuracy that any model can hope to achieve. The intrinsic uncertainty associated with estimates is often non-uniform and input-dependent. However, existing approaches are susceptible to outliers and do not take into account variance induced by non-uniform da...

  8. Cascading uncertainties in flood inundation models to uncertain estimates of damage and loss

    Science.gov (United States)

    Fewtrell, Timothy; Michel, Gero; Ntelekos, Alexandros; Bates, Paul

    2010-05-01

    The complexity of flood processes, particularly in urban environments, and the difficulties of collecting data during flood events, presents significant and particular challenges to modellers, especially when considering large geographic areas. As a result, the modelling process incorporates a number of areas of uncertainty during model conceptualisation, construction and evaluation. There is a wealth of literature detailing the relative magnitudes of uncertainties in numerical flood input data (e.g. boundary conditions, model resolution and friction specification) for a wide variety of flood inundation scenarios (e.g. fluvial inundation and surface water flooding). Indeed, recent UK funded projects (e.g. FREE) have explicitly examined the effect of cascading uncertainties in ensembles of GCM output through rainfall-runoff models to hydraulic flood inundation models. However, there has been little work examining the effect of cascading uncertainties in flood hazard ensembles to estimates of damage and loss, the quantity of interest when assessing flood risk. Furthermore, vulnerability is possibly the largest area of uncertainty for (re-)insurers as in-depth and reliable knowledge of portfolios is difficult to obtain. Insurance industry CAT models attempt to represent a credible range of flood events over large geographic areas and as such examining all sources of uncertainty is not computationally tractable. However, the insurance industry is also marked by a trend towards an increasing need to understand the variability in flood loss estimates derived from these CAT models. In order to assess the relative importance of uncertainties in flood inundation models and depth/damage curves, hypothetical 1-in-100 and 1-in-200 year return period flood events are propagated through the Greenwich embayment in London, UK. Errors resulting from topographic smoothing, friction specification and inflow boundary conditions are cascaded to form an ensemble of flood levels and

  9. Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J. [Pennsylvania U.; Guy, J. [LBL, Berkeley; Kessler, R. [Chicago U., KICP; Astier, P. [Paris U., VI-VII; Marriner, J. [Fermilab; Betoule, M. [Paris U., VI-VII; Sako, M. [Pennsylvania U.; El-Hage, P. [Paris U., VI-VII; Biswas, R. [Argonne; Pain, R. [Paris U., VI-VII; Kuhlmann, S. [Argonne; Regnault, N. [Paris U., VI-VII; Frieman, J. A. [Fermilab; Schneider, D. P. [Penn State U.

    2014-08-29

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w (input) – w (recovered)) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  10. Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J.; Sako, M. [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N. [LPNHE, CNRS/IN2P3, Université Pierre et Marie Curie Paris 6, Université Denis Diderot Paris 7, 4 place Jussieu, F-75252 Paris Cedex 05 (France); Kessler, R.; Frieman, J. A. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Marriner, J. [Center for Particle Astrophysics, Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Biswas, R.; Kuhlmann, S. [Argonne National Laboratory, 9700 South Cass Avenue, Lemont, IL 60439 (United States); Schneider, D. P., E-mail: kessler@kicp.chicago.edu [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)

    2014-09-20

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ∼120 low-redshift (z < 0.1) SNe Ia, ∼255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ∼290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w (input) – w (recovered)) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  11. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    Science.gov (United States)

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.

  12. Estimation of Nonlinear Functions of State Vector for Linear Systems with Time-Delays and Uncertainties

    Directory of Open Access Journals (Sweden)

    Il Young Song

    2015-01-01

    Full Text Available This paper focuses on estimation of a nonlinear function of state vector (NFS) in discrete-time linear systems with time-delays and model uncertainties. The NFS represents a multivariate nonlinear function of state variables, which can indicate useful information of a target system for control. The optimal nonlinear estimator of an NFS (in the mean square sense) represents a function of the receding horizon estimate and its error covariance. The proposed receding horizon filter represents the standard Kalman filter with time-delays and special initial horizon conditions described by the Lyapunov-like equations. In the general case, to calculate an optimal estimator of an NFS we propose using the unscented transformation. The important class of polynomial NFS is considered in detail. In the case of a polynomial NFS an optimal estimator has a closed-form computational procedure. The subsequent application of the proposed receding horizon filter and nonlinear estimator to a linear stochastic system with time-delays and uncertainties demonstrates their effectiveness.

  13. Characterization of the uncertainty of divergence time estimation under relaxed molecular clock models using multiple loci.

    Science.gov (United States)

    Zhu, Tianqi; Dos Reis, Mario; Yang, Ziheng

    2015-03-01

    Genetic sequence data provide information about the distances between species or branch lengths in a phylogeny, but not about the absolute divergence times or the evolutionary rates directly. Bayesian methods for dating species divergences estimate times and rates by assigning priors on them. In particular, the prior on times (node ages on the phylogeny) incorporates information in the fossil record to calibrate the molecular tree. Because times and rates are confounded, our posterior time estimates will not approach point values even if an infinite amount of sequence data are used in the analysis. In a previous study we developed a finite-sites theory to characterize the uncertainty in Bayesian divergence time estimation in analysis of large but finite sequence data sets under a strict molecular clock. As most modern clock dating analyses use more than one locus and are conducted under relaxed clock models, here we extend the theory to the case of relaxed clock analysis of data from multiple loci (site partitions). Uncertainty in posterior time estimates is partitioned into three sources: sampling errors in the estimates of branch lengths in the tree for each locus due to limited sequence length, variation of substitution rates among lineages and among loci, and uncertainty in fossil calibrations. Using a simple but analogous estimation problem involving the multivariate normal distribution, we predict that as the number of loci (L) goes to infinity, the variance in posterior time estimates decreases and approaches the infinite-data limit at the rate of 1/L, and the limit is independent of the number of sites in the sequence alignment. We then confirmed the predictions by using computer simulation on phylogenies of two or three species, and by analyzing a real genomic data set for six primate species. Our results suggest that with the fossil calibrations fixed, analyzing multiple loci or site partitions is the most effective way

  14. Influence of seismicity parameter uncertainty on seismic hazard estimation of cities and towns

    Institute of Scientific and Technical Information of China (English)

    黄玮琼; 吴宣

    2003-01-01

    The influence of the uncertainty of seismicity parameters, caused by non-uniqueness in selecting statistical time ranges, on seismic hazard estimation is studied in particular for 310 cities and towns across the whole nation. Regional sketch maps of the average variation of intensity and of the average relative variation of peak acceleration for different probabilities of exceedance in 50 years are drawn for the Chinese mainland.

  15. Uncertainty-based Estimation of the Secure Range for ISO New England Dynamic Interchange Adjustment

    Energy Technology Data Exchange (ETDEWEB)

    Etingov, Pavel V.; Makarov, Yuri V.; Wu, Di; Hou, Zhangshuan; Sun, Yannan; Maslennikov, S.; Luo, Xiaochuan; Zheng, T.; George, S.; Knowland, T.; Litvinov, E.; Weaver, S.; Sanchez, E.

    2014-04-14

    The paper proposes an approach to estimate the secure range for dynamic interchange adjustment, which assists system operators in scheduling the interchange with neighboring control areas. Uncertainties associated with various sources are incorporated. The proposed method is implemented in the dynamic interchange adjustment (DINA) tool developed by Pacific Northwest National Laboratory (PNNL) for ISO New England. Simulation results are used to validate the effectiveness of the proposed method.

  16. Estimation of inferential uncertainty in assessing expert segmentation performance from STAPLE.

    Science.gov (United States)

    Commowick, Olivier; Warfield, Simon K

    2010-03-01

    The evaluation of the quality of segmentations of an image, and the assessment of intra- and inter-expert variability in segmentation performance, has long been recognized as a difficult task. For a segmentation validation task, it may be effective to compare the results of an automatic segmentation algorithm to multiple expert segmentations. Recently an expectation-maximization (EM) algorithm for simultaneous truth and performance level estimation (STAPLE) was developed to this end to compute both an estimate of the reference standard segmentation and performance parameters from a set of segmentations of an image. The performance is characterized by the rate of detection of each segmentation label by each expert in comparison to the estimated reference standard. This previous work provides estimates of performance parameters, but does not provide any information regarding the uncertainty of the estimated values. An estimate of this inferential uncertainty, if available, would allow the estimation of confidence intervals for the values of the parameters. This would facilitate the interpretation of the performance of segmentation generators and help determine if sufficient data size and number of segmentations have been obtained to precisely characterize the performance parameters. We present a new algorithm to estimate the inferential uncertainty of the performance parameters for binary and multi-category segmentations. It is derived for the special case of the STAPLE algorithm based on established theory for general purpose covariance matrix estimation for EM algorithms. The bounds on the performance parameters are estimated by the computation of the observed information matrix. We use this algorithm to study the bounds on performance parameter estimates from simulated images with specified performance parameters, and from interactive segmentations of neonatal brain MRIs. We demonstrate that confidence intervals for expert segmentation performance parameters can be

  17. Diversity dynamics in Nymphalidae butterflies: effect of phylogenetic uncertainty on diversification rate shift estimates.

    Science.gov (United States)

    Peña, Carlos; Espeland, Marianne

    2015-01-01

    The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.

  18. Diversity Dynamics in Nymphalidae Butterflies: Effect of Phylogenetic Uncertainty on Diversification Rate Shift Estimates

    Science.gov (United States)

    Peña, Carlos; Espeland, Marianne

    2015-01-01

    The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution. PMID:25830910

  19. Diversity dynamics in Nymphalidae butterflies: effect of phylogenetic uncertainty on diversification rate shift estimates.

    Directory of Open Access Journals (Sweden)

    Carlos Peña

    Full Text Available The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.

  20. Lidar-derived estimate and uncertainty of carbon sink in successional phases of woody encroachment

    Science.gov (United States)

    Sankey, Temuulen; Shrestha, Rupesh; Sankey, Joel B.; Hardgree, Stuart; Strand, Eva

    2013-01-01

    Woody encroachment is a globally occurring phenomenon that contributes to the global carbon sink. The magnitude of this contribution needs to be estimated at regional and local scales to address uncertainties present in the global- and continental-scale estimates, and guide regional policy and management in balancing restoration activities, including removal of woody plants, with greenhouse gas mitigation goals. The objective of this study was to estimate carbon stored in various successional phases of woody encroachment. Using lidar measurements of individual trees, we present high-resolution estimates of aboveground carbon storage in juniper woodlands. Segmentation analysis of lidar point cloud data identified a total of 60,628 juniper tree crowns across four watersheds. Tree heights, canopy cover, and density derived from lidar were strongly correlated with field measurements of 2613 juniper stems measured in 85 plots (30 × 30 m). Aboveground total biomass of individual trees was estimated using a regression model with lidar-derived height and crown area as predictors (Adj. R2 = 0.76, p 2. Uncertainty in carbon storage estimates was examined with a Monte Carlo approach that addressed major error sources. Ranges predicted with uncertainty analysis in the mean, individual tree, aboveground woody C, and associated standard deviation were 0.35 – 143.6 kg and 0.5 – 1.25 kg, respectively. Later successional phases of woody encroachment had, on average, twice the aboveground carbon relative to earlier phases. Woody encroachment might be more successfully managed and balanced with carbon storage goals by identifying priority areas in earlier phases of encroachment where intensive treatments are most effective.
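
    A Monte Carlo propagation of allometric-model error of the kind described above can be sketched as follows. This is a simplified illustration under assumed coefficients, error magnitudes and lidar-derived predictors; it is not the study's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lidar-derived predictors for a handful of trees
height = np.array([4.2, 6.8, 3.1, 9.5, 7.3])          # m
crown_area = np.array([5.0, 11.2, 3.4, 18.9, 12.6])   # m^2

# Hypothetical log-linear allometric coefficients, their standard errors,
# and the residual error of the regression on the log scale
b0, b1, b2 = -2.1, 1.4, 0.8
se = np.array([0.15, 0.10, 0.07])
resid_sd = 0.25

n_sim = 5000
totals = np.empty(n_sim)
for i in range(n_sim):
    p0, p1, p2 = rng.normal([b0, b1, b2], se)      # parameter uncertainty
    eps = rng.normal(0.0, resid_sd, height.size)   # per-tree residual error
    biomass = np.exp(p0 + p1 * np.log(height) + p2 * np.log(crown_area) + eps)
    totals[i] = 0.5 * biomass.sum()                # carbon ~ 50 % of dry biomass

print(f"stand carbon: {totals.mean():.1f} kg "
      f"(95% interval {np.percentile(totals, [2.5, 97.5])})")
```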

  1. Reducing Uncertainty of Monte Carlo Estimated Fatigue Damage in Offshore Wind Turbines Using FORM

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher; H. Horn, Jan-Tore

    2016-01-01

    Uncertainties related to fatigue damage estimation of non-linear systems are highly dependent on the tail behaviour and extreme values of the stress range distribution. By using a combination of the First Order Reliability Method (FORM) and Monte Carlo simulations (MCS), the accuracy of the fatigue...... estimations may be improved for the same computational efforts. The method is applied to a bottom-fixed, monopile-supported large offshore wind turbine, which is a non-linear and dynamically sensitive system. Different curve fitting techniques to the fatigue damage distribution have been used depending......

  2. The impact of a and b value uncertainty on loss estimation in the reinsurance industry

    Directory of Open Access Journals (Sweden)

    R. Streit

    2000-06-01

    Full Text Available In the reinsurance industry different probabilistic models are currently used for seismic risk analysis. A credible loss estimation of the insured values depends on seismic hazard analysis and on the vulnerability functions of the given structures. Besides attenuation and local soil amplification, the earthquake occurrence model (often represented by the Gutenberg and Richter relation) is a key element in the analysis. However, earthquake catalogues are usually incomplete, the time of observation is too short and the data themselves contain errors. Therefore, a and b values can only be estimated with uncertainties. The knowledge of their variation provides a valuable input for earthquake risk analysis, because they allow the probability distribution of expected losses (expressed by Average Annual Loss (AAL)) to be modelled. The variations of a and b have a direct effect on the estimated exceeding probability and consequently on the calculated loss level. This effect is best illustrated by exceeding probability versus loss level and AAL versus magnitude graphs. The sensitivity of average annual losses due to different a to b ratios and magnitudes is obvious. The estimation of the variation of a and b and the quantification of the sensitivity of calculated losses are fundamental for optimal earthquake risk management. Ignoring these uncertainties means that risk management decisions neglect possible variations of the earthquake loss estimations.
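
    The effect of a- and b-value uncertainty on exceedance probabilities described above can be mimicked with a small Monte Carlo experiment. The Gutenberg-Richter parameters and their standard deviations below are hypothetical, and the sketch ignores magnitude truncation, attenuation and vulnerability.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Gutenberg-Richter parameters with estimation uncertainty (1-sigma)
a_mean, a_sd = 4.2, 0.15
b_mean, b_sd = 1.0, 0.08

m = 6.5        # magnitude of interest
n = 10_000     # Monte Carlo samples

a = rng.normal(a_mean, a_sd, n)
b = rng.normal(b_mean, b_sd, n)

rate = 10.0 ** (a - b * m)          # annual rate of events with magnitude >= m
p50 = 1.0 - np.exp(-rate * 50.0)    # Poissonian exceedance probability in 50 years

print(f"median 50-yr exceedance probability: {np.median(p50):.3f}")
print(f"68% interval: {np.percentile(p50, [16, 84])}")
```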

  3. Uncertainty Estimation of Metals and Semimetals Determination in Wastewater by Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES)

    Science.gov (United States)

    Marques, J. R.; Villa-Soares, S. M.; Stellato, T. B.; Silva, T. B. S. C.; Faustino, M. G.; Monteiro, L. R.; Pires, M. A. F.; Cotrim, M. E. B.

    2016-07-01

    The measurement uncertainty is a parameter that represents the dispersion of the results obtained by a method of analysis. The estimation of measurement uncertainty in the determination of metals and semimetals is important to compare the results with limits defined by environmental legislation and conclude if the analytes are meeting the requirements. Therefore, the aim of this paper is to present all the steps followed to estimate the uncertainty of the determination of the amount of metals and semimetals in wastewater by Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES). The measurement uncertainty obtained was between 4.6 and 12.2% in the concentration range of mg.L-1.
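
    A common way to arrive at a combined measurement uncertainty of the kind reported above is the GUM-style root-sum-of-squares combination of relative standard uncertainty components, followed by expansion with a coverage factor. The components and values in this sketch are hypothetical, not those of the paper.

```python
import math

# Hypothetical relative standard uncertainty components (fractions) for one analyte
components = {
    "calibration_curve": 0.020,
    "reference_standard": 0.010,
    "sample_volume": 0.005,
    "repeatability": 0.015,
}

# Root-sum-of-squares combination, then expansion with coverage factor k = 2 (~95 %)
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
U_expanded = 2.0 * u_combined

print(f"combined relative uncertainty: {100 * u_combined:.1f} %")
print(f"expanded relative uncertainty: {100 * U_expanded:.1f} %")
```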

  4. Quantification of uncertainty in aboveground biomass estimates derived from small-footprint LiDAR data

    Science.gov (United States)

    Xu, Q.; Greenberg, J. A.; Li, B.; Ramirez, C.; Balamuta, J. J.; Evans, K.; Man, A.; Xu, Z.

    2015-12-01

    A promising approach to determining aboveground biomass (AGB) in forests comes through the use of individual tree crown delineation (ITCD) techniques applied to small-footprint LiDAR data. These techniques, when combined with allometric equations, can produce per-tree estimates of AGB. At this scale, AGB estimates can be quantified in a manner similar to how ground-based forest inventories are produced. However, these approaches have significant uncertainties that are rarely described in full. Allometric equations are often based on species-specific diameter-at-breast height (DBH) relationships, but neither DBH nor species can be reliably determined using remote sensing analysis. Furthermore, many approaches to ITCD only delineate trees appearing in the upper canopy so subcanopy trees are often missing from the inventories. In this research, we performed a propagation-of-error analysis to determine the spatially varying uncertainties in AGB estimates at the individual plant and stand level for a large collection of LiDAR acquisitions covering a large portion of California. Furthermore, we determined the relative contribution of various aspects of the analysis towards the uncertainty, including errors in the ITCD results, the allometric equations, the taxonomic designation, and the local biophysical environment. Watershed segmentation was used to obtain the preliminary crown segments. Lidar points within the preliminary segments were extracted to form profiling data of the segments, and then mode detection algorithms were applied to identify the tree number and tree heights within each segment. As part of this analysis, we derived novel "remote sensing aware" allometric equations and their uncertainties based on three-dimensional morphological metrics that can be accurately derived from LiDAR data.

  5. Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach

    KAUST Repository

    Ballal, Tarig

    2014-01-01

    This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.

  6. Using the Community Land Model to Assess Uncertainty in Basin Scale GRACE-Based Groundwater Estimates

    Science.gov (United States)

    Swenson, S. C.; Lawrence, D. M.

    2015-12-01

    One method for interpreting the variability in total water storage observed by GRACE is to partition the integrated GRACE measurement into its component storage reservoirs based on information provided by hydrological models. Such models, often designed to be used in coupled Earth System models, simulate the stocks and fluxes of moisture through the land surface and subsurface. One application of this method attempts to isolate groundwater changes by removing modeled surface water, snow, and soil moisture changes from GRACE total water storage estimates. Human impacts on groundwater variability can be estimated by further removing model estimates of climate-driven groundwater changes. Errors in modeled water storage components directly affect the residual groundwater estimates. Here we examine the influence of model structure and process representation on soil moisture and groundwater uncertainty using the Community Land Model, with a particular focus on basins in the western U.S.
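
    The partitioning described above (groundwater as the residual after removing modelled storage components from GRACE total water storage) reduces to simple bookkeeping, with quadrature error propagation if the component errors are assumed independent. All numbers in the sketch below are invented for illustration.

```python
import numpy as np

# Hypothetical monthly storage anomalies (cm equivalent water height)
tws_grace = np.array([2.1, 1.4, -0.3, -1.8])      # GRACE total water storage
soil_moisture = np.array([1.2, 0.9, 0.1, -0.7])   # modelled components (e.g. CLM)
snow = np.array([0.6, 0.2, 0.0, 0.0])
canopy = np.array([0.05, 0.04, 0.02, 0.01])

# Residual groundwater anomaly
groundwater = tws_grace - (soil_moisture + snow + canopy)

# Assumed independent 1-sigma errors combined in quadrature
sigma = {"grace": 1.0, "soil": 0.8, "snow": 0.3, "canopy": 0.05}
sigma_gw = np.sqrt(sum(s ** 2 for s in sigma.values()))

print(groundwater, f"+/- {sigma_gw:.2f} cm")
```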

  7. Estimation and Uncertainty Analysis of Flammability Properties of Chemicals using Group-Contribution Property Models

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    or time constraints, property prediction models like group contribution (GC) models can estimate flammability data. The estimation needs to be accurate, reliable and as little time-consuming as possible. However, GC property prediction methods frequently lack rigorous uncertainty analysis. Hence......, there is no information about the reliability of the data. Furthermore, the global optimality of the GC parameter estimation is often not ensured. In this research project flammability-related property data, like LFL and UFL, are estimated using the Marrero and Gani group contribution method (MG method). In addition...... the group contribution at three levels: the contributions from a specific functional group (1st order parameters), from polyfunctional (2nd order parameters) as well as from structural groups (3rd order parameters). The latter two classes of GC factors provide additional structural information beside...

  8. Estimation of full moment tensors, including uncertainties, for earthquakes, volcanic events, and nuclear explosions

    Science.gov (United States)

    Alvizuri, Celso; Silwal, Vipul; Krischer, Lion; Tape, Carl

    2017-04-01

    A seismic moment tensor is a 3 × 3 symmetric matrix that provides a compact representation of seismic events within Earth's crust. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms at each grid point and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M, we first convert the misfit function to a probability function. The uncertainty, or rather the confidence, is then given by the 'confidence curve' P(V ), where P(V ) is the probability that the true moment tensor for the event lies within the neighborhood of M that has fractional volume V . The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M. We apply the method to data from events in different regions and tectonic settings: small (Mw 4) earthquakes in the southern Alaska subduction zone, and natural and man-made events at the Nevada Test Site. Moment tensor uncertainties allow us to better discriminate among moment tensor source types and to assign physical processes to the events.
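
    The misfit-to-probability conversion and the confidence curve P(V) summarised above can be sketched numerically as below. The misfit values are synthetic, the exponential mapping is one plausible choice of probability function, and this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical grid-search results: misfit values over N sampled moment tensors
misfit = rng.gamma(shape=2.0, scale=0.5, size=20_000)

# Convert misfit to an (unnormalised) probability and normalise over the grid
prob = np.exp(-misfit / misfit.std())
prob /= prob.sum()

# Confidence curve P(V): probability contained in the best fraction V of the grid,
# with grid points sorted from most to least probable
order = np.argsort(prob)[::-1]
P = np.cumsum(prob[order])
V = np.arange(1, prob.size + 1) / prob.size

confidence_parameter = np.trapz(P, V)   # area under the confidence curve
print(f"area under P(V): {confidence_parameter:.3f}")
```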

  9. A Strategy to Estimate the Systematic Uncertainty of Eddy Covariance Fluxes due to the Post-field Raw Data Processing

    Science.gov (United States)

    Sabbatini, Simone; Fratini, Gerardo; Fidaleo, Marcello; Papale, Dario

    2017-04-01

    the cumulate fluxes is correlated to its magnitude according to a power function. (v) The number of processing runs can be safely reduced to 32, and in most cases to 16. Further reductions are possible, especially to roughly quantify the uncertainty, but tend to oversaturate the design, resulting in aliases between factors and interactions and making it very difficult to understand their importance. Based on those results, we suggest that the systematic uncertainty of EC measurements from the post-field raw data processing can be estimated with one of the following methods (in order of increasing accuracy): (i) applying a power function to a single value of the yearly cumulate; (ii) from a combination of 4 different processing options: '2D rotations' and 'planar fit' for CR and 'block average' and 'linear detrending' for TR. (iii) Performing a fractional factorial analysis of 32 (16) combinations of different processing options. The increasing computational power of computers allows more parallel routines to be run in acceptable time, and will allow even more in the future.

  10. Influence of parameter estimation uncertainty in Kriging: Part 2 - Test and case study applications

    Directory of Open Access Journals (Sweden)

    E. Todini

    2001-01-01

    Full Text Available The theoretical approach introduced in Part 1 is applied to a numerical example and to the case of yearly average precipitation estimation over the Veneto Region in Italy. The proposed methodology was used to assess the effects of parameter estimation uncertainty on Kriging estimates and on their estimated error variance. The Maximum Likelihood (ML) estimator proposed in Part 1 was applied to the zero mean deviations from yearly average precipitation over the Veneto Region in Italy, obtained after the elimination of a non-linear drift with elevation. Three different semi-variogram models were used, namely the exponential, the Gaussian and the modified spherical, and the relevant biases as well as the increases in variance have been assessed. A numerical example was also conducted to demonstrate how the procedure leads to unbiased estimates of the random functions. One hundred sets of 82 observations were generated by means of the exponential model on the basis of the parameter values identified for the Veneto Region rainfall problem and taken as characterising the true underlying process. The values of the parameters and the consequent cross-validation errors were estimated from each sample. The cross-validation errors were first computed in the classical way and then corrected with the procedure derived in Part 1. Both sets, original and corrected, were then tested, by means of the Likelihood ratio test, against the null hypothesis of deriving from a zero mean process with unknown covariance. The results of the experiment clearly show the effectiveness of the proposed approach. Keywords: yearly rainfall, maximum likelihood, Kriging, parameter estimation uncertainty

  11. Estimating basic wood density and its uncertainty for Pinus densiflora in the Republic of Korea

    Directory of Open Access Journals (Sweden)

    Jung Kee Pyo

    2012-05-01

    Full Text Available According to the Intergovernmental Panel on Climate Change (IPCC) guidelines, uncertainty assessment is an important aspect of a greenhouse gas inventory, and effort should be made to incorporate it into the reporting. The goal of this study was to estimate basic wood density (BWD) and its uncertainty for Pinus densiflora (Siebold & Zucc.) in Korea. In this study, P. densiflora forests throughout the country were divided into two regional variants, which were the Gangwon region variant, distributed on the northeastern part of the country, and the central region variant. A total of 36 representative sampling plots were selected in both regions to collect sample trees for destructive sampling. The trees were selected considering the distributions of tree age and diameter at breast height. Hypothesis testing was carried out to test the BWD differences between two age groups, i.e. age ≥ 20 and < 20, and differences between the two regions. The test suggested that there was no statistically significant difference between the two age classes. On the other hand, it suggested strong evidence of a statistically significant difference between the regions. The BWD and its uncertainty were 0.418 g/cm3 and 11.9% for the Gangwon region, whereas they were 0.471 g/cm3 and 3.8% for the central region. As a result, the estimated BWD for P. densiflora was more precise than the value provided by the IPCC guidelines.
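
    Expressing uncertainty as the half-width of a 95 % confidence interval relative to the mean, in the spirit of IPCC good-practice guidance, can be sketched as follows. The wood-density samples below are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical basic wood density samples (g/cm^3) for one region
bwd = np.array([0.45, 0.48, 0.46, 0.49, 0.47, 0.44, 0.50, 0.48])

mean = bwd.mean()
half_width = stats.t.ppf(0.975, bwd.size - 1) * bwd.std(ddof=1) / np.sqrt(bwd.size)
uncertainty_pct = 100.0 * half_width / mean   # relative uncertainty in percent

print(f"BWD = {mean:.3f} g/cm^3, uncertainty = {uncertainty_pct:.1f} %")
```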

  12. Parameter estimation and uncertainty quantification in a biogeochemical model using optimal experimental design methods

    Science.gov (United States)

    Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas

    2016-04-01

    The statistical significance of any model-data comparison strongly depends on the quality of the used data and the criterion used to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion as, e.g., ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted onto regions or model quantities which are of special interest. This choice influences the quality of the model output (also for not measured quantities) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional and time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with time and location of the measurements. We compared the obtained estimations of model output and parameters for different criteria. Another question is if (and which) further measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurement for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and the model output itself regarding the uncertainty in the measurement data using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time

  13. Estimation of the uncertainty in water level forecasts at ungauged locations using Quantile Regression

    Science.gov (United States)

    Roscoe, K. L.; Weerts, A. H.

    2012-04-01

    Water level predictions in rivers are used by operational managers to make water management decisions. Such decisions can concern water routing in times of drought, operation of weirs, and actions for flood protection, such as evacuation. Understanding the uncertainty in the predictions can help managers make better-informed decisions. Conditional Quantile Regression is a method that can be used to determine the uncertainty in forecasted water levels by providing an estimate of the probability density function of the error in the prediction conditional on the forecasted water level. To derive this relationship, a series of forecasts and errors in the forecasts (residuals) are required. Thus, conditional quantile regressions can be derived for locations where both observations and forecasts are available. However, 1D-hydraulic models that are used for operational forecasting produce forecasts at intermediate points where no measurements are available but for which predictive uncertainty estimates are also desired for decision making. The objective of our study is to test if interpolation methods can be used to adequately estimate conditional quantile regressions at these in-between locations. For this purpose, five years of hindcasts were used at seven stations along the IJssel River in the Netherlands. Residuals in water level hindcasts were interpolated at the five in-between lying stations. The interpolation was based solely on distance and the interpolated residuals were compared to the measured residuals at stations at the in-between locations. The resulting interpolated residuals estimated the measured residuals well, especially for longer lead times. Quantile regression was then carried out using the series of forecasts and interpolated residuals at the in-between stations. The interpolated quantile regressions were compared with regressions calibrated using the actual residuals at the in-between stations. Results show that even a simple interpolation based
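
    Conditioning forecast-error quantiles on the forecasted water level, as described above, can be sketched with an off-the-shelf quantile regression. The data below are synthetic and the statsmodels-based example is a generic illustration, not the operational implementation for the IJssel River.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical hindcasts and residuals at a gauged station (heteroscedastic errors)
forecast = rng.uniform(1.0, 6.0, 500)        # forecasted water level (m)
residual = rng.normal(0.0, 0.05 * forecast)  # observed minus forecast (m)

# Quantile regressions of the residual on the forecasted level
X = sm.add_constant(forecast)
q05 = sm.QuantReg(residual, X).fit(q=0.05)
q95 = sm.QuantReg(residual, X).fit(q=0.95)

# Predictive band around a new forecast value
new_forecast = 4.5
lo = q05.params @ [1.0, new_forecast]
hi = q95.params @ [1.0, new_forecast]
print(f"90% predictive band around a {new_forecast} m forecast: "
      f"[{lo:+.2f}, {hi:+.2f}] m")
```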

  14. Identifying Drivers of Variability and Uncertainty in Lake Metabolism Estimates across the Continent

    Science.gov (United States)

    Roehm, C. L.; Lunch, C. K.; Hanson, P. C.; Solomon, C.

    2013-12-01

    The National Ecological Observatory Network (NEON) is designed to gather and synthesize data on the impacts of climate change, land use change and invasive species on natural resources and biodiversity. Standardized data will be collected over 30 years from 106 aquatic and terrestrial sites across 20 domains in the U.S. using a combination of in-situ instrumentation and field observational sampling. The data will be freely available to the public on an open-access web portal. Ensuring the collection and dissemination of high-quality data across highly variable ecosystems using standardized methods is a priority for NEON. Defining the level of data accuracy and uncertainties associated with data collection and interpretation, and the propagation of such errors in the creation of higher level data products is, nonetheless, a primary challenge. Eight of the 36 NEON aquatic sites will be kitted with profiling buoys that will measure, continuously, a suite of water quality and meteorological parameters. These data will enable lake metabolism estimates. The metabolic balance of lakes is defined as the balance between photosynthetic carbon uptake as gross primary production (GPP) and respiration (R). Estimates of GPP and R can be inferred from continuous high-frequency dissolved oxygen data along with other water quality and meteorological parameters. Metabolism influences many critical characteristics of lakes at both the ecosystem and the landscape level. However, due to the complex suite of processes and interactions that drive metabolism, quantifying lake metabolism and associated uncertainties remains a challenge. The identification and application of a suite of models and techniques may improve our ability to discern variability in the data that stems from methodological noise, uncertainty in estimates, or actual ecological processes. This poster will highlight challenges and opportunities associated with obtaining high accuracy, long-term estimates of lake
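
    A minimal free-water diel-oxygen bookkeeping sketch of GPP and R estimation is given below. It uses synthetic hourly data, ignores air-water gas exchange and mixing, and is only meant to illustrate the accounting, not NEON's processing chain.

```python
import numpy as np

rng = np.random.default_rng(5)

hours = np.arange(24)
daylight = (hours >= 6) & (hours < 20)

# Hypothetical hourly dissolved-oxygen changes (mg O2 L^-1 h^-1), with noise
d_do = np.where(daylight, 0.08, -0.05) + rng.normal(0, 0.01, 24)

r_hourly = -d_do[~daylight].mean()   # respiration rate from the nighttime decline
R = 24.0 * r_hourly                  # daily respiration (mg O2 L^-1 d^-1)
NEP = d_do.sum()                     # net ecosystem production over the day
GPP = NEP + R                        # gross primary production

print(f"GPP = {GPP:.2f}, R = {R:.2f}, NEP = {NEP:.2f} (mg O2 per litre per day)")
```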

  15. Group-contribution based property estimation and uncertainty analysis for flammability-related properties.

    Science.gov (United States)

    Frutiger, Jérôme; Marcarie, Camille; Abildskov, Jens; Sin, Gürkan

    2016-11-15

    This study presents new group contribution (GC) models for the prediction of Lower and Upper Flammability Limits (LFL and UFL), Flash Point (FP) and Auto Ignition Temperature (AIT) of organic chemicals applying the Marrero/Gani (MG) method. Advanced methods for parameter estimation using robust regression and outlier treatment have been applied to achieve high accuracy. Furthermore, linear error propagation based on the covariance matrix of estimated parameters was performed. Therefore, every estimated property value of the flammability-related properties is reported together with its corresponding 95%-confidence interval of the prediction. Compared to existing models, the developed ones have a higher accuracy, are simple to apply and provide uncertainty information on the calculated prediction. The average relative error and correlation coefficient are 11.5% and 0.99 for LFL, 15.9% and 0.91 for UFL, 2.0% and 0.99 for FP as well as 6.4% and 0.76 for AIT. Moreover, the temperature dependence of the LFL property was studied. A compound-specific proportionality constant (K(LFL)) between LFL and temperature is introduced and an MG GC model to estimate K(LFL) is developed. Overall, the ability to predict flammability-related properties including the corresponding uncertainty of the prediction can provide important information for qualitative and quantitative safety-related risk assessment studies.
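
    Linear error propagation from the covariance matrix of estimated GC parameters to a 95 % confidence interval on a prediction, as described above, can be sketched like this. The group counts, contributions and covariance matrix are hypothetical placeholders.

```python
import numpy as np

# Hypothetical GC model: property = sum_i (occurrence_i * contribution_i)
occurrences = np.array([2.0, 1.0, 3.0])        # group counts in the molecule
contributions = np.array([1.10, -0.35, 0.42])  # estimated GC parameters

# Hypothetical parameter covariance matrix from the regression step
cov = np.array([[0.0040, 0.0005, 0.0002],
                [0.0005, 0.0030, 0.0001],
                [0.0002, 0.0001, 0.0025]])

prediction = occurrences @ contributions
variance = occurrences @ cov @ occurrences     # linear error propagation
ci95 = 1.96 * np.sqrt(variance)

print(f"predicted property: {prediction:.2f} +/- {ci95:.2f} (95% CI)")
```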

  16. Developing first time-series of land surface temperature from AATSR with uncertainty estimates

    Science.gov (United States)

    Ghent, Darren; Remedios, John

    2013-04-01

    Land surface temperature (LST) is the radiative skin temperature of the land, and is one of the key parameters in the physics of land-surface processes on regional and global scales. Earth Observation satellites provide the opportunity to obtain global coverage of LST approximately every 3 days or less. One such source of satellite-retrieved LST has been the Advanced Along-Track Scanning Radiometer (AATSR), with LST retrieval implemented in the AATSR Instrument Processing Facility in March 2004. Here we present the first regional and global time-series of LST data from AATSR with estimates of uncertainty. Mean changes in temperature over the last decade will be discussed along with regional patterns. Although time-series across all three ATSR missions have previously been constructed (Kogler et al., 2012), the use of low resolution auxiliary data in the retrieval algorithm and non-optimal cloud masking resulted in time-series artefacts. As such, considerable ESA supported development has been carried out on the AATSR data to address these concerns. This includes the integration of high resolution auxiliary data into the retrieval algorithm and subsequent generation of coefficients and tuning parameters, plus the development of an improved cloud mask based on the simulation of clear sky conditions from radiance transfer modelling (Ghent et al., in prep.). Any inference on this LST record is, though, of limited value without the accompaniment of an uncertainty estimate; wherein the Joint Committee for Guides in Metrology quotes an uncertainty as "a parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand that is the value of the particular quantity to be measured". Furthermore, pixel level uncertainty fields are a mandatory requirement in the on-going preparation of the LST product for the upcoming Sea and Land Surface Temperature Radiometer (SLSTR) instrument on-board Sentinel-3

  17. Forensic Entomology: Evaluating Uncertainty Associated With Postmortem Interval (PMI) Estimates With Ecological Models.

    Science.gov (United States)

    Faris, A M; Wang, H-H; Tarone, A M; Grant, W E

    2016-05-31

    Estimates of insect age can be informative in death investigations and, when certain assumptions are met, can be useful for estimating the postmortem interval (PMI). Currently, the accuracy and precision of PMI estimates is unknown, as error can arise from sources of variation such as measurement error, environmental variation, or genetic variation. Ecological models are an abstract, mathematical representation of an ecological system that can make predictions about the dynamics of the real system. To quantify the variation associated with the pre-appearance interval (PAI), we developed an ecological model that simulates the colonization of vertebrate remains by Cochliomyia macellaria (Fabricius) (Diptera: Calliphoridae), a primary colonizer in the southern United States. The model is based on a development data set derived from a local population and represents the uncertainty in local temperature variability to address PMI estimates at local sites. After a PMI estimate is calculated for each individual, the model calculates the maximum, minimum, and mean PMI, as well as the range and standard deviation for stadia collected. The model framework presented here is one manner by which errors in PMI estimates can be addressed in court when no empirical data are available for the parameter of interest. We show that PAI is a potential important source of error and that an ecological model is one way to evaluate its impact. Such models can be re-parameterized with any development data set, PAI function, temperature regime, assumption of interest, etc., to estimate PMI and quantify uncertainty that arises from specific prediction systems. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Statistical uncertainties and systematic errors in weak lensing mass estimates of galaxy clusters

    CERN Document Server

    Köhlinger, F; Eriksen, M

    2015-01-01

    Upcoming and ongoing large area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and to lesser extent DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is signifi...

  19. Quantification for complex assessment: uncertainty estimation in final year project thesis assessment

    Science.gov (United States)

    Kim, Ho Sung

    2013-12-01

    A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final year project thesis assessment including peer assessment. A guide map can be generated by the method for finding expected uncertainties prior to the assessment implementation with a given set of variables. It employs a scale for visualisation of expertise levels, derivation of which is based on quantified clarities of mental images for levels of the examiner's expertise and the examinee's expertise achieved. To identify the relevant expertise areas that depend on the complexity in assessment format, a graphical continuum model was developed. The continuum model consists of assessment task, assessment standards and criterion for the transition towards the complex assessment owing to the relativity between implicitness and explicitness and is capable of identifying areas of expertise required for scale development.

  20. Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates

    CERN Document Server

    Moore, Christopher J

    2014-01-01

    Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this work a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalised over using a prior distribution constructed by Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform...

  1. Bayesian Mass Estimates of the Milky Way: Including measurement uncertainties with hierarchical Bayes

    CERN Document Server

    Eadie, Gwendolyn; Harris, William

    2016-01-01

    We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie, Harris, & Widrow (2015) and Eadie & Harris (2016) and builds upon the preliminary reports by Eadie et al (2015a,c). The method uses a distribution function $f(\\mathcal{E},L)$ to model the galaxy and kinematic data from satellite objects such as globular clusters to trace the Galaxy's gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie & Harris (2016), and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and in...

  2. Estimation of uncertainty bounds for individual particle image velocimetry measurements from cross-correlation peak ratio

    Science.gov (United States)

    Charonko, John J.; Vlachos, Pavlos P.

    2013-06-01

    Numerous studies have established firmly that particle image velocimetry (PIV) is a robust method for non-invasive, quantitative measurements of fluid velocity, and that when carefully conducted, typical measurements can accurately detect displacements in digital images with a resolution well below a single pixel (in some cases well below a hundredth of a pixel). However, to date, these estimates have only been able to provide guidance on the expected error for an average measurement under specific image quality and flow conditions. This paper demonstrates a new method for estimating the uncertainty bounds to within a given confidence interval for a specific, individual measurement. Here, cross-correlation peak ratio, the ratio of primary to secondary peak height, is shown to correlate strongly with the range of observed error values for a given measurement, regardless of flow condition or image quality. This relationship is significantly stronger for phase-only generalized cross-correlation PIV processing, while the standard correlation approach showed weaker performance. Using an analytical model of the relationship derived from synthetic data sets, the uncertainty bounds at a 95% confidence interval are then computed for several artificial and experimental flow fields, and the resulting errors are shown to match closely to the predicted uncertainties. While this method stops short of being able to predict the true error for a given measurement, knowledge of the uncertainty level for a PIV experiment should provide great benefits when applying the results of PIV analysis to engineering design studies and computational fluid dynamics validation efforts. Moreover, this approach is exceptionally simple to implement and requires negligible additional computational cost.

  3. Uncertainty Estimates of NASA Satellite LST over the Greenland and Antarctic Plateau: 2003-2015

    Science.gov (United States)

    Knuteson, R.; Borbas, E. E.; Burgess, G.

    2015-12-01

    Jin and Dickinson (2010) identify three reasons why LST has not been adopted as a climate variable. Paraphrasing the authors, the three roadblocks for use of satellite LST products in climate studies are: 1) unknown accuracy (What are surface emissivity and atmospheric correction uncertainties?); 2) spatial scale ambiguity (Are satellite footprints too large to be physically meaningful?); 3) lack of consistency over decadal time scales (How far backward/forward can we go in time?). These issues apply particularly to the cryosphere, where the lack of surface measurement sites makes the proper use of satellite observations critical for monitoring climate change. This paper will address each of these three issues but with a focus on the high and dry Greenland and Antarctic plateaus and the contrast in trends between the two. Recent comparisons of MODIS LST products with AIRS version 6 LST products show large differences over Greenland (Lee et al. 2014). In this paper we take the logical next step of creating a bottom-up uncertainty budget for a new synergistic AIRS/MODIS LST product for ice and snow conditions. This new product will address the issue of unknown accuracy by providing a local LST uncertainty along with each estimate of surface temperature. The combination of the high spatial resolution of the MODIS and the high spectral resolution of the AIRS observations of radiance allows the two sensors together to provide information with lower uncertainty than what is possible from the current separate operational products. The issue of surface emissivity and atmospheric correction uncertainties will be addressed explicitly using spectrally resolved models that cover the infrared region. The issue of spatial scale ambiguity is overcome by creating a classification of the results based on the spatial homogeneity of surface temperatures. The issue of lack of consistency over long time scales is addressed by demonstrating an algorithm using collocated NASA MODIS

  4. Evaluation of satellite and reanalysis-based global net surface energy flux and uncertainty estimates

    Science.gov (United States)

    Allan, Richard; Liu, Chunlei

    2017-04-01

    The net surface energy flux is central to the climate system, yet observational limitations lead to substantial uncertainty (Trenberth and Fasullo, 2013; Roberts et al., 2016). A combination of satellite-derived radiative fluxes at the top of atmosphere (TOA) adjusted using the latest estimation of the net heat uptake of the Earth system, and the atmospheric energy tendencies and transports from the ERA-Interim reanalysis are used to estimate surface energy flux globally (Liu et al., 2015). Land surface fluxes are adjusted through a simple energy balance approach using relations at each grid point with the consideration of snowmelt to improve regional realism. The energy adjustment is redistributed over the oceans using a weighting function to avoid meridional discontinuities. Uncertainties in surface fluxes are investigated using a variety of approaches including comparison with a range of atmospheric reanalysis input data and products. Zonal multiannual mean surface flux uncertainty is estimated to be less than 5 Wm-2 but much larger uncertainty is likely for regional monthly values. The meridional energy transport is calculated using the net surface heat fluxes estimated in this study and the result shows better agreement with observations in the Atlantic than before. The derived turbulent fluxes (difference between the net heat flux and the CERES EBAF radiative flux at surface) also have good agreement with those from the OAFLUX dataset and buoy observations. Decadal changes in the global energy budget and the hemispheric energy imbalances are quantified and the present-day cross-equatorial heat transport is re-evaluated as 0.22±0.15 PW southward by the atmosphere and 0.32±0.16 PW northward by the ocean considering the observed ocean heat sinks (Roemmich et al., 2006). Liu et al. (2015) Combining satellite observations and reanalysis energy transports to estimate global net surface energy fluxes 1985-2012. J. Geophys. Res., Atmospheres. ISSN 2169-8996 doi: 10.1002/2015JD

  5. Importance of tree basic density in biomass estimation and associated uncertainties

    DEFF Research Database (Denmark)

    Njana, Marco Andrew; Meilby, Henrik; Eid, Tron

    2016-01-01

    Key message Aboveground and belowground tree basic densities varied between and within the three mangrove species. If appropriately determined and applied, basic density may be useful in estimation of tree biomass. Predictive accuracy of the common (i.e. multi-species) models including aboveground...... of sustainable forest management, conservation and enhancement of carbon stocks (REDD+) initiatives offer an opportunity for sustainable management of forests including mangroves. In carbon accounting for REDD+, it is required that carbon estimates prepared for monitoring reporting and verification schemes...... and examine uncertainties in estimation of tree biomass using indirect methods. Methods This study focused on three dominant mangrove species (Avicennia marina (Forssk.) Vierh, Sonneratia alba J. Smith and Rhizophora mucronata Lam.) in Tanzania. A total of 120 trees were destructively sampled for aboveground...

  6. Software Development Effort Estimation using Fuzzy Bayesian Belief Network with COCOMO II

    Directory of Open Access Journals (Sweden)

    B.Chakraborty

    2015-01-01

    Full Text Available Software development has always been characterized by some metrics. One of the greatest challenges for software developers lies in predicting the development effort for a software system, which is based on developer abilities, size, complexity and other metrics. Several algorithmic cost estimation models such as Boehm's COCOMO, Albrecht's Function Point Analysis, Putnam's SLIM, ESTIMACS etc. are available, but every model has its own pros and cons in estimating development cost and effort. The most common reason is that project data available in the initial stages of a project are often incomplete, inconsistent, uncertain and unclear. In this paper, a Bayesian probabilistic model has been explored to overcome the problems of uncertainty and imprecision, resulting in an improved software development effort estimation process. This paper considers a software estimation approach using six key cost drivers in the COCOMO II model. The selected cost drivers are the inputs to the system. The concept of Fuzzy Bayesian Belief Network (FBBN) has been introduced to improve the accuracy of the estimation. Results show that the values of MMRE (Mean of Magnitude of Relative Error) and PRED obtained by means of FBBN are much better than the MMRE and PRED of Fuzzy COCOMO II models. The validation of results was carried out on the NASA-93 dem COCOMO II dataset.
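
    The MMRE and PRED accuracy metrics referred to above are straightforward to compute. The effort values in this sketch are invented, and PRED is evaluated at the common 25 % threshold.

```python
import numpy as np

# Hypothetical actual and estimated efforts (person-months) for a few projects
actual = np.array([120.0, 62.0, 15.5, 210.0, 48.0])
estimated = np.array([135.0, 55.0, 18.0, 190.0, 52.0])

mre = np.abs(actual - estimated) / actual   # magnitude of relative error
mmre = mre.mean()                           # MMRE
pred_25 = np.mean(mre <= 0.25)              # PRED(25): share of estimates within 25 %

print(f"MMRE = {mmre:.3f}, PRED(25) = {pred_25:.2f}")
```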

  7. Comprehensive analysis of proton range uncertainties related to patient stopping-power-ratio estimation using the stoichiometric calibration.

    Science.gov (United States)

    Yang, Ming; Zhu, X Ronald; Park, Peter C; Titt, Uwe; Mohan, Radhe; Virshup, Gary; Clayton, James E; Dong, Lei

    2012-07-07

    The purpose of this study was to analyze factors affecting proton stopping-power-ratio (SPR) estimations and range uncertainties in proton therapy planning using the standard stoichiometric calibration. The SPR uncertainties were grouped into five categories according to their origins and then estimated based on previously published reports or measurements. For the first time, the impact of tissue composition variations on SPR estimation was assessed and the uncertainty estimates of each category were determined for low-density (lung), soft, and high-density (bone) tissues. A composite, 95th percentile water-equivalent-thickness uncertainty was calculated from multiple beam directions in 15 patients with various types of cancer undergoing proton therapy. The SPR uncertainties (1σ) were quite different (ranging from 1.6% to 5.0%) in different tissue groups, although the final combined uncertainty (95th percentile) for different treatment sites was fairly consistent at 3.0-3.4%, primarily because soft tissue is the dominant tissue type in the human body. The dominant contributing factor for uncertainties in soft tissues was the degeneracy of Hounsfield numbers in the presence of tissue composition variations. To reduce the overall uncertainties in SPR estimation, the use of dual-energy computed tomography is suggested. The values recommended in this study based on typical treatment sites and a small group of patients roughly agree with the commonly referenced value (3.5%) used for margin design. By using tissue-specific range uncertainties, one could estimate the beam-specific range margin by accounting for different types and amounts of tissues along a beam, which may allow for customization of range uncertainty for each beam direction.
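
    One simplified way to turn tissue-specific SPR uncertainties like those above into a beam-specific range margin is to weight each tissue's contribution by its share of the water-equivalent path and combine in quadrature. The numbers and the weighting scheme in this sketch are illustrative assumptions, not the paper's composite 95th-percentile procedure.

```python
import math

# Hypothetical 1-sigma SPR uncertainties per tissue group (fractions)
spr_sigma = {"lung": 0.050, "soft": 0.016, "bone": 0.024}

# Hypothetical water-equivalent path fractions through each tissue group for one beam
path_fraction = {"lung": 0.10, "soft": 0.80, "bone": 0.10}

# Combine in quadrature, weighting each tissue's contribution by its path fraction
sigma_beam = math.sqrt(sum((path_fraction[t] * spr_sigma[t]) ** 2 for t in spr_sigma))
margin_95 = 1.96 * sigma_beam

print(f"beam-specific range uncertainty (95th percentile): {100 * margin_95:.2f} %")
```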

  8. Uncertainties in Tidally Adjusted Estimates of Sea Level Rise Flooding (Bathtub Model) for the Greater London

    Directory of Open Access Journals (Sweden)

    Ali P. Yunus

    2016-04-01

    Full Text Available Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and accuracy of topographic data. Here, the areas under risk of sea water flooding for London boroughs were quantified based on the projected SLR scenarios reported in the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5) and UK climatic projections 2009 (UKCP09) using a tidally-adjusted bathtub modelling approach. Medium- to very high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root mean square error differences observed between the two data types, which may be attributed to processing levels. Flood models from SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and uncertainties in the DEM-based bathtub type flood inundation modelling for London boroughs.
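
    A tidally-adjusted bathtub inundation model is, at its core, a thresholding of the DEM followed by a hydrological-connectivity check. The sketch below uses a random synthetic DEM, an assumed water level, and assumes the open sea lies along the left edge of the grid; it is not the study's implementation.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)

dem = rng.uniform(0.0, 5.0, size=(200, 200))   # hypothetical DEM (m above datum)
water_level = 1.9                              # SLR scenario plus tidal surge (m)

below = dem <= water_level                     # simple bathtub criterion

# Keep only cells hydraulically connected to the open sea (assumed along column 0)
labels, _ = ndimage.label(below)
sea_labels = np.unique(labels[:, 0])
sea_labels = sea_labels[sea_labels != 0]
flooded = np.isin(labels, sea_labels)

cell_area = 25.0 * 25.0                        # hypothetical 25 m grid cells (m^2)
print(f"inundated area: {flooded.sum() * cell_area / 1e6:.2f} km^2")
```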

  9. Uncertainties in neural network model based on carbon dioxide concentration for occupancy estimation

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Azimil Gani; Rahman, Haolia; Kim, Jung-Kyung; Han, Hwataik [Kookmin University, Seoul (Korea, Republic of)

    2017-05-15

    Demand control ventilation is employed to save energy by adjusting the airflow rate according to the ventilation load of a building. This paper investigates a method for occupancy estimation by using a dynamic neural network model based on carbon dioxide concentration in an occupied zone. The method can be applied to most commercial and residential buildings where human effluents are to be ventilated. An indoor simulation program CONTAMW is used to generate indoor CO{sub 2} data corresponding to various occupancy schedules and airflow patterns to train neural network models. Coefficients of variation are obtained depending on the complexities of the physical parameters as well as the system parameters of neural networks, such as the numbers of hidden neurons and tapped delay lines. We intend to identify the uncertainties caused by the model parameters themselves, by excluding uncertainties in input data inherent in measurement. Our results show that estimation accuracy is highly influenced by the frequency of occupancy variation but not significantly influenced by fluctuation in the airflow rate. Furthermore, we discuss the applicability and validity of the present method based on passive environmental conditions for estimating occupancy in a room from the viewpoint of demand control ventilation applications.

  10. Estimating amplitude uncertainty for normalized ambient seismic noise cross-correlation with examples from southern California

    Science.gov (United States)

    Liu, X.; Beroza, G. C.; Ben-Zion, Y.

    2016-12-01

    We estimate the frequency-dependent amplitude error of ambient noise cross-correlations based on the method of Liu et al. (2016) for different normalizations. We compute the stacked cross spectrum of noise recorded at station pairs in southern California by averaging the cross spectrum of evenly spaced windows of the same length, but offset in time. Windows with signals (e.g. earthquakes) contaminating the ambient seismic noise are removed as statistical outliers. Standard errors of the real and imaginary parts of the stacked cross-spectrum are estimated assuming each window is independent. The autocorrelation of the sequence of cross-spectrum values at a given frequency obtained from different windows is used to test the independence of cross-spectrum values in neighboring time windows. For frequencies below 0.2 Hz, we find temporal correlation in the noise data. We account for temporal correlation in the computation of errors using a block bootstrap resampling method. The stacked cross-spectrum and associated amplitude are computed under different normalization methods including deconvolution and whitening applied before or after the ensemble average of cross-spectrum values. We estimate the amplitude errors based on error propagation from the errors of the stacked cross-spectrum and verify them with a bootstrap method. We propose to use this characterization of amplitude uncertainty to constrain uncertainties in ground motion predictions based on ambient-field observations.
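
    A block bootstrap estimate of the standard error of a stacked cross-spectrum, used above to handle temporal correlation between windows, can be sketched as follows. The per-window cross-spectrum values are synthetic and the non-overlapping-block scheme is a simplification of the resampling actually used.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical per-window cross-spectrum values at one frequency (complex)
n_win = 400
xspec = rng.normal(1.0, 0.3, n_win) + 1j * rng.normal(0.2, 0.3, n_win)

def block_bootstrap_se(x, block_len=20, n_boot=2000, rng=rng):
    """Standard error of the mean of x using a non-overlapping block bootstrap."""
    blocks = x[: len(x) // block_len * block_len].reshape(-1, block_len)
    means = np.empty(n_boot, dtype=complex)
    for i in range(n_boot):
        pick = rng.integers(0, blocks.shape[0], blocks.shape[0])
        means[i] = blocks[pick].mean()
    return means.real.std(), means.imag.std()

se_re, se_im = block_bootstrap_se(xspec)
m = xspec.mean()
print(f"stacked cross-spectrum: {m.real:.3f}{m.imag:+.3f}j, "
      f"SE(real)={se_re:.4f}, SE(imag)={se_im:.4f}")
```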

  11. An Assessment of Uncertainty in Remaining Life Estimation for Nuclear Structural Materials

    Energy Technology Data Exchange (ETDEWEB)

    Ramuhalli, Pradeep; Griffin, Jeffrey W.; Fricke, Jacob M.; Bond, Leonard J.

    2012-12-01

    In recent years, several operating US light-water nuclear power reactors (LWRs) have moved to extended-life operations (from 40 years to 60 years), and there is interest in the feasibility of extending plant life to 80 years. Operating experience suggests that material degradation of structural components in LWRs (such as the reactor pressure vessel) is expected to be the limiting factor for safe operation during extended life. Therefore, a need exists for assessing the condition of LWR structural components and determining their remaining useful life (RUL). The ability to estimate the RUL of degraded structural components provides a basis for determining safety margins (i.e., whether safe operation over some pre-determined time horizon is possible) and for scheduling degradation management activities (such as potentially modifying operating conditions to limit further degradation growth). A key issue in RUL estimation is the calculation of uncertainty bounds, which are dependent on the current material state as well as past and future stressor levels (such as time-at-temperature, pressure, and irradiation). This paper presents a preliminary empirical investigation into the uncertainty of RUL estimates for nuclear structural materials.

  12. Adaptive Particle Filter for Nonparametric Estimation with Measurement Uncertainty in Wireless Sensor Networks.

    Science.gov (United States)

    Li, Xiaofan; Zhao, Yubin; Zhang, Sha; Fan, Xiaopeng

    2016-05-30

    Particle filters (PFs) are widely used for nonlinear signal processing in wireless sensor networks (WSNs). However, measurement uncertainty makes WSN observations deviate from the actual state and degrades the estimation accuracy of PFs. Beyond algorithm design, few works focus on improving the likelihood calculation method, since it is usually pre-assumed to follow a given distribution model. In this paper, we propose a novel PF method, based on a new likelihood fusion method for WSNs, that can further improve the estimation performance. We first use a dynamic Gaussian model to describe the nonparametric features of the measurement uncertainty. Then, we propose a likelihood adaptation method that employs the prior information and a belief factor to reduce the measurement noise. The optimal belief factor is attained by deriving the minimum Kullback-Leibler divergence. The likelihood adaptation method can be integrated into any PF, and we use it to develop three versions of adaptive PFs for a target tracking system using a wireless sensor network. The simulation and experimental results demonstrate that our likelihood adaptation method greatly improves the estimation performance of PFs in a high-noise environment. In addition, the adaptive PFs are highly adaptable to the environment without imposing additional computational complexity.

  13. Quantifying and Reducing Uncertainties in Estimating OMI Tropospheric Column NO2 Trend over The United States

    Science.gov (United States)

    Smeltzer, C. D.; Wang, Y.; Boersma, F.; Celarier, E. A.; Bucsela, E. J.

    2013-12-01

    We investigate the effects of retrieval radiation schemes and parameters on trend analysis using tropospheric nitrogen dioxide (NO2) vertical column density (VCD) measurements over the United States. Ozone Monitoring Instrument (OMI) observations from 2005 through 2012 are used in this analysis. We investigated two radiation schemes, provided by the National Aeronautics and Space Administration (NASA TOMRAD) and the Koninklijk Nederlands Meteorologisch Instituut (KNMI DAK). In addition, we analyzed the trend dependence on radiation parameters, including surface albedo and viewing geometry. The cross-track mean VCD average difference is 10-15% between the two radiation schemes in 2005. As the OMI row anomaly developed and progressively worsened, the difference between the two schemes became larger. Furthermore, applying surface albedo measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS) leads to increases of estimated NO2 VCD trends over high-emission regions. We find that the uncertainties of OMI-derived NO2 VCD trends can be reduced by up to a factor of 3 by selecting OMI cross-track rows on the basis of their performance over the ocean. Comparison of OMI tropospheric VCD trends to those estimated from EPA surface NO2 observations indicates that using MODIS surface albedo data and a narrower selection of OMI cross-track rows greatly improves the agreement of estimated trends between satellite and surface data. The abstract figure illustrates this reduction: with row selection based on ocean performance, uncertainties in the seasonal trend may be reduced by a factor of 3 or more compared with only removing the anomalous rows (rows 4-24).

  14. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty

    Directory of Open Access Journals (Sweden)

    Lash Timothy L

    2007-11-01

    Full Text Available Abstract Background The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. Methods For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. Results The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with a 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Conclusion Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a
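
    The bias-adjustment loop described here can be sketched as a simple Monte Carlo: draw values for the bias parameters from their assigned distributions, correct the conventional estimate, and summarize the resulting distribution with a median and simulation interval. The numbers below (conventional hazard ratio, standard error, and bias-parameter distribution) are hypothetical placeholders, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_iter = 50_000

hr_conventional = 2.6                      # hypothetical conventional hazard ratio
se_log_hr = 0.65                           # hypothetical standard error on the log scale

# Hypothetical bias parameter: relative risk due to an unmeasured confounder,
# assigned a lognormal probability distribution.
rr_confounding = rng.lognormal(mean=np.log(1.4), sigma=0.2, size=n_iter)

# Adjust for the bias, then add back random error by sampling on the log scale.
log_hr_adjusted = np.log(hr_conventional / rr_confounding)
log_hr_simulated = rng.normal(log_hr_adjusted, se_log_hr)

hr_sim = np.exp(log_hr_simulated)
median = np.median(hr_sim)
lo, hi = np.percentile(hr_sim, [2.5, 97.5])
print(f"Median HR {median:.2f}, 95% simulation interval ({lo:.2f}, {hi:.2f})")
```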

  15. Hierarchical Bayesian analysis to incorporate age uncertainty in growth curve analysis and estimates of age from length: Florida manatee (Trichechus manatus) carcasses

    Science.gov (United States)

    Schwarz, L.K.; Runge, M.C.

    2009-01-01

    Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then employed for Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.

  16. Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models

    CERN Document Server

    Mosher, J; Kessler, R; Astier, P; Marriner, J; Betoule, M; Sako, M; El-Hage, P; Biswas, R; Pain, R; Kuhlmann, S; Regnault, N; Frieman, J A; Schneider, D P

    2014-01-01

    We use simulated SN Ia samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and the bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: 120 low-redshift (z < 0.1) SNe Ia, 255 SDSS SNe Ia (z < 0.4), and 290 SNLS SNe Ia (z <= 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input - w_recovered) ranging from -0.005 +/- 0.012 to -0.024 +/- 0.010. These biases a...

  17. Validation and Uncertainty Estimates for MODIS Collection 6 "Deep Blue" Aerosol Data

    Science.gov (United States)

    Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Jeong, M.-J.

    2013-01-01

    The "Deep Blue" aerosol optical depth (AOD) retrieval algorithm was introduced in Collection 5 of the Moderate Resolution Imaging Spectroradiometer (MODIS) product suite, and complemented the existing "Dark Target" land and ocean algorithms by retrieving AOD over bright arid land surfaces, such as deserts. The forthcoming Collection 6 of MODIS products will include a "second generation" Deep Blue algorithm, expanding coverage to all cloud-free and snow-free land surfaces. The Deep Blue dataset will also provide an estimate of the absolute uncertainty on AOD at 550 nm for each retrieval. This study describes the validation of Deep Blue Collection 6 AOD at 550 nm (Tau(sub M)) from MODIS Aqua against Aerosol Robotic Network (AERONET) data from 60 sites to quantify these uncertainties. The highest quality (denoted quality assurance flag value 3) data are shown to have an absolute uncertainty of approximately (0.086+0.56Tau(sub M))/AMF, where AMF is the geometric air mass factor. For a typical AMF of 2.8, this is approximately 0.03+0.20Tau(sub M), comparable in quality to other satellite AOD datasets. Regional variability of retrieval performance and comparisons against Collection 5 results are also discussed.

  18. Novel method for incorporating model uncertainties into gravitational wave parameter estimates.

    Science.gov (United States)

    Moore, Christopher J; Gair, Jonathan R

    2014-12-19

    Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over using a prior distribution constructed by Gaussian process regression, which interpolates the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
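
    The core idea, interpolating the difference between accurate and approximate waveforms over parameter space with Gaussian process regression so that it can later be marginalized over, can be sketched with an off-the-shelf GP. The training points and the scalar "waveform difference" below are hypothetical stand-ins for the small set of accurate numerical templates.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical 1-D parameter (e.g. a mass ratio) and a scalar summary of the
# waveform difference between accurate and approximate templates at that point.
theta_train = np.linspace(0.1, 1.0, 8)[:, None]
diff_train = 0.05 * np.sin(6.0 * theta_train).ravel()     # placeholder differences

kernel = ConstantKernel(0.1) * RBF(length_scale=0.2)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-6, normalize_y=True)
gp.fit(theta_train, diff_train)

# Interpolated waveform-difference prior (mean and standard deviation) at new points.
theta_new = np.linspace(0.1, 1.0, 50)[:, None]
mean, std = gp.predict(theta_new, return_std=True)
print(mean[:3], std[:3])
```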

  19. Cost implications of uncertainty in CO2 storage resource estimates: A review

    Science.gov (United States)

    Anderson, Steven T.

    2017-01-01

    Carbon capture from stationary sources and geologic storage of carbon dioxide (CO2) is an important option to include in strategies to mitigate greenhouse gas emissions. However, the potential costs of commercial-scale CO2 storage are not well constrained, stemming from the inherent uncertainty in storage resource estimates coupled with a lack of detailed estimates of the infrastructure needed to access those resources. Storage resource estimates are highly dependent on storage efficiency values or storage coefficients, which are calculated based on ranges of uncertain geological and physical reservoir parameters. If dynamic factors (such as variability in storage efficiencies, pressure interference, and acceptable injection rates over time), reservoir pressure limitations, boundaries on migration of CO2, consideration of closed or semi-closed saline reservoir systems, and other possible constraints on the technically accessible CO2 storage resource (TASR) are accounted for, it is likely that only a fraction of the TASR could be available without incurring significant additional costs. Although storage resource estimates typically assume that any issues with pressure buildup due to CO2 injection will be mitigated by reservoir pressure management, estimates of the costs of CO2 storage generally do not include the costs of active pressure management. Production of saline waters (brines) could be essential to increasing the dynamic storage capacity of most reservoirs, but including the costs of this critical method of reservoir pressure management could increase current estimates of the costs of CO2 storage by two times, or more. Even without considering the implications for reservoir pressure management, geologic uncertainty can significantly impact CO2 storage capacities and costs, and contribute to uncertainty in carbon capture and storage (CCS) systems. Given the current state of available information and the scarcity of (data from) long-term commercial-scale CO2

  20. Cost Implications of Uncertainty in CO{sub 2} Storage Resource Estimates: A Review

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Steven T., E-mail: sanderson@usgs.gov [National Center, U.S. Geological Survey (United States)

    2017-04-15

    Carbon capture from stationary sources and geologic storage of carbon dioxide (CO{sub 2}) is an important option to include in strategies to mitigate greenhouse gas emissions. However, the potential costs of commercial-scale CO{sub 2} storage are not well constrained, stemming from the inherent uncertainty in storage resource estimates coupled with a lack of detailed estimates of the infrastructure needed to access those resources. Storage resource estimates are highly dependent on storage efficiency values or storage coefficients, which are calculated based on ranges of uncertain geological and physical reservoir parameters. If dynamic factors (such as variability in storage efficiencies, pressure interference, and acceptable injection rates over time), reservoir pressure limitations, boundaries on migration of CO{sub 2}, consideration of closed or semi-closed saline reservoir systems, and other possible constraints on the technically accessible CO{sub 2} storage resource (TASR) are accounted for, it is likely that only a fraction of the TASR could be available without incurring significant additional costs. Although storage resource estimates typically assume that any issues with pressure buildup due to CO{sub 2} injection will be mitigated by reservoir pressure management, estimates of the costs of CO{sub 2} storage generally do not include the costs of active pressure management. Production of saline waters (brines) could be essential to increasing the dynamic storage capacity of most reservoirs, but including the costs of this critical method of reservoir pressure management could increase current estimates of the costs of CO{sub 2} storage by two times, or more. Even without considering the implications for reservoir pressure management, geologic uncertainty can significantly impact CO{sub 2} storage capacities and costs, and contribute to uncertainty in carbon capture and storage (CCS) systems. Given the current state of available information and the

  1. Uncertainty introduced by flood frequency analysis in the estimation of climate change impacts on flooding

    Science.gov (United States)

    Lawrence, Deborah

    2016-04-01

    Potential changes in extreme flooding under a future climate are of much interest in climate change adaptation work, and estimates for high flows with long return periods are often based on an application of flood frequency analysis methods. The uncertainty introduced by this estimation is, however, only rarely considered when assessing changes in flood magnitude. In this study, an ensemble of hydrological projections for each of 115 catchments distributed across Norway is analysed to derive an estimate for the percentage change in the magnitude of the 200-year flood under a future climate. This is the return level used for flood hazard mapping in Norway. The ensemble of projections is based on climate data from 10 EUROCORDEX GCM/RCM combinations, two bias correction methods (empirical quantile mapping and double gamma function), and 25 alternative parameterisations of the HBV hydrological model. For each hydrological simulation, the annual maximum series is used to estimate the 200-year flood for the reference period, 1971-2000, and a future period, 2071-2100, based on two- and three-parameter GEV distributions. In addition, bootstrap resampling is used to estimate the 95% confidence levels for the extreme value estimates, and this range is incorporated into the ensemble estimates for each catchment. As has been shown in previous work based on earlier climate projections, there are large regional differences in the projected changes in the 200-year flood across Norway, with median ensemble projections ranging from −44% to +56% for the daily-averaged flood magnitude. These differences reflect the relative importance of rainfall vs. snowmelt as the dominant flood generating process in different regions, at differing altitudes and as a function of catchment area, in addition to dominant storm tracks. Variance decomposition is used to assess the relative contributions of the following components to the total spread (given by the 5 to 95% range) in the ensemble for each
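
    The estimation step described above (fitting a GEV distribution to an annual maximum series, reading off the 200-year return level, and bracketing it with bootstrap confidence limits) can be sketched as follows. The synthetic annual maximum series is a hypothetical stand-in for the HBV-simulated series.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
annual_max = genextreme.rvs(c=-0.1, loc=100, scale=30, size=30, random_state=rng)  # synthetic AMS

def return_level(sample, return_period=200):
    """Fit a three-parameter GEV and return the T-year return level."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1.0 - 1.0 / return_period, c, loc=loc, scale=scale)

q200 = return_level(annual_max)

# Bootstrap 95% confidence interval for the 200-year flood estimate.
boot = np.array([return_level(rng.choice(annual_max, size=len(annual_max), replace=True))
                 for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"200-year flood: {q200:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```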

  2. Applying clustering approach in predictive uncertainty estimation: a case study with the UNEEC method

    Science.gov (United States)

    Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga

    2014-05-01

    Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. There are several uncertainty analysis and/or prediction methods available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals occurring in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without making any distributional assumptions and is based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for the 50% and 90% confidence levels) with those obtained from uniform interval and quantile regression methods. An important part of the basis by which the methods are compared is analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality - the weighted mean prediction interval (WMPI) - is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values in addition to discharge and effective rainfall are considered. The results show that UNEEC's notable characteristic in its methodology, i.e. applying clustering to data of predictors upon which catchment behaviour information is encapsulated, contributes to increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
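
    The clustering idea at the heart of UNEEC, grouping input conditions (e.g. discharge, effective rainfall, past errors) and characterizing the model residuals separately within each cluster, can be sketched as below. The synthetic data and the choice of k-means are illustrative assumptions rather than the method's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical predictors (discharge, effective rainfall, past error) and model
# residuals whose spread grows with discharge (heteroscedastic by construction).
n = 2000
X = np.column_stack([rng.gamma(2.0, 10.0, n), rng.gamma(1.5, 3.0, n), rng.normal(0, 1, n)])
residuals = rng.normal(0.0, 0.05 * X[:, 0], n)

# Cluster the predictor space and store empirical residual quantiles per cluster.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
labels = km.labels_
bounds = {k: np.percentile(residuals[labels == k], [5, 95]) for k in range(5)}

# 90% prediction interval for a new condition: find its cluster, add its residual
# quantiles to the deterministic model prediction (here a hypothetical value of 50.0).
x_new = np.array([[40.0, 5.0, 0.2]])
k_new = km.predict(x_new)[0]
lower, upper = 50.0 + bounds[k_new]
print(f"90% prediction interval: [{lower:.1f}, {upper:.1f}]")
```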

  3. A Systematic Methodology for Uncertainty Analysis of Group Contribution Based and Atom Connectivity Index Based Models for Estimation of Properties of Pure Components

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Sin, Gürkan

    One of the most widely employed group contribution methods for estimation of properties of pure components is the Marrero and Gani (MG) method. For a given component whose molecular structure is not completely described by any of the available groups, the group contribution+ method (combined MG method...... and atomic connectivity index method) has been employed to create the missing groups and predict their contributions through the regressed contributions of connectivity indices. The objective of this work is to develop a systematic methodology to carry out uncertainty analysis of group contribution based...... and atom connectivity index based property prediction models. This includes: (i) parameter estimation using available MG based property prediction models and large training sets to determine improved group and atom contributions; and (ii) uncertainty analysis to establish statistical information...

  4. Integration of rain gauge measurement errors with the overall rainfall uncertainty estimation using kriging methods

    Science.gov (United States)

    Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei

    2016-04-01

    In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data are correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best-practice knowledge can be limiting factors in correct rain gauge network management. In these cases, the accuracy of rain gauges can drop drastically and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors in the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through (1) block kriging on a single rain gauge, (2) ordinary kriging on a network of different rain gauges, and (3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high-quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower-quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all cases by increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher- and lower-quality rain gauges. For the kriging with
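
    Inflating the nugget in proportion to each gauge's estimated error can be written compactly in a Gaussian-process / simple-kriging form, where per-gauge error variances are added to the diagonal of the covariance matrix. The sketch below uses an exponential covariance and hypothetical gauge locations, values, and error variances; it is not the study's configuration.

```python
import numpy as np

def exp_cov(d, sill=4.0, rang=10.0):
    """Exponential covariance model (hypothetical sill and range, in km)."""
    return sill * np.exp(-d / rang)

rng = np.random.default_rng(0)
xy = rng.uniform(0, 30, size=(7, 2))          # gauge coordinates (km), hypothetical
z = rng.gamma(2.0, 2.0, size=7)               # observed rainfall (mm), hypothetical
err_var = np.array([0.1, 0.1, 0.1, 1.0, 1.0, 1.0, 1.0])  # per-gauge error variance

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
C = exp_cov(d) + np.diag(err_var)             # measurement error inflates the nugget

x0 = np.array([15.0, 15.0])                   # prediction location
c0 = exp_cov(np.linalg.norm(xy - x0, axis=1))

w = np.linalg.solve(C, c0)                    # simple-kriging weights (known mean)
mu = z.mean()
pred = mu + w @ (z - mu)
var = exp_cov(0.0) - w @ c0                   # kriging variance at x0
print(f"Prediction {pred:.2f} mm, kriging std {np.sqrt(max(var, 0.0)):.2f} mm")
```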

  5. Estimation of measurement uncertainty of pesticides, polychlorinated biphenyls and polyaromatic hydrocarbons in sediments by using gas chromatography-mass spectrometry.

    Science.gov (United States)

    Pindado Jiménez, Oscar; Pérez Pastor, Rosa Ma

    2012-04-29

    The evaluation of the uncertainty associated with analytical methods is essential in order to demonstrate the quality of a result. However, there is often a lack of information about the uncertainty of methods used to estimate persistent organic pollutant concentrations in complex matrices. The current work thoroughly evaluated the uncertainty associated with the quantification of several organochlorine pesticides, PCBs and PAHs in sediments. A discussion of the main contributions to the overall uncertainty is reported, allowing the authors to establish the accuracy of results and plan future improvements. Combined uncertainties ranged between 5-9% (pesticides), 4-7% (PCBs) and 5-10% (PAHs), with the uncertainty derived from calibration being the main contribution. Also, the analytical procedure was validated by analysing a standard reference material (IAEA-408).

  6. Estimation of Uncertainty in Tracer Gas Measurement of Air Change Rates

    Directory of Open Access Journals (Sweden)

    Atsushi Iizuka

    2010-12-01

    Full Text Available Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is made more complex by the fact that many buildings are not a single fully mixed zone. This means many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of air change rates in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of

  7. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    Science.gov (United States)

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
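
    Representing an HSC curve as a fitted probability density function and choosing among candidate distributions with an information criterion can be sketched as follows (the paper provides R code; this is an equivalent Python sketch, and the depth observations below are synthetic placeholders rather than the Klamath River data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
depths = rng.gamma(shape=3.0, scale=0.25, size=300)      # synthetic depth-use data (m)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "weibull_min": stats.weibull_min}
results = {}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0)                     # fix location at 0 for depth data
    loglik = np.sum(dist.logpdf(depths, *params))
    k = len(params) - 1                                   # location is fixed, not estimated
    results[name] = (2 * k - 2 * loglik, params)          # AIC and fitted parameters

best = min(results, key=lambda n: results[n][0])
print({n: round(aic, 1) for n, (aic, _) in results.items()})
print("Selected distribution:", best)
```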

  8. ESTIMATING UNCERTAINTY OF EMISSIONS INVENTORIES: WHAT HAS BEEN DONE/WHAT NEEDS TO BE DONE.

    Energy Technology Data Exchange (ETDEWEB)

    BENKOVITZ,C.M.

    1998-10-01

    Developing scientifically defensible quantitative estimates of the uncertainty of atmospheric emissions inventories has been a "gleam in researchers' eyes" since atmospheric chemical transport and transformation models (CTMs) started to be used to study "air pollution". Originally, the compilation of these inventories was done as part of the development and application of the models by researchers whose expertise usually did not include the "art" of emissions estimation. In general, the smaller the effort spent on compiling the inventories, the more effort could be placed on model development, application and analysis. Yet model results are intimately tied to the accuracy of the emissions data; no model, however accurately the atmospheric physical and chemical processes are represented, will give a reliable representation of air concentrations if the emissions data are flawed.

  9. Modeling the potential area of occupancy at fine resolution may reduce uncertainty in species range estimates

    DEFF Research Database (Denmark)

    Jiménez-Alfaro, Borja; Draper, David; Nogues, David Bravo

    2012-01-01

    Area of Occupancy (AOO) is a measure of species geographical ranges commonly used for species red listing. In most cases, AOO is estimated using reported localities of species distributions at coarse grain resolution, providing measures subject to uncertainties of data quality and spatial resolution. To illustrate the ability of fine-resolution species distribution models for obtaining new measures of species ranges and their impact in conservation planning, we estimate the potential AOO of an endangered species in alpine environments. We use field occurrences of relict Empetrum nigrum...... Area (MPA). As defined here, the potential AOO provides spatially-explicit measures of species ranges which are permanent in time and scarcely affected by sampling bias. The overestimation of these measures may be reduced using higher thresholds of habitat suitability, but standard rules as the MPA...

  10. Estimating basic wood density and its uncertainty for Pinus densiflora in the Republic of Korea

    Directory of Open Access Journals (Sweden)

    Jung Kee Pyo

    2012-06-01

    Full Text Available According to the Intergovernmental Panel on Climate Change (IPCC) guidelines, uncertainty assessment is an important aspect of a greenhouse gas inventory, and effort should be made to incorporate it into the reporting. The goal of this study was to estimate basic wood density (BWD) and its uncertainty for Pinus densiflora (Siebold & Zucc.) in Korea. In this study, P. densiflora forests throughout the country were divided into two regional variants: the Gangwon region variant, distributed over the northeastern part of the country, and the central region variant. A total of 36 representative sampling plots were selected in both regions to collect sample trees for destructive sampling. The trees were selected considering the distributions of tree age and diameter at breast height. Hypothesis testing was carried out to test BWD differences between two age groups (i.e., ages over and under 20) and differences between the two regions. The test suggested that there was no statistically significant difference between the two age classes. On the other hand, it suggested strong evidence of a statistically significant difference between the regions. The BWD and its uncertainty were 0.418 g/cm3 and 11.9% for the Gangwon region, whereas they were 0.471 g/cm3 and 3.8% for the central region. As a result, the estimated BWD for P. densiflora was more precise than the value provided by the IPCC guidelines.
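
    Uncertainty figures of this kind are commonly expressed, following IPCC good-practice convention, as the half-width of the 95% confidence interval of the mean as a percentage of the mean. A minimal sketch with hypothetical sample values:

```python
import numpy as np
from scipy import stats

def ipcc_uncertainty(sample):
    """Half-width of the 95% confidence interval of the mean, as % of the mean."""
    sample = np.asarray(sample, dtype=float)
    se = sample.std(ddof=1) / np.sqrt(len(sample))
    t = stats.t.ppf(0.975, df=len(sample) - 1)
    return 100.0 * t * se / sample.mean()

# Hypothetical basic wood density measurements (g/cm^3) for one region.
bwd = np.array([0.44, 0.47, 0.49, 0.46, 0.48, 0.45, 0.50, 0.47])
print(f"Mean BWD {bwd.mean():.3f} g/cm3, uncertainty {ipcc_uncertainty(bwd):.1f}%")
```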

  11. Estimation of Model and Parameter Uncertainty For A Distributed Rainfall-runoff Model

    Science.gov (United States)

    Engeland, K.

    The distributed rainfall-runoff model Ecomag is applied as a regional model for nine catchments in the NOPEX area in Sweden. Ecomag calculates streamflow at a daily time resolution. The posterior distribution of the model parameters is conditioned on the observed streamflow in all nine catchments, and calculated using Bayesian statistics. The distribution is estimated by Markov chain Monte Carlo (MCMC). The Bayesian method requires a definition of the likelihood of the parameters. Two alternative formulations are used. The first formulation is a subjectively chosen objective function describing the goodness of fit between the simulated and observed streamflow, as it is used in the GLUE framework. The second formulation is to use a more statistically correct likelihood function that describes the simulation errors. The simulation error is defined as the difference between log-transformed observed and simulated streamflows. A statistical model for the simulation errors is constructed. Some parameters are dependent on the catchment, while others depend on climate. The statistical and the hydrological parameters are estimated simultaneously. Confidence intervals for the simulated streamflow, due to the uncertainty of the Ecomag parameters, are compared for the two likelihood functions. Confidence intervals based on the statistical model for the simulation errors are also calculated. The results indicate that the parameter uncertainty depends on the formulation of the likelihood function. The subjectively chosen likelihood function gives relatively wide confidence intervals, whereas the 'statistical' likelihood function gives more narrow confidence intervals. The statistical model for the simulation errors indicates that the structural errors of the model are at least as important as the parameter uncertainty.

  12. Towards national-scale greenhouse gas emissions evaluation with robust uncertainty estimates

    Science.gov (United States)

    Rigby, Matthew; Swallow, Ben; Lunt, Mark; Manning, Alistair; Ganesan, Anita; Stavert, Ann; Stanley, Kieran; O'Doherty, Simon

    2016-04-01

    Through the Deriving Emissions related to Climate Change (DECC) network and the Greenhouse gAs Uk and Global Emissions (GAUGE) programme, the UK's greenhouse gases are now monitored by instruments mounted on telecommunications towers and churches, on a ferry that performs regular transects of the North Sea, on-board a research aircraft and from space. When combined with information from high-resolution chemical transport models such as the Met Office Numerical Atmospheric dispersion Modelling Environment (NAME), these measurements are allowing us to evaluate emissions more accurately than has previously been possible. However, it has long been appreciated that current methods for quantifying fluxes using atmospheric data suffer from uncertainties, primarily relating to the chemical transport model, that have been largely ignored to date. Here, we use novel model reduction techniques for quantifying the influence of a set of potential systematic model errors on the outcome of a national-scale inversion. This new technique has been incorporated into a hierarchical Bayesian framework, which can be shown to reduce the influence of subjective choices on the outcome of inverse modelling studies. Using estimates of the UK's methane emissions derived from DECC and GAUGE tall-tower measurements as a case study, we will show that such model systematic errors have the potential to significantly increase the uncertainty on national-scale emissions estimates. Therefore, we conclude that these factors must be incorporated in national emissions evaluation efforts, if they are to be credible.

  13. Bayesian Mass Estimates of the Milky Way: Including Measurement Uncertainties with Hierarchical Bayes

    Science.gov (United States)

    Eadie, Gwendolyn M.; Springford, Aaron; Harris, William E.

    2017-02-01

    We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie et al. and Eadie and Harris and builds upon the preliminary reports by Eadie et al. The method uses a distribution function f(E, L) to model the Galaxy and kinematic data from satellite objects, such as globular clusters (GCs), to trace the Galaxy's gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie and Harris and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and incorporate all possible GC data, finding a cumulative mass profile with Bayesian credible regions. This profile implies a mass within 125 kpc of 4.8 × 10^11 M_⊙ with a 95% Bayesian credible region of (4.0–5.8) × 10^11 M_⊙. Our results also provide estimates of the true specific energies of all the GCs. By comparing these estimated energies to the measured energies of GCs with complete velocity measurements, we observe that (the few) remote tracers with complete measurements may play a large role in determining a total mass estimate of the Galaxy. Thus, our study stresses the need for more remote tracers with complete velocity measurements.

  14. High spatial resolution Land Surface Temperature estimation over urban areas with uncertainty indices

    Science.gov (United States)

    Mitraka, Zina; Lazzarini, Michele; Doxani, Georgia; Del Frate, Fabio; Ghedira, Hosni

    2014-05-01

    Land Surface Temperature (LST) is a key variable for studying land surface processes and interactions with the atmosphere and it is listed in the Earth System Data Records (ESDRs) identified by international organizations like the Global Climate Observing System. It is a valuable source of information for a range of topics in earth sciences and essential for urban climatology studies. Detailed, frequent and accurate LST mapping may support various urban applications, such as monitoring of the urban heat island. Currently, no spaceborne instruments provide frequent thermal imagery at high spatial resolution, thus there is a need for synergistic algorithms that combine different kinds of data for LST retrieval. Moreover, knowing the confidence level of any satellite-derived product is highly important to the users, especially when referred to the urban environment, which is extremely heterogeneous. The developed method employs spatial-spectral unmixing techniques for improving the spatial resolution of thermal measurements, combines spectral library information for emissivity estimation and applies a split-window algorithm to estimate LST with an uncertainty estimate inserted in the final product. A synergistic algorithm that utilizes the spatial information provided by visible and near-infrared measurements with more frequent low-resolution thermal measurements provides excellent means for high spatial resolution LST estimation. Given the low spatial resolution of thermal infrared sensors, the measured radiation is a combination of radiances of different surface types. High spatial resolution information is used to quantify the different surface types in each pixel and then the measured radiance of each pixel is decomposed. The several difficulties in retrieving LST from space measurements, mainly related to the temperature-emissivity coupling and the atmospheric contribution to the thermal measurements, and the measurements themselves, introduce uncertainties in the final

  15. Estimation of uncertainty bounds for the future performance of a power plant

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2009-01-01

    on recent data and the other is based on operating points as well. The third proposed scheme uses dynamical models of the prediction uncertainties, as in H-infinity control. The proposed schemes are subsequently applied to experimental data from a coal-fired power plant. Two sets of data from an actual..... In addition, the plant was simulated operating under the same conditions with additional large disturbances. These simulations were used to investigate the robustness and conservatism of the proposed schemes. In this test Schemes I and II failed, while Scheme III passed.

  16. Estimation of Properties of Pure Components Using Improved Group-Contribution+ (GC+) Based Models and Uncertainty Analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens

    they must be estimated. Predictive methods such as the group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI) method) are generally suitable for estimating the needed property values. For assessing the quality and reliability of the selected property...... the estimated property values using the GC+ approach, but also the uncertainties in the estimated property values. This feature allows one to evaluate the effects of these uncertainties on product-process design calculations, thereby contributing to better-informed and reliable engineering solutions.

  17. Application of the Nordtest method for "real-time" uncertainty estimation of on-line field measurement.

    Science.gov (United States)

    Näykki, Teemu; Virtanen, Atte; Kaukonen, Lari; Magnusson, Bertil; Väisänen, Tero; Leito, Ivo

    2015-10-01

    Field sensor measurements are becoming more common for environmental monitoring. Solutions for enhancing reliability, i.e. knowledge of the measurement uncertainty of field measurements, are urgently needed. Real-time estimation of measurement uncertainty for field measurements has not previously been published, and in this paper a novel approach to an automated turbidity measuring system with an application for "real-time" uncertainty estimation is outlined, based on the measurement uncertainty estimation principles of the Nordtest handbook. The term real-time is written in quotation marks, since the calculation of the uncertainty is carried out using a set of past measurement results. There are two main requirements for the estimation of real-time measurement uncertainty of on-line field measurement described in this paper: (1) setting up an automated measuring system that can be (preferably remotely) controlled, which measures the samples (the water to be investigated as well as synthetic control samples) the way the user has programmed it and stores the results in a database; (2) setting up automated data processing (software) in which the measurement uncertainty is calculated from the data produced by the automated measuring system. When control samples with a known value or concentration are measured regularly, any instrumental drift can be detected. An additional benefit is that small drift can be taken into account (in real time) as a bias value in the measurement uncertainty calculation, and if the drift is large, the measurement results of the control samples can be used for real-time recalibration of the measuring device. The procedure described in this paper is not restricted to turbidity measurements, but will enable measurement uncertainty estimation for any kind of automated measuring system that performs sequential measurements of routine samples and control samples/reference materials in a similar way as described in this paper.
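
    The Nordtest approach combines a within-laboratory reproducibility component, taken from repeated control-sample results, with a bias component taken from reference or control values, as u_c = sqrt(u(Rw)^2 + u(bias)^2) and U = 2*u_c. A simplified sketch with hypothetical control-sample data (a single control level, standing in for the rolling set of past results the paper uses):

```python
import numpy as np

# Hypothetical control-sample results measured repeatedly over time (e.g. turbidity, NTU),
# together with their assigned reference value and its standard uncertainty.
control = np.array([5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0, 5.4, 4.9])
reference_value = 5.0
u_ref = 0.05

# Within-laboratory reproducibility from the spread of control results (relative).
u_rw = control.std(ddof=1) / control.mean()

# Bias component: observed relative bias combined with the reference uncertainty.
bias = (control.mean() - reference_value) / reference_value
u_bias = np.sqrt(bias**2 + (u_ref / reference_value)**2)

u_c = np.sqrt(u_rw**2 + u_bias**2)     # combined relative standard uncertainty
U = 2.0 * u_c                          # expanded uncertainty (coverage factor k = 2)
print(f"Expanded measurement uncertainty: {100 * U:.1f}%")
```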

  18. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    Science.gov (United States)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The
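
    The multilevel Monte Carlo idea, estimating the expectation on a coarse, cheap level and correcting it with paired fine/coarse differences on progressively finer levels, can be illustrated on a toy quantity of interest. The "simulator" below is a hypothetical stand-in for a pore-scale solver whose discretization error shrinks as the level increases.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(sample, level):
    """Toy 'pore-scale' model: the true QoI is sample**2, plus a discretization
    error that decays geometrically with the refinement level (hypothetical)."""
    return sample**2 + 0.5 * 2.0**(-level) * np.sin(20.0 * sample)

def mlmc_estimate(n_per_level):
    """Telescoping MLMC estimator: E[P_0] + sum_l E[P_l - P_{l-1}]."""
    total = 0.0
    for level, n in enumerate(n_per_level):
        samples = rng.normal(0.0, 1.0, size=n)       # shared random input per level pair
        fine = simulator(samples, level)
        if level == 0:
            total += fine.mean()
        else:
            coarse = simulator(samples, level - 1)
            total += (fine - coarse).mean()          # correction term, small variance
    return total

# Most samples on the cheap coarse level, few on the expensive fine levels.
print("MLMC estimate of E[QoI]:", mlmc_estimate([4000, 1000, 250, 60]))
# The exact expectation of sample**2 for a standard normal input is 1.0.
```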

  19. On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo

    2016-02-08

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [2015. https://bitbucket.org/micardi/porescalemc.] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers

  20. Uncertainty of Forest Biomass Estimates in North Temperate Forests Due to Allometry: Implications for Remote Sensing

    Directory of Open Access Journals (Sweden)

    Razi Ahmed

    2013-06-01

    Full Text Available Estimates of above ground biomass density in forests are crucial for refining global climate models and understanding climate change. Although data from field studies can be aggregated to estimate carbon stocks on global scales, the sparsity of such field data, temporal heterogeneity and methodological variations introduce large errors. Remote sensing measurements from spaceborne sensors are a realistic alternative for global carbon accounting; however, the uncertainty of such measurements is not well known and remains an active area of research. This article describes an effort to collect field data at the Harvard and Howland Forest sites, set in the temperate forests of the Northeastern United States in an attempt to establish ground truth forest biomass for calibration of remote sensing measurements. We present an assessment of the quality of ground truth biomass estimates derived from three different sets of diameter-based allometric equations over the Harvard and Howland Forests to establish the contribution of errors in ground truth data to the error in biomass estimates from remote sensing measurements.
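
    A common form for the diameter-based allometric equations referenced here is AGB = a * DBH^b; the uncertainty attributable to allometry can be gauged by applying alternative coefficient sets to the same stem list and comparing the plot-level totals. The coefficients and diameters below are hypothetical illustrations, not the three equation sets used over the Harvard and Howland Forests.

```python
import numpy as np

rng = np.random.default_rng(0)
dbh_cm = rng.gamma(shape=4.0, scale=6.0, size=500)        # hypothetical stem diameters (cm)

# Three hypothetical allometric coefficient sets for AGB (kg) = a * DBH(cm)^b.
equation_sets = {"set_A": (0.12, 2.40), "set_B": (0.09, 2.50), "set_C": (0.15, 2.33)}

plot_area_ha = 1.0
estimates = {name: (a * dbh_cm**b).sum() / plot_area_ha / 1000.0   # Mg/ha
             for name, (a, b) in equation_sets.items()}

values = np.array(list(estimates.values()))
print({k: round(v, 1) for k, v in estimates.items()})
print(f"Spread due to allometry: {100 * values.std(ddof=1) / values.mean():.1f}% of the mean")
```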

  1. Routine internal- and external-quality control data in clinical laboratories for estimating measurement and diagnostic uncertainty using GUM principles.

    Science.gov (United States)

    Magnusson, Bertil; Ossowicki, Haakan; Rienitz, Olaf; Theodorsson, Elvar

    2012-05-01

    Healthcare laboratories are increasingly joining into larger laboratory organizations encompassing several physical laboratories. This caters for important new opportunities for re-defining the concept of a 'laboratory' to encompass all laboratories and measurement methods measuring the same measurand for a population of patients. In order to make measurement results comparable, bias should be minimized or eliminated and measurement uncertainty properly evaluated for all methods used for a particular patient population. The measurement as well as diagnostic uncertainty can be evaluated from internal and external quality control results using GUM principles. In this paper the uncertainty evaluations are described in detail using only two main components, within-laboratory reproducibility and uncertainty of the bias component according to a Nordtest guideline. The evaluation is exemplified for the determination of creatinine in serum for a conglomerate of laboratories, expressed both in absolute units (μmol/L) and relative (%). An expanded measurement uncertainty of 12 μmol/L associated with concentrations of creatinine below 120 μmol/L and of 10% associated with concentrations above 120 μmol/L was estimated. The diagnostic uncertainty encompasses both measurement uncertainty and biological variation, and can be estimated for a single value and for a difference. This diagnostic uncertainty for the difference between two samples from the same patient was determined to be 14 μmol/L associated with concentrations of creatinine below 100 μmol/L and 14% associated with concentrations above 100 μmol/L.

  2. Uncertainty quantification techniques for population density estimates derived from sparse open source data

    Science.gov (United States)

    Stewart, Robert; White, Devin; Urban, Marie; Morton, April; Webster, Clayton; Stoyanov, Miroslav; Bright, Eddie; Bhaduri, Budhendra L.

    2013-05-01

    The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.

  3. Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis

    Science.gov (United States)

    Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.

    2016-07-01

    In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.

  4. Uncertainty Quantification Techniques for Population Density Estimates Derived from Sparse Open Source Data

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Robert N [ORNL; White, Devin A [ORNL; Urban, Marie L [ORNL; Morton, April M [ORNL; Webster, Clayton G [ORNL; Stoyanov, Miroslav K [ORNL; Bright, Eddie A [ORNL; Bhaduri, Budhendra L [ORNL

    2013-01-01

    The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.

  5. GPZ: non-stationary sparse Gaussian processes for heteroscedastic uncertainty estimation in photometric redshifts

    Science.gov (United States)

    Almosallam, Ibrahim A.; Jarvis, Matt J.; Roberts, Stephen J.

    2016-10-01

    The next generation of cosmology experiments will be required to use photometric redshifts rather than spectroscopic redshifts. Obtaining accurate and well-characterized photometric redshift distributions is therefore critical for Euclid, the Large Synoptic Survey Telescope and the Square Kilometre Array. However, determining accurate variance predictions alongside single point estimates is crucial, as they can be used to optimize the sample of galaxies for the specific experiment (e.g. weak lensing, baryon acoustic oscillations, supernovae), trading off between completeness and reliability in the galaxy sample. The various sources of uncertainty in measurements of the photometry and redshifts put a lower bound on the accuracy that any model can hope to achieve. The intrinsic uncertainty associated with estimates is often non-uniform and input-dependent, commonly known in statistics as heteroscedastic noise. However, existing approaches are susceptible to outliers and do not take into account variance induced by non-uniform data density and in most cases require manual tuning of many parameters. In this paper, we present a Bayesian machine learning approach that jointly optimizes the model with respect to both the predictive mean and variance, which we refer to as Gaussian processes for photometric redshifts (GPZ). The predictive variance of the model takes into account both the variance due to data density and photometric noise. Using the Sloan Digital Sky Survey (SDSS) DR12 data, we show that our approach substantially outperforms other machine learning methods for photo-z estimation and their associated variance, such as TPZ and ANNZ2. We provide MATLAB and Python implementations that are available to download at https://github.com/OxfordML/GPz.
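
    A minimal two-stage sketch of heteroscedastic (input-dependent) noise estimation is shown below; it fits one Gaussian process to the targets and a second to the log squared residuals. This is a generic illustration of the idea, not the GPZ algorithm or its sparse parameterization.

```python
# Two-stage heteroscedastic regression sketch (generic, not GPz itself):
# GP #1 models the mean, GP #2 models the log squared residuals, giving an
# input-dependent noise level in addition to the latent-function uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, 200)[:, None]
noise_sd = 0.1 + 0.2 * X[:, 0] / 10.0          # noise grows with the input
y = np.sin(X[:, 0]) + rng.normal(0, noise_sd)

mean_gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.1), normalize_y=True).fit(X, y)
resid = y - mean_gp.predict(X)
noise_gp = GaussianProcessRegressor(RBF(2.0) + WhiteKernel(0.1)).fit(X, np.log(resid**2 + 1e-6))

X_test = np.linspace(0, 10, 5)[:, None]
mu, sd_latent = mean_gp.predict(X_test, return_std=True)
sd_noise = np.sqrt(np.exp(noise_gp.predict(X_test)))   # input-dependent noise level
print(np.c_[mu, sd_latent, sd_noise])
```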

  6. Inverse Estimation of California Methane Emissions and Their Uncertainties using FLEXPART-WRF

    Science.gov (United States)

    Cui, Y.; Brioude, J. F.; Angevine, W. M.; McKeen, S. A.; Peischl, J.; Nowak, J. B.; Henze, D. K.; Bousserez, N.; Fischer, M. L.; Jeong, S.; Liu, Z.; Michelsen, H. A.; Santoni, G.; Daube, B. C.; Kort, E. A.; Frost, G. J.; Ryerson, T. B.; Wofsy, S. C.; Trainer, M.

    2015-12-01

    Methane (CH4) has a large global warming potential and mediates global tropospheric chemistry. In California, CH4 emissions estimates derived from "top-down" methods based on atmospheric observations have been found to be greater than expected from "bottom-up" population-apportioned national and state inventories. Differences between bottom-up and top-down estimates suggest that the understanding of California's CH4 sources is incomplete, leading to uncertainty in the application of regulations to mitigate regional CH4 emissions. In this study, we use airborne measurements from the California research at the Nexus of Air Quality and Climate Change (CalNex) campaign in 2010 to estimate CH4 emissions in the South Coast Air Basin (SoCAB), which includes California's largest metropolitan area (Los Angeles), and in the Central Valley, California's main agricultural and livestock management area. Measurements from 12 daytime flights, prior information from national and regional official inventories (e.g. US EPA's National Emission Inventory, the California Air Resources Board inventories, the Liu et al. Hybrid Inventory, and the California Greenhouse Gas Emissions Measurement dataset), and the FLEXPART-WRF transport model are used in our mesoscale Bayesian inverse system. We compare our optimized posterior CH4 inventory to the prior bottom-up inventories in terms of total emissions (Mg CH4/hr) and the spatial distribution of the emissions (0.1 degree), and quantify uncertainties in our posterior estimates. Our inversions show that the oil and natural gas industry (extraction, processing and distribution) is the main source accounting for the gap between top-down and bottom-up inventories over the SoCAB, while dairy farms are the largest CH4 source in the Central Valley. CH4 emissions of dairy farms in the San Joaquin Valley and variations of CH4 emissions in the rice-growing regions of Sacramento Valley are quantified and discussed. We also estimate CO and NH3 surface
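
    The generic linear-Gaussian form behind such mesoscale Bayesian inversions can be written as y = Hx + e, with a prior x ~ N(x_b, B) and observation errors e ~ N(0, R); a minimal sketch with illustrative numbers (not the CalNex footprints or inventories) is given below.

```python
# Minimal linear Gaussian Bayesian inversion sketch; all values are illustrative.
import numpy as np

H = np.array([[0.6, 0.4],              # Jacobian/footprints from a transport model
              [0.2, 0.8],
              [0.5, 0.5]])
x_b = np.array([10.0, 20.0])           # prior emissions for two source regions
B = np.diag([25.0, 25.0])              # prior error covariance (50% 1-sigma errors)
y = np.array([18.0, 21.0, 19.0])       # observed enhancements
R = np.eye(3)                          # observation error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_post = x_b + K @ (y - H @ x_b)               # posterior (optimized) emissions
A_post = (np.eye(2) - K @ H) @ B               # posterior error covariance
print(x_post, np.sqrt(np.diag(A_post)))        # posterior means and 1-sigma uncertainties
```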

  7. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Unal, Cetin [Los Alamos National Laboratory; Williams, Brian [Los Alamos National Laboratory; Mc Clure, Patrick [Los Alamos National Laboratory; Nelson, Ralph A [IDAHO NATIONAL LAB

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments were the primary tool for design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate models leading to predictive simulations. Cost saving goals of programs will require us to minimize the required number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities in the validation of predictive tools. Traditional methodologies will have to be modified to address these arising issues. This paper lays out the basic aspects of a methodology that can be potentially used to address these new challenges in design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety related environments. To achieve this goal requires the additional steps of estimating the domain of validation and quantification of uncertainties that allow for extension of results to areas of the validation domain that are not directly tested with experiments, which might include extension of the modeling and simulation (M&S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost

  8. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    Science.gov (United States)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.

    2014-10-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2 σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the over-all uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2 σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use, thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the
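
    The effect of temporally correlated random error on a decadal mean can be illustrated with a simple AR(1) error model (numbers below are illustrative, not the paper's estimates): correlated annual errors average out far more slowly than independent ones.

```python
# Sketch: uncertainty of a decadal mean under AR(1)-correlated annual errors
# versus independent errors. sigma and rho are illustrative values only.
import numpy as np

sigma = 0.5    # 1-sigma error of one annual estimate (Pg C/yr), illustrative
rho = 0.95     # year-to-year error correlation, illustrative
n = 10         # years in the decadal mean

idx = np.arange(n)
cov = sigma**2 * rho ** np.abs(idx[:, None] - idx[None, :])  # AR(1) covariance

w = np.full(n, 1.0 / n)                       # equal weights of the decadal mean
sd_correlated = np.sqrt(w @ cov @ w)
sd_independent = sigma / np.sqrt(n)
print(sd_correlated, sd_independent)
```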

  9. Uncertainty in urban flood damage assessment due to urban drainage modelling and depth-damage curve estimation.

    Science.gov (United States)

    Freni, G; La Loggia, G; Notaro, V

    2010-01-01

    Due to the increased occurrence of flooding events in urban areas, many procedures for flood damage quantification have been defined in recent decades. The lack of large databases in most cases is overcome by combining the output of urban drainage models and damage curves linking flooding to expected damage. The application of advanced hydraulic models as diagnostic, design and decision-making support tools has become a standard practice in hydraulic research and application. Flooding damage functions are usually evaluated by a priori estimation of potential damage (based on the value of exposed goods) or by interpolating real damage data (recorded during historical flooding events). Hydraulic models have undergone continuous advancements, pushed forward by increasing computer capacity. The details of the flooding propagation process on the surface and the details of the interconnections between underground and surface drainage systems have been studied extensively in recent years, resulting in progressively more reliable models. The same level of advancement has not been reached with regard to damage curves, for which improvements are highly connected to data availability; this remains the main bottleneck in the expected flooding damage estimation. Such functions are usually affected by significant uncertainty intrinsically related to the collected data and to the simplified structure of the adopted functional relationships. The present paper aimed to evaluate this uncertainty by comparing the intrinsic uncertainty connected to the construction of the damage-depth function to the hydraulic model uncertainty. In this way, the paper sought to evaluate the role of hydraulic model detail level in the wider context of flood damage estimation. This paper demonstrated that the use of detailed hydraulic models might not be justified because of the higher computational cost and the significant uncertainty in damage estimation curves. This uncertainty occurs mainly
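
    A toy Monte Carlo comparison of the two uncertainty sources discussed above (water-depth uncertainty from the hydraulic model versus uncertainty in the depth-damage curve itself) can be sketched as follows; the curve and all numbers are illustrative.

```python
# Sketch: propagate (a) depth uncertainty and (b) damage-curve uncertainty
# separately through an illustrative depth-damage function.
import numpy as np

rng = np.random.default_rng(2)

def damage_fraction(depth, scale=1.0):
    """Illustrative depth-damage curve: damage fraction saturating with depth."""
    return np.clip(scale * (1.0 - np.exp(-depth / 0.8)), 0.0, 1.0)

depth_mean, depth_sd = 0.6, 0.15   # m; hydraulic-model output and its uncertainty
curve_scale_sd = 0.25              # relative uncertainty of the damage curve

n = 100_000
depths = rng.normal(depth_mean, depth_sd, n).clip(min=0.0)
scales = rng.normal(1.0, curve_scale_sd, n).clip(min=0.0)

print("depth uncertainty only :", damage_fraction(depths, 1.0).std())
print("curve uncertainty only :", damage_fraction(depth_mean, scales).std())
```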

  10. On-the-fly estimation strategy for uncertainty propagation in two-step Monte Carlo calculation for residual radiation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Han, Gi Young; Seo, Bo Kyun [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Sun, Gwang Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-06-15

    In analyzing residual radiation, researchers generally use a two-step Monte Carlo (MC) simulation. The first step (MC1) simulates neutron transport, and the second step (MC2) transports the decay photons emitted from the activated materials. In this process, the stochastic uncertainty estimated by the MC2 appears only as a final result, but it is underestimated because the stochastic error generated in MC1 cannot be directly included in MC2. Hence, estimating the true stochastic uncertainty requires quantifying the propagation degree of the stochastic error in MC1. The brute force technique is a straightforward method to estimate the true uncertainty. However, it is a costly method to obtain reliable results. Another method, called the adjoint-based method, can reduce the computational time needed to evaluate the true uncertainty; however, there are limitations. To address those limitations, we propose a new strategy to estimate uncertainty propagation without any additional calculations in two-step MC simulations. To verify the proposed method, we applied it to activation benchmark problems and compared the results with those of previous methods. The results show that the proposed method increases the applicability and user-friendliness while preserving accuracy in quantifying uncertainty propagation. We expect that the proposed strategy will contribute to efficient and accurate two-step MC calculations.
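
    As a back-of-the-envelope illustration (not the strategy proposed in the paper): if the MC2 response scales linearly with the photon source built from the MC1 result, the relative variances add to first order, so the MC2-only error understates the total.

```python
# First-order combination of the MC1 and MC2 relative stochastic errors for a
# response proportional to the MC1-derived source; values are illustrative.
import math

rel_err_mc1 = 0.03   # relative stochastic error of the MC1 activation source
rel_err_mc2 = 0.02   # relative stochastic error reported by the MC2 photon transport

rel_err_total = math.sqrt(rel_err_mc1**2 + rel_err_mc2**2)
print(f"MC2-only error: {rel_err_mc2:.3f}, propagated total: {rel_err_total:.3f}")
```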

  11. Introducing uncertainty of radar-rainfall estimates to the verification of mesoscale model precipitation forecasts

    Directory of Open Access Journals (Sweden)

    M. P. Mittermaier

    2008-05-01

    A simple measure of the uncertainty associated with using radar-derived rainfall estimates as "truth" has been introduced to the Numerical Weather Prediction (NWP) verification process to assess the effect on forecast skill and errors. Deterministic precipitation forecasts from the mesoscale version of the UK Met Office Unified Model for a two-day high-impact event and for a month were verified at the daily and six-hourly time scale using a spatially-based intensity-scale method and various traditional skill scores such as the Equitable Threat Score (ETS) and log-odds ratio. Radar-rainfall accumulations from the UK Nimrod radar-composite were used.

    The results show that the inclusion of uncertainty has some effect, shifting the forecast errors and skill. The study also allowed for the comparison of results from the intensity-scale method and traditional skill scores. It showed that the two methods complement each other, one detailing the scale and rainfall accumulation thresholds where the errors occur, the other showing how skillful the forecast is. It was also found that for the six-hourly forecasts the error distributions remain similar with forecast lead time but skill decreases. This highlights the difference between forecast error and forecast skill, and that they are not necessarily the same.

  12. Introducing uncertainty of radar-rainfall estimates to the verification of mesoscale model precipitation forecasts

    Science.gov (United States)

    Mittermaier, M. P.

    2008-05-01

    A simple measure of the uncertainty associated with using radar-derived rainfall estimates as "truth" has been introduced to the Numerical Weather Prediction (NWP) verification process to assess the effect on forecast skill and errors. Deterministic precipitation forecasts from the mesoscale version of the UK Met Office Unified Model for a two-day high-impact event and for a month were verified at the daily and six-hourly time scale using a spatially-based intensity-scale method and various traditional skill scores such as the Equitable Threat Score (ETS) and log-odds ratio. Radar-rainfall accumulations from the UK Nimrod radar-composite were used. The results show that the inclusion of uncertainty has some effect, shifting the forecast errors and skill. The study also allowed for the comparison of results from the intensity-scale method and traditional skill scores. It showed that the two methods complement each other, one detailing the scale and rainfall accumulation thresholds where the errors occur, the other showing how skillful the forecast is. It was also found that for the six-hourly forecasts the error distributions remain similar with forecast lead time but skill decreases. This highlights the difference between forecast error and forecast skill, and that they are not necessarily the same.
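
    The two categorical measures named in the records above can be computed from a 2x2 contingency table of forecast/observed threshold exceedances; the counts below are illustrative.

```python
# Equitable Threat Score and log-odds ratio from a 2x2 contingency table.
import math

def ets_and_log_odds(hits, false_alarms, misses, correct_negatives):
    n = hits + false_alarms + misses + correct_negatives
    hits_random = (hits + false_alarms) * (hits + misses) / n   # hits expected by chance
    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    log_odds = math.log((hits * correct_negatives) / (false_alarms * misses))
    return ets, log_odds

print(ets_and_log_odds(hits=120, false_alarms=60, misses=45, correct_negatives=775))
```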

  13. Uncertainty in Estimation of Bioenergy Induced LULC Change: Development of a New Change Detection Technique.

    Science.gov (United States)

    Singh, N.; Vatsavai, R. R.; Patlolla, D.; Bhaduri, B. L.; Lim, S. J.

    2015-12-01

    Recent estimates of bioenergy induced land use land cover change (LULCC) have large uncertainty due to misclassification errors in the LULC datasets used for analysis. These uncertainties are further compounded when data are modified by merging classes, aggregating pixels and changing classification methods over time. Hence the LULCC computed using these derived datasets is more a reflection of changes in classification methods, input data and data manipulation than of actual changes on the ground. Furthermore, results are constrained by the geographic extent, update frequency and resolution of the dataset. To overcome these limitations we have developed a change detection system to identify yearly as well as seasonal changes in LULC patterns. Our method uses hierarchical clustering, which works by grouping objects into a hierarchy based on the phenological similarity of different vegetation types. The algorithm explicitly models vegetation phenology to reduce spurious changes. We apply our technique to globally available Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI data at 250-meter resolution. We analyze 10 years of bi-weekly data to detect changes in the mid-western US as a case study. The results of our analysis are presented and its advantages over existing techniques are discussed.
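
    The clustering idea can be sketched as below: pixels are grouped by the phenological similarity of their NDVI time series, and a pixel is flagged as changed when its cluster membership differs between years. The NDVI profiles are synthetic, and the real system's phenology model is more elaborate.

```python
# Sketch: hierarchical clustering of synthetic NDVI profiles and year-to-year
# comparison of cluster labels as a simple change flag.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(8)
t = np.arange(23)                               # 23 bi-weekly composites per year

def ndvi_profile(peak, amplitude):
    return 0.2 + amplitude * np.exp(-0.5 * ((t - peak) / 3.0) ** 2)

# year 1: 100 pixels with a mid-season phenology
year1 = np.array([ndvi_profile(12, 0.6) + rng.normal(0, 0.02, 23) for _ in range(100)])
# year 2: a quarter of the pixels switch to an early-peaking phenology
year2 = year1.copy()
year2[:25] = [ndvi_profile(8, 0.5) + rng.normal(0, 0.02, 23) for _ in range(25)]

labels = fcluster(linkage(np.vstack([year1, year2]), method="ward"),
                  t=2, criterion="maxclust")
labels1, labels2 = labels[:100], labels[100:]
print("pixels flagged as changed:", int(np.sum(labels1 != labels2)))
```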

  14. Accurate and robust estimation of phase error and its uncertainty of 50 GHz bandwidth sampling circuit

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper discusses the dependence of the phase error on the 50 GHz bandwidth oscilloscope's sampling circuitry. We give the definition of the phase error as the difference between the impulse responses of the NTN (nose-to-nose) estimate and the true response of the sampling circuit. We develop a method to predict the NTN phase response arising from the internal sampling circuitry of the oscilloscope. For the default sampling-circuit configuration that we examine, our phase error is approximately 7.03 at 50 GHz. We study the sensitivity of the oscilloscope's phase response to parametric changes in sampling-circuit component values. We develop procedures to quantify the sensitivity of the phase error to each component and to a combination of components, assuming the same fractional uncertainty, 10%, in each of the model parameters. We predict the upper and lower bounds of the phase error; that is, we vary all of the circuit parameters simultaneously in such a way as to increase the phase error, and then vary all of the circuit parameters to decrease the phase error. Based on a Type B evaluation, this method quantifies the influence of all parameters of the sampling circuit and gives a standard uncertainty of 1.34. This result is presented for the first time and has important practical uses. It can be used for phase calibration of 50 GHz bandwidth large-signal network analyzers (LSNAs).
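
    The worst-case bounding idea described above can be sketched as follows: perturb every model parameter by its ±10% fractional uncertainty in the direction that increases (then decreases) the response, and convert the resulting bounds to a Type B standard uncertainty assuming a uniform distribution. The circuit model below is a hypothetical stand-in, not the oscilloscope model of the paper.

```python
# Worst-case parameter perturbation and Type B standard uncertainty (uniform
# distribution assumption). The response function is a hypothetical placeholder.
import numpy as np

def phase_error(params):
    """Hypothetical response of a sampling-circuit model (degrees)."""
    r, c, l = params
    return 7.0 * (r / 50.0) * np.sqrt(c / 1e-12) / (l / 1e-9) ** 0.25

nominal = np.array([50.0, 1e-12, 1e-9])   # nominal R, C, L (illustrative)
frac = 0.10                               # 10% fractional uncertainty on each parameter

# sign of the sensitivity of the response to each parameter
signs = np.sign([phase_error(nominal * (1 + 1e-3 * np.eye(3)[i])) - phase_error(nominal)
                 for i in range(3)])
upper = phase_error(nominal * (1 + frac * signs))
lower = phase_error(nominal * (1 - frac * signs))
u_type_b = (upper - lower) / (2 * np.sqrt(3))   # half-width / sqrt(3) for a uniform distribution
print(f"bounds [{lower:.2f}, {upper:.2f}] deg, Type B standard uncertainty {u_type_b:.2f} deg")
```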

  15. Enhance accuracy in Software cost and schedule estimation by using "Uncertainty Analysis and Assessment" in the system modeling process

    CERN Document Server

    Vasantrao, Kardile Vilas

    2011-01-01

    Accurate software cost and schedule estimation are essential for software project success. Although often referred to as a "black art" because of its complexity and uncertainty, software estimation is not as difficult or puzzling as people think. In fact, generating accurate estimates is straightforward once you understand the extent of the uncertainty and a framework for the modeling process. This work aims to distil academic information and real-world experience into a practical guide for working software professionals. Instead of arcane treatises and rigid modeling techniques, it highlights a proven set of procedures, understandable formulas, and heuristics that individuals and development teams can apply to their projects to achieve estimation proficiency and to choose appropriate development approaches. In the early stages of the software life cycle, project managers struggle to estimate the effort, schedule and cost, and to choose a development approach. This in tu...

  16. Sensitivity Analysis of Uncertainty Parameter based on MARS-LMR Code on SHRT-45R of EBR II

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seok-Ju; Kang, Doo-Hyuk; Seo, Jae-Seung [System Engineering and Technology Co., Daejeon (Korea, Republic of); Bae, Sung-Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeong, Hae-Yong [Sejong University, Seoul (Korea, Republic of)

    2016-10-15

    In order to assess the uncertainty quantification of the MARS-LMR code, the code has been improved by modifying the source code to accommodate the calculation process required for uncertainty quantification. In the present study, an Unprotected Loss of Flow (ULOF) transient is selected as a typical case of an Anticipated Transient without Scram (ATWS), which belongs to the DEC category. The MARS-LMR input generation for EBR-II SHRT-45R and the execution work are performed using the PAPIRUS program. The sensitivity analysis is carried out for the uncertainty parameters of the MARS-LMR code for EBR-II SHRT-45R. Based on the results of the sensitivity analysis, dominant parameters with large sensitivity to the figure of merit (FoM) are picked out. The selected dominant parameters are closely related to the progression of the ULOF event.

  17. Uncertainties and Systematic Effects on the estimate of stellar masses in high z galaxies

    CERN Document Server

    Salimbeni, S; Giallongo, E; Grazian, A; Menci, N; Pentericci, L; Santini, P

    2009-01-01

    We discuss the uncertainties and the systematic effects that exist in the estimates of the stellar masses of high redshift galaxies, using broad-band photometry, and how they affect the deduced galaxy stellar mass function. For this purpose we use the latest version of the GOODS-MUSIC catalog. In particular, we discuss the impact of different synthetic models, of the assumed initial mass function and of the selection band. Using the Charlot & Bruzual 2007 and Maraston 2005 models, we find masses lower than those obtained from the Bruzual & Charlot 2003 models. In addition, comparing these two mass determinations with that from the Bruzual & Charlot 2003 models, we find a slight trend as a function of the mass itself. As a consequence, the derived galaxy stellar mass functions show diverse shapes, and their slope depends on the assumed models. Despite these differences, the overall results and scenario remain unchanged. The masses obtained with the assumption of the Chabrier initial mass function are on average 0....

  18. Uncertainty Estimates for SIRS, SKYRAD, & GNDRAD Data and Reprocessing the Pyrgeometer Data (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Reda, I.; Stoffel, T.; Habte, A.

    2014-03-01

    The National Renewable Energy Laboratory (NREL) and the Atmospheric Radiation Measurement (ARM) Climate Research Facility work together in providing data from strategically located in situ measurement observatories around the world. They also collaborate on improving and developing new technologies that assist in acquiring high-quality radiometric data. In this presentation we summarize the uncertainty estimates of the ARM data collected at the ARM Solar Infrared Radiation Station (SIRS), Sky Radiometers on Stand for Downwelling Radiation (SKYRAD), and Ground Radiometers on Stand for Upwelling Radiation (GNDRAD), which ultimately improve the existing radiometric data. Three studies are also included: a comparison of calibrating pyrgeometers (e.g., Eppley PIR) using the manufacturer blackbody versus the interim World Infrared Standard Group (WISG), a pyrgeometer aging study, and an assessment of the effect of sampling rate on correcting historical data.

  19. Summary of uncertainty estimation results for Hanford tank chemical and radionuclide inventories

    Energy Technology Data Exchange (ETDEWEB)

    Ferryman, T.A.; Amidan, B.G.; Chen, G. [and others]

    1998-09-01

    The exact physical and chemical nature of 55 million gallons of radioactive waste held in 177 underground waste tanks at the Hanford Site is not known in sufficient detail to support safety, retrieval, and immobilization missions. The Hanford Engineering Analysis Best-Basis team has made point estimates of the inventories in each tank. The purpose of this study is to estimate probability distributions for each of the analytes and tanks for which the Hanford Best-Basis team has made point estimates. Uncertainty intervals can then be calculated for the Best-Basis inventories and should facilitate the cleanup missions. The methodology used to generate the results published in the Tank Characterization Database (TCD) and summarized in this paper is based on scientific principles, sound technical knowledge of the realities associated with the Hanford waste tanks, the chemical analysis of actual samples from the tanks, the Hanford Best-Basis research, and historical data records. The methodology builds on research conducted by Pacific Northwest National Laboratory (PNNL) over the last few years. Appendix A of this report summarizes the results of the study. The full set of results (percentiles 1-99) is available through the TCD (http://twins.pnl.gov:8001).

  20. Global-mean marine δ13C and its uncertainty in a glacial state estimate

    Science.gov (United States)

    Gebbie, Geoffrey; Peterson, Carlye D.; Lisiecki, Lorraine E.; Spero, Howard J.

    2015-10-01

    A paleo-data compilation with 492 δ13C and δ18O observations provides the opportunity to better sample the Last Glacial Maximum (LGM) and infer its global properties, such as the mean δ13C of dissolved inorganic carbon. Here, the paleo-compilation is used to reconstruct a steady-state water-mass distribution for the LGM, that in turn is used to map the data onto a 3D global grid. A global-mean marine δ13C value and a self-consistent uncertainty estimate are derived using the framework of state estimation (i.e., combining a numerical model and observations). The LGM global-mean δ13C is estimated to be 0.14‰ ± 0.20‰ at the two standard error level, giving a glacial-to-modern change of 0.32‰ ± 0.20‰. The magnitude of the error bar is attributed to the uncertain glacial ocean circulation and the lack of observational constraints in the Pacific, Indian, and Southern Oceans. To halve the error bar, roughly four times more observations are needed, although strategic sampling may reduce this number. If dynamical constraints can be used to better characterize the LGM circulation, the error bar can also be reduced to 0.05 to 0.1‰, emphasizing that knowledge of the circulation is vital to accurately map δ13C in three dimensions.
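
    The statement that roughly four times more observations are needed to halve the error bar follows from the standard scaling of the standard error of a mean of N comparable, roughly independent observations, SE(N) ∝ σ/√N, so that SE(4N)/SE(N) = 1/2; this is a generic argument, and the paper's state-estimation framework additionally folds in circulation uncertainty, which is why strategic sampling can reduce the required number.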

  1. Spatial distribution of soil heavy metal pollution estimated by different interpolation methods: accuracy and uncertainty analysis.

    Science.gov (United States)

    Xie, Yunfeng; Chen, Tong-bin; Lei, Mei; Yang, Jun; Guo, Qing-jun; Song, Bo; Zhou, Xiao-yong

    2011-01-01

    Mapping the spatial distribution of contaminants in soils is the basis of pollution evaluation and risk control. Interpolation methods are extensively applied in the mapping processes to estimate the heavy metal concentrations at unsampled sites. The performances of interpolation methods (inverse distance weighting, local polynomial, ordinary kriging and radial basis functions) were assessed and compared using the root mean square error for cross validation. The results indicated that all interpolation methods provided a high prediction accuracy of the mean concentration of soil heavy metals. However, the classic method based on percentages of polluted samples gave a pollution area 23.54-41.92% larger than that estimated by interpolation methods. The difference in contaminated area estimation among the four methods reached 6.14%. According to the interpolation results, the spatial uncertainty of polluted areas was mainly located in three types of region: (a) local concentration maxima surrounded by low-concentration (clean) sites, (b) local concentration minima surrounded by highly polluted samples, and (c) the boundaries of the contaminated areas.
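
    A minimal sketch of the cross-validation comparison used to rank interpolation methods is given below, using leave-one-out RMSE for a simple inverse-distance-weighting interpolator on synthetic data; the actual study compares several interpolators on measured soil concentrations.

```python
# Leave-one-out cross-validation RMSE for an inverse-distance-weighting (IDW)
# interpolator; coordinates and concentrations are synthetic.
import numpy as np

rng = np.random.default_rng(3)
xy = rng.uniform(0, 1000, size=(80, 2))                  # sample locations (m)
z = 30 + 0.02 * xy[:, 0] + rng.normal(0, 5, 80)          # e.g. metal concentration (mg/kg)

def idw(xy_train, z_train, xy_target, power=2.0):
    d = np.linalg.norm(xy_train - xy_target, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * z_train) / np.sum(w)

errors = []
for i in range(len(z)):                                  # leave one sample out at a time
    mask = np.arange(len(z)) != i
    errors.append(idw(xy[mask], z[mask], xy[i]) - z[i])
print(f"LOO cross-validation RMSE: {np.sqrt(np.mean(np.square(errors))):.2f} mg/kg")
```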

  2. Estimating riverine nutrient concentrations in agricultural catchments - Do we reduce uncertainty by using local scale data?

    Science.gov (United States)

    Capell, Rene; Hankin, Barry; Strömqvist, Johan; Lamb, Rob; Arheimer, Berit

    2017-04-01

    Nutrient transport models are important tools for large scale assessments of macro-nutrient fluxes (nitrate, phosphate) and thus can serve as a support tool for environmental assessment and management. Results from model applications over large areas, i.e. on major river basin to continental scales, can fill a gap where monitoring data are not available. However, both phosphate and nitrate transport are highly complex processes, and nutrient models must balance data requirements and process simplification. Data typically become increasingly sparse and less detailed with increasing spatial scale. Here, we compare model estimates of riverine nitrate concentrations in the Weaver-Dane basin (UK) and evaluate the role of available environmental data sources in model performance by using (a) open environmental data sources available at the European scale and (b) closed data sources which are more localised and typically not openly available. In particular, we aim to evaluate how model structure, spatial model resolution, climate forcing products, and land use and management information impact on model-estimated nitrate concentrations. We use the European rainfall-runoff and nutrient model E-HYPE (http://hypeweb.smhi.se/europehype/about/) as a baseline large-scale model built on open data sources, and compare with more detailed model set-ups in different configurations using local data. Nitrate estimates are compared using a GLUE uncertainty framework.
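
    A toy GLUE-style sketch of the uncertainty framework mentioned above is shown below: parameter sets are sampled, 'behavioural' ones are retained using a likelihood threshold, and likelihood-weighted bounds are read off. The model, data and threshold are illustrative stand-ins, not E-HYPE or the Weaver-Dane set-ups.

```python
# GLUE-style sketch: behavioural sampling and likelihood-weighted bounds for a
# simulated nitrate concentration. Everything here is a toy stand-in.
import numpy as np

rng = np.random.default_rng(4)
obs = np.array([2.1, 2.4, 3.0, 2.8, 2.2])        # observed nitrate (mg N/l), illustrative

def toy_model(theta):
    """Hypothetical model: parameter scales a fixed seasonal pattern."""
    return theta * np.array([1.0, 1.2, 1.5, 1.4, 1.1])

sims, likes = [], []
for _ in range(5000):
    theta = rng.uniform(1.0, 3.0)
    sim = toy_model(theta)
    nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    if nse > 0.5:                                 # behavioural threshold
        sims.append(sim)
        likes.append(nse)

sims, likes = np.array(sims), np.array(likes)
w = likes / likes.sum()
order = np.argsort(sims[:, 2])                    # bounds for the third time step
cdf = np.cumsum(w[order])
lo = sims[order[np.searchsorted(cdf, 0.05)], 2]
hi = sims[order[np.searchsorted(cdf, 0.95)], 2]
print(f"5-95% GLUE bounds for time step 3: {lo:.2f}-{hi:.2f} mg N/l")
```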

  3. Correlation-agnostic fusion for improved uncertainty estimation in multi-view geo-location from UAVs

    Science.gov (United States)

    Taylor, Clark N.; Sundlie, Paul O.

    2017-05-01

    When geo-locating ground objects from a UAV, multiple views of the same object can lead to improved geo-location accuracy. Of equal importance to the location estimate, however, is the uncertainty estimate associated with that location. Standard methods for estimating uncertainty from multiple views generally assume that each view represents an independent measurement of the geo-location. Unfortunately, this assumption is often violated due to correlation between the location estimates. This correlation may occur due to the measurements coming from the same platform, meaning that the error in attitude or location may be correlated across time; or it may be due to external sources (such as GPS) having the same error in multiple aircraft. In either case, the geo-location estimates are not truly independent, leading to optimistic estimates of the geo-location uncertainty. For distributed data fusion applications, correlation-agnostic fusion methods have been developed that can fuse data together regardless of how much correlation may be present between the two estimates. While the results are generally not as impressive as when correlation is perfectly known and taken into account, the fused uncertainty results are guaranteed to be conservative and an improvement on operating without fusion. In this paper, we apply a selection of these correlation-agnostic fusion techniques to the multi-view geo-location problem and analyze their effects on geo-location and predicted uncertainty accuracy. We find that significant benefits can be gained from applying these correlation-agnostic fusion techniques, but that they vary greatly in how well they estimate their own uncertainty.
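
    One widely used correlation-agnostic rule is covariance intersection (the abstract does not name the specific techniques it evaluates, so treating it as representative is an assumption here); a minimal sketch for fusing two geo-location estimates with unknown mutual correlation:

```python
# Covariance intersection sketch: fuse two estimates whose cross-correlation is
# unknown; the fused covariance is guaranteed conservative. Values are illustrative.
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse (x1, P1) and (x2, P2); the weight omega minimises the fused trace."""
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
        if best is None or np.trace(P) < best[2]:
            best = (x, P, np.trace(P))
    return best[0], best[1]

x1, P1 = np.array([10.0, 5.0]), np.diag([4.0, 1.0])   # view 1 estimate (m) and covariance (m^2)
x2, P2 = np.array([11.0, 4.5]), np.diag([1.0, 4.0])   # view 2
x_f, P_f = covariance_intersection(x1, P1, x2, P2)
print(x_f, np.diag(P_f))
```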

  4. The pseudo-Thellier palaeointensity method: new calibration and uncertainty estimates

    Science.gov (United States)

    Paterson, Greig A.; Heslop, David; Pan, Yongxin

    2016-12-01

    Non-heating palaeointensity methods are a vital tool to explore magnetic field strength variations recorded by thermally sensitive materials of both terrestrial and extraterrestrial origin. One such method is the calibrated pseudo-Thellier method in which a specimen's natural remanent magnetization is alternating field demagnetized and replaced with a laboratory induced anhysteretic remanent magnetization (as an analogue of a thermoremanent magnetization, TRM). Using a set of 56 volcanic specimens given laboratory TRMs in fields of 10-130 μT, we refine the calibration of the pseudo-Thellier method and better define the uncertainty associated with its palaeointensity estimates. Our new calibration, obtained from 32 selected specimens, resolves the issue of the non-zero intercept, which is theoretically predicted, but not satisfied by any previous calibration. The range of individual specimen calibration factors, however, is relatively large, but consistent with the variability expected for SD magnetite. We explore a number of rock magnetic parameters in an attempt to identify selection thresholds for reducing the calibration scatter, but fail to find a suitable choice. We infer that our careful selection process, which incorporates more statistics than previous studies, may be largely screening out any strong rock magnetic dependence. Some subtle grain size or mineralogical dependencies, however, remain after selection, but cannot be discerned from the scatter expected for grain size variability of SD magnetite. As a consequence of the variability in the calibration factor, the uncertainty associated with pseudo-Thellier results is much larger than previously indicated. The scatter of the calibration is ~25 per cent of the mean value, which implies that, when combined with the scatter of results typically obtained from a single site, the uncertainty of averaged pseudo-Thellier results will always be >25 per cent. As such, pseudo-Thellier results should be

  5. Uncertainties in Early Stage Capital Cost Estimation of Process Design – A case study on biorefinery design

    Directory of Open Access Journals (Sweden)

    Gurkan Sin

    2015-02-01

    Capital investment, next to the product demand, sales and production costs, is one of the key metrics commonly used for project evaluation and feasibility assessment. Estimating the investment costs of a new product/process alternative during early stage design is a challenging task. This is especially important in biorefinery research, where available information and experience with new technologies is limited. A systematic methodology for uncertainty analysis of cost data is proposed that employs (a) bootstrapping as a regression method when cost data are available and (b) the Monte Carlo technique as an error propagation method based on expert input when cost data are not available. Four well-known models for early stage cost estimation are reviewed and analyzed using the methodology. The significance of uncertainties of cost data for early stage process design is highlighted using the synthesis and design of a biorefinery as a case study. The impact of uncertainties in cost estimation on the identification of optimal processing paths is found to be profound. To tackle this challenge, a comprehensive techno-economic risk analysis framework is presented to enable robust decision making under uncertainties. One of the results using an order-of-magnitude estimate shows that the production of diethyl ether and 1,3-butadiene are the most promising, with economic risks of 0.24 MM$/a and 4.6 MM$/a, respectively, due to uncertainties in the cost estimations.
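
    The bootstrapping idea for cost data can be sketched as below: resample the available cost-capacity pairs, refit a power-law correlation C = a·S^b each time, and read percentile bands off the resulting estimates. The data and target capacity are synthetic placeholders.

```python
# Bootstrap uncertainty band for a power-law capital-cost correlation fitted to
# synthetic cost data (illustrative of the approach, not the paper's models).
import numpy as np

rng = np.random.default_rng(5)
size = np.array([10, 25, 50, 80, 120, 200.0])           # plant capacity (kt/a), synthetic
cost = 2.0 * size ** 0.65 * rng.lognormal(0, 0.15, 6)   # reported costs (MM$), synthetic

def fit_powerlaw(s, c):
    b, log_a = np.polyfit(np.log(s), np.log(c), 1)
    return np.exp(log_a), b

preds = []
for _ in range(2000):
    idx = rng.integers(0, size.size, size.size)          # resample the data pairs
    if np.unique(size[idx]).size < 2:                    # skip degenerate resamples
        continue
    a, b = fit_powerlaw(size[idx], cost[idx])
    preds.append(a * 150.0 ** b)                         # estimate at 150 kt/a

lo, mid, hi = np.percentile(preds, [5, 50, 95])
print(f"capital cost at 150 kt/a: {mid:.0f} MM$ (90% band {lo:.0f}-{hi:.0f} MM$)")
```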

  6. Estimation of the measurement uncertainty by the bottom-up approach for the determination of methamphetamine and amphetamine in urine.

    Science.gov (United States)

    Lee, Sooyeun; Choi, Hyeyoung; Kim, Eunmi; Choi, Hwakyung; Chung, Heesun; Chung, Kyu Hyuck

    2010-05-01

    The measurement uncertainty (MU) of methamphetamine (MA) and amphetamine (AP) was estimated in an authentic urine sample with a relatively low concentration of MA and AP using the bottom-up approach. A cause and effect diagram was deduced; the amount of MA or AP in the sample, the volume of the sample, method precision, and sample effect were considered uncertainty sources. The concentrations of MA and AP in the urine sample with their expanded uncertainties were 340.5 +/- 33.2 ng/mL and 113.4 +/- 15.4 ng/mL, respectively; that is, the expanded uncertainties correspond to 9.7% and 13.6% of the respective concentrations. The largest uncertainty originated from the sample effect for MA and from method precision for AP, but the uncertainty of the volume of the sample was minimal in both. The MU needs to be determined during the method validation process to assess test reliability. Moreover, the identification of the largest and/or smallest uncertainty source can help improve experimental protocols.
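
    A minimal GUM-style budget of the kind described above combines the relative standard uncertainties of the identified sources in quadrature and expands the result with a coverage factor k = 2; the component values below are illustrative, not those of the cited study.

```python
# Bottom-up (GUM-style) combination of uncertainty components and expansion
# with k = 2; component magnitudes are illustrative only.
import math

concentration = 340.5          # ng/mL (the MA result quoted above)
components = {                 # relative standard uncertainties (illustrative)
    "amount (calibration)": 0.025,
    "sample volume":        0.005,
    "method precision":     0.020,
    "sample effect":        0.035,
}

u_rel = math.sqrt(sum(u ** 2 for u in components.values()))   # combined relative uncertainty
U = 2 * u_rel * concentration                                 # expanded uncertainty, k = 2
print(f"combined relative u = {u_rel:.3%}, expanded U = {U:.1f} ng/mL")
```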

  7. The Effect of Uncertainty in Exposure Estimation on the Exposure-Response Relation between 1,3-Butadiene and Leukemia

    Directory of Open Access Journals (Sweden)

    George Maldonado

    2009-09-01

    In a follow-up study of mortality among North American synthetic rubber industry workers, cumulative exposure to 1,3-butadiene was positively associated with leukemia. Problems with historical exposure estimation, however, may have distorted the association. To evaluate the impact of potential inaccuracies in exposure estimation, we conducted uncertainty analyses of the relation between cumulative exposure to butadiene and leukemia. We created 1,000 sets of butadiene estimates using job-exposure matrices consisting of exposure values that corresponded to randomly selected percentiles of the approximate probability distribution of plant-, work area/job group-, and year-specific butadiene ppm. We then analyzed the relation between cumulative exposure to butadiene and leukemia for each of the 1,000 sets of butadiene estimates. In the uncertainty analysis, the point estimate of the RR for the first non-zero exposure category (>0–<37.5 ppm-years) was most likely to be about 1.5. The rate ratio for the second exposure category (37.5–<184.7 ppm-years) was most likely to range from 1.5 to 1.8. The RR for category 3 of exposure (184.7–<425.0 ppm-years) was most likely between 2.1 and 3.0. The RR for the highest exposure category (425.0+ ppm-years) was likely to be between 2.9 and 3.7. This range of RR point estimates can best be interpreted as a probability distribution that describes our uncertainty in RR point estimates due to uncertainty in exposure estimation. After considering the complete probability distributions of butadiene exposure estimates, the exposure-response association of butadiene and leukemia was maintained. This exercise was a unique example of how uncertainty analyses can be used to investigate and support an observed measure of effect when occupational exposure estimates are employed in the absence of direct exposure measurements.

  8. Estimating Soil Organic Carbon stocks and uncertainties for the National inventory Report - a study case in Southern Belgium

    Science.gov (United States)

    Chartin, Caroline; Stevens, Antoine; Kruger, Inken; Goidts, Esther; Carnol, Monique; van Wesemael, Bas

    2016-04-01

    Like many other countries, Belgium complies with Annex I of the United Nations Framework Convention on Climate Change (UNFCCC). Belgium thus reports its annual greenhouse gas emissions in its national inventory report (NIR), with a distinction between emissions/sequestration in cropland and grassland (EU decision 529/2013). The CO2 fluxes are then based on changes in SOC stocks computed for each of these two types of land use. These stocks are specified for each of the agricultural regions, which correspond to areas with similar agricultural practices (rotations and/or livestock) and yield potentials. For Southern Belgium (Wallonia), consisting of ten agricultural regions, the Soil Monitoring Network (SMN) 'CARBOSOL' has been developed over the last decade to survey the state of agricultural soils by quantifying SOC stocks and their evolution at a reasonable number of locations within the time and funds allocated. Unfortunately, the 592 points of the CARBOSOL network do not allow a representative and sound estimation of SOC stocks and their uncertainties for the 20 possible combinations of land use and agricultural region. Moreover, the SMN CARBOSOL is based on a legacy database following a convenience sampling strategy rather than a statistical scheme defined by design-based or model-based strategies. Here, we aim to both quantify SOC budgets (i.e., how much?) and spatialize SOC stocks (i.e., where?) at the regional scale (Southern Belgium) based on data from the SMN described above. To this end, we developed a computation procedure based on Digital Soil Mapping techniques and stochastic simulations (Monte Carlo) allowing the estimation of multiple (10,000) independent spatialized datasets. This procedure accounts for the uncertainties associated with estimations of both i) SOC stock at the pixel scale and ii) parameters of the models. Based on these 10,000 individual realizations of the spatial model, mean SOC stocks and confidence intervals can then be computed at
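
    The Monte Carlo idea described above can be sketched as follows: draw many spatialised realisations that combine per-pixel prediction uncertainty with a shared model-parameter error, and summarise the regional stock across realisations. Grid size, uncertainty magnitudes and the error model are illustrative placeholders.

```python
# Monte Carlo propagation of per-pixel and shared model errors into a regional
# SOC stock estimate; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_pixels = 10_000
soc_mean = rng.uniform(40, 120, n_pixels)     # predicted SOC stock per pixel (t C/ha)
soc_sd = 0.20 * soc_mean                      # per-pixel prediction uncertainty (20%)
pixel_area_ha = 6.25                          # e.g. 250 m x 250 m pixels

totals = []
for _ in range(2000):
    bias = rng.normal(0.0, 0.05)              # shared model-parameter error (5%)
    realisation = soc_mean * (1.0 + bias) + rng.normal(0.0, soc_sd)
    totals.append(np.sum(realisation * pixel_area_ha) / 1e6)   # Mt C

lo, hi = np.percentile(totals, [2.5, 97.5])
print(f"regional SOC stock: {np.mean(totals):.2f} Mt C (95% CI {lo:.2f}-{hi:.2f})")
```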

  9. Applicability of the MCNP-ACAB system to inventory prediction in high-burnup fuels: sensitivity/uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Herranz, N.; Cabellos, O. [Madrid Polytechnic Univ., Dept. of Nuclear Engineering (Spain); Cabellos, O.; Sanz, J. [Madrid Polytechnic Univ., 2 Instituto de Fusion Nuclear (Spain); Sanz, J. [Univ. Nacional Educacion a Distancia, Dept. of Power Engineering, Madrid (Spain)

    2005-07-01

    We present a new code system which combines the Monte Carlo neutron transport code MCNP-4C and the inventory code ACAB as a suitable tool for high-burnup calculations. Our main goal is to show that the system, by means of the ACAB capabilities, enables us to assess the impact of neutron cross-section uncertainties on the inventory and other inventory-related responses in high-burnup applications. The potential impact of nuclear data uncertainties on some response parameters may be large, but only very few codes exist which can treat this effect. In fact, some of the most widely reported code systems for dealing with high-burnup problems, such as CASMO-4, MCODE and MONTEBURNS, lack this capability. As a first step, the potential of our system, setting aside the uncertainty capability, has been compared with that of those code systems, using a well-referenced high-burnup pin-cell benchmark exercise. It is shown that the inclusion of ACAB in the system allows results to be obtained that are at least as reliable as those obtained using other inventory codes, such as ORIGEN2. Later on, the uncertainty analysis methodology implemented in ACAB, including both the sensitivity-uncertainty method and uncertainty analysis by the Monte Carlo technique, is applied to this benchmark problem. We estimate the errors due to activation cross-section uncertainties in the prediction of the isotopic content up to the high-burnup spent fuel regime. The most relevant uncertainties are highlighted, and some of the cross sections contributing most to those uncertainties are identified. For instance, the most critical reaction for Am-242m is Am-241(n,γ-m). At 100 MWd/kg, the cross-section uncertainty of this reaction induces an error of 6.63% on the Am-242m concentration. The uncertainties in the inventory of fission products reach up to 30%.

  10. Defining the hundred year flood: A Bayesian approach for using historic data to reduce uncertainty in flood frequency estimates

    Science.gov (United States)

    Parkes, Brandon; Demeritt, David

    2016-09-01

    This paper describes a Bayesian statistical model for estimating flood frequency by combining uncertain annual maximum (AMAX) data from a river gauge with estimates of flood peak discharge from various historic sources that predate the period of instrument records. Such historic flood records promise to expand the time series data needed for reducing the uncertainty in return period estimates for extreme events, but the heterogeneity and uncertainty of historic records make them difficult to use alongside Flood Estimation Handbook and other standard methods for generating flood frequency curves from gauge data. Using the flow of the River Eden in Carlisle, Cumbria, UK as a case study, this paper develops a Bayesian model for combining historic flood estimates since 1800 with gauge data since 1967 to estimate the probability of low frequency flood events for the area taking account of uncertainty in the discharge estimates. Results show a reduction in 95% confidence intervals of roughly 50% for annual exceedance probabilities of less than 0.0133 (return periods over 75 years) compared to standard flood frequency estimation methods using solely systematic data. Sensitivity analysis shows the model is sensitive to two model parameters, both of which are concerned with the historic (pre-systematic) period of the time series. This highlights the importance of adequate consideration of historic channel and floodplain changes or possible bias in estimates of historic flood discharges. The next steps required to roll out this Bayesian approach for operational flood frequency estimation at other sites are also discussed.
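
    A toy version of the idea, combining a gauged annual-maximum likelihood with a censored historical term of the form "k floods exceeded a perception threshold in h pre-gauge years", is sketched below on a grid posterior with a Gumbel model; the distributions, priors and numbers are illustrative, not the River Eden analysis.

```python
# Toy Bayesian flood-frequency sketch: gauged AMAX likelihood (Gumbel) combined
# with a binomial term for historical threshold exceedances; flat prior, grid posterior.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
amax = stats.gumbel_r(loc=300, scale=80).rvs(50, random_state=rng)  # gauged AMAX (m3/s)
h_years, k_exceed, threshold = 160, 4, 700                          # historical record summary

locs = np.linspace(200, 400, 80)
scales = np.linspace(40, 160, 80)
log_post = np.empty((80, 80))
for i, mu in enumerate(locs):
    for j, sc in enumerate(scales):
        dist = stats.gumbel_r(mu, sc)
        ll = dist.logpdf(amax).sum()                                      # gauged data term
        ll += stats.binom(h_years, dist.sf(threshold)).logpmf(k_exceed)   # historical term
        log_post[i, j] = ll                                               # flat prior

post = np.exp(log_post - log_post.max())
post /= post.sum()
q100 = sum(post[i, j] * stats.gumbel_r(locs[i], scales[j]).ppf(0.99)
           for i in range(80) for j in range(80))       # posterior-mean 100-year flood
print(f"posterior-mean 100-year flood: {q100:.0f} m3/s")
```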

  11. Implicit Treatment of Technical Specification and Thermal Hydraulic Parameter Uncertainties in Gaussian Process Model to Estimate Safety Margin

    Directory of Open Access Journals (Sweden)

    Douglas A. Fynan

    2016-06-01

    The Gaussian process model (GPM) is a flexible surrogate model that can be used for nonparametric regression for multivariate problems. A unique feature of the GPM is that a prediction variance is automatically provided with the regression function. In this paper, we estimate the safety margin of a nuclear power plant by performing regression on the output of best-estimate simulations of a large-break loss-of-coolant accident with sampling of safety system configuration, sequence timing, technical specifications, and thermal hydraulic parameter uncertainties. The key aspect of our approach is that the GPM regression is only performed on the dominant input variables, the safety injection flow rate and the delay time for AC powered pumps to start representing sequence timing uncertainty, providing a predictive model for the peak clad temperature during a reflood phase. Other uncertainties are interpreted as contributors to the measurement noise of the code output and are implicitly treated in the GPM in the noise variance term, providing local uncertainty bounds for the peak clad temperature. We discuss the applicability of the foregoing method to reduce the use of conservative assumptions in best estimate plus uncertainty (BEPU) and Level 1 probabilistic safety assessment (PSA) success criteria definitions while dealing with a large number of uncertainties.

  12. Implicit treatment of technical specification and thermal hydraulic parameter uncertainties in Gaussian process model to estimate safety margin

    Energy Technology Data Exchange (ETDEWEB)

    Fynan, Douglas A.; Ahn, Kwang Il [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-06-15

    The Gaussian process model (GPM) is a flexible surrogate model that can be used for nonparametric regression for multivariate problems. A unique feature of the GPM is that a prediction variance is automatically provided with the regression function. In this paper, we estimate the safety margin of a nuclear power plant by performing regression on the output of best-estimate simulations of a large-break loss-of-coolant accident with sampling of safety system configuration, sequence timing, technical specifications, and thermal hydraulic parameter uncertainties. The key aspect of our approach is that the GPM regression is only performed on the dominant input variables, the safety injection flow rate and the delay time for AC powered pumps to start representing sequence timing uncertainty, providing a predictive model for the peak clad temperature during a reflood phase. Other uncertainties are interpreted as contributors to the measurement noise of the code output and are implicitly treated in the GPM in the noise variance term, providing local uncertainty bounds for the peak clad temperature. We discuss the applicability of the foregoing method to reduce the use of conservative assumptions in best estimate plus uncertainty (BEPU) and Level 1 probabilistic safety assessment (PSA) success criteria definitions while dealing with a large number of uncertainties.

  13. Assessment of model behavior and acceptable forcing data uncertainty in the context of land surface soil moisture estimation

    Science.gov (United States)

    Dumedah, Gift; Walker, Jeffrey P.

    2017-03-01

    The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or the observation system. The impact of input forcing data on model estimation accuracy has been collectively acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model to obtain a competitive estimation remain mostly unknown. A better understanding is needed about how models respond to input forcing data and what changes in these forcing variables can be accommodated without deteriorating the optimal estimation of the model. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output, 35% in root mean square error (RMSE) for the 5 cm depth of soil moisture and 15% in RMSE for the 15 cm depth of soil moisture. This specific quantification is crucial to illustrate the significance of input forcing data spread. The acceptable uncertainty determined based on the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper

  14. AN OVERVIEW OF THE UNCERTAINTY ANALYSIS, SENSITIVITY ANALYSIS, AND PARAMETER ESTIMATION (UA/SA/PE) API AND HOW TO IMPLEMENT IT

    Science.gov (United States)

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...

  15. New method and uncertainty estimation for plate dimensions and surface measurements

    Science.gov (United States)

    Ali, Salah H. R.; Buajarern, Jariya

    2014-03-01

    Dimensional and surface quality control in tile plate manufacturing faces difficult engineering challenges, one of which is that plates in large-scale mass production contain geometrically uneven surfaces. A traditional measurement method is used to assess tile plate dimensions and surface quality based on the standard specifications ISO 10545-2:1995, EOS 3168-2:2007 and TIS 2398-2:2008. A new measurement method for the dimensions and surface quality of ceramic oblong large-scale tile plates has been developed and compared with the traditional method. The strategy of the proposed method is based on a CMM straightness measurement strategy instead of the centre-point approach of the traditional method. Expanded uncertainty budgets for the measurements of each method have been estimated in detail. Accurate estimates of the centre of curvature (CC), centre of edge (CE), warpage (W) and edge crack defect parameters have been achieved in accordance with the standards. Moreover, the results showed not only that the new method is more accurate but also that it improves the quality of tile plate products significantly.

  16. New Measurement Method and Uncertainty Estimation for Plate Dimensions and Surface Quality

    Directory of Open Access Journals (Sweden)

    Salah H. R. Ali

    2013-01-01

    Dimensional and surface quality control in plate production faces difficult engineering challenges. One of these challenges is that plates in large-scale mass production contain geometrically uneven surfaces. A traditional measurement method is used to assess tile plate dimensions and surface quality based on the standard specifications ISO 10545-2:1995, EOS 3168-2:2007, and TIS 2398-2:2008. A proposed measurement method for the dimensions and surface quality of ceramic oblong large-scale tile plates has been developed and compared with the traditional method. The strategy of the new method is based on a CMM straightness measurement strategy instead of the centre-point approach of the traditional method. Expanded uncertainty budgets for the measurements of each method have been estimated in detail. Accurate estimates of the centre of curvature (CC), centre of edge (CE), warpage (W), and edge crack defect parameters have been achieved in accordance with the standards. Moreover, the results not only showed the new method to be more accurate but also improved the quality of plate products significantly.

  17. Morphological divergence rate tests for natural selection: uncertainty of parameter estimation and robustness of results

    Directory of Open Access Journals (Sweden)

    Leandro R. Monteiro

    2005-01-01

    In this study, we used a combination of geometric morphometric and evolutionary genetics methods for the inference of possible mechanisms of evolutionary divergence. A sensitivity analysis of the constant-heritability rate test results with respect to variation in genetic and demographic parameters was performed, in order to assess the relative influence of parameter estimation uncertainty on the robustness of the test results. As an application, we present a study on body shape variation among populations of the poeciliine fish Poecilia vivipara inhabiting lagoons of the quaternary plains in northern Rio de Janeiro State, Brazil. The sensitivity analysis showed that, in general, the most important parameters are heritability, effective population size and the number of generations since divergence. For this specific example, using a conservatively wide range of parameters, the neutral model of genetic drift could not be accepted as the sole cause of the observed magnitude of morphological divergence among populations. A mechanism of directional selection is suggested as the main cause of variation among populations in different habitats and lagoons. The implications of the parameter estimates and of the underlying biological assumptions are discussed.
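
    The drift test and its parameter sensitivity can be sketched with a Lande-style constant-heritability expectation, B_drift ≈ h²·σ²_P·t/Ne, scanned over plausible parameter ranges; all values below are illustrative, not the study's estimates.

```python
# Sketch of a constant-heritability drift test: how often can neutral drift alone
# account for the observed between-population variance? Values are illustrative.
import numpy as np

rng = np.random.default_rng(9)
observed_B = 0.04      # observed between-population variance of the trait (illustrative)
sigma_P2 = 0.02        # within-population phenotypic variance (illustrative)

n = 100_000
h2 = rng.uniform(0.2, 0.6, n)      # heritability range
Ne = rng.uniform(500, 5000, n)     # effective population size range
t = rng.uniform(1000, 10000, n)    # generations since divergence

B_drift = h2 * sigma_P2 * t / Ne   # expected between-population variance under drift
print(f"fraction of parameter space where drift alone suffices: {np.mean(B_drift >= observed_B):.1%}")
```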

  18. WE-B-19A-01: SRT II: Uncertainties in SRT

    Energy Technology Data Exchange (ETDEWEB)

    Dieterich, S [UC Davis Medical Center, Sacramento, CA (United States); Schlesinger, D [University of Virginia Health Systems, Charlottesville, VA (United States); Geneser, S [University of California San Francisco, San Francisco, CA (United States)

    2014-06-15

    SRS delivery has undergone major technical changes in the last decade, transitioning from predominantly frame-based treatment delivery to image-guided, frameless SRS. It is important for medical physicists working in SRS to understand the magnitude and sources of uncertainty involved in delivering SRS treatments across a multitude of technologies (Gamma Knife, CyberKnife, linac-based SRS and protons). Sources of SRS planning and delivery uncertainty include dose calculation, dose fusion, and intra- and inter-fraction motion. Dose calculations for small fields are particularly difficult because of the lack of electronic equilibrium and the greater effect of inhomogeneities within and near the PTV. Going frameless introduces greater setup uncertainties and allows for potentially increased intra- and inter-fraction motion. The increased use of multiple imaging modalities to determine the tumor volume necessitates (deformable) image and contour fusion, and the uncertainties introduced in the image registration process further contribute to the overall treatment planning uncertainty. Each of these uncertainties must be quantified and their impact on treatment delivery accuracy understood. If necessary, the uncertainties may then be accounted for during treatment planning, either through techniques that make the uncertainty explicit or by the appropriate addition of PTV margins. Further complicating matters, the statistics of 1-5 fraction SRS treatments differ from traditional margin recipes relying on Poisson statistics. In this session, we will discuss the uncertainties introduced during each step of the SRS treatment planning and delivery process and present margin recipes to appropriately account for them. Learning Objectives: To understand the major contributors to the total delivery uncertainty in SRS for Gamma Knife, CyberKnife, and linac-based SRS. Learn the various uncertainties introduced by image fusion, deformable image registration, and contouring

  19. New gridded daily climatology of Finland: Permutation-based uncertainty estimates and temporal trends in climate

    Science.gov (United States)

    Aalto, Juha; Pirinen, Pentti; Jylhä, Kirsti

    2016-04-01

    Long-term time series of key climate variables with a relevant spatiotemporal resolution are essential for environmental science. Moreover, such spatially continuous data, based on weather observations, are commonly used in, e.g., downscaling and bias correcting of climate model simulations. Here we conducted a comprehensive spatial interpolation scheme where seven climate variables (daily mean, maximum, and minimum surface air temperatures, daily precipitation sum, relative humidity, sea level air pressure, and snow depth) were interpolated over Finland at the spatial resolution of 10 × 10 km2. More precisely, (1) we produced daily gridded time series (FMI_ClimGrid) of the variables covering the period of 1961-2010, with a special focus on evaluation and permutation-based uncertainty estimates, and (2) we investigated temporal trends in the climate variables based on the gridded data. National climate station observations were supplemented by records from the surrounding countries, and kriging interpolation was applied to account for topography and water bodies. For daily precipitation sum and snow depth, a two-stage interpolation with a binary classifier was deployed for an accurate delineation of areas with no precipitation or snow. A robust cross-validation indicated a good agreement between the observed and interpolated values especially for the temperature variables and air pressure, although the effect of seasons was evident. Permutation-based analysis suggested increased uncertainty toward northern areas, thus identifying regions with suboptimal station density. Finally, several variables had a statistically significant trend indicating a clear but locally varying signal of climate change during the last five decades.
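
    The abstract does not spell out the permutation scheme behind the FMI_ClimGrid uncertainty maps, so the sketch below only illustrates the general idea: repeatedly re-interpolate from random subsets of stations and use the spread of the resulting fields as a gridded uncertainty estimate. Synthetic station data and simple inverse-distance weighting stand in for the real observations and for kriging.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "station" data: coordinates (km) and a temperature field that cools northward.
        n_stations = 200
        xy = rng.uniform(0, 500, size=(n_stations, 2))
        temp = 10.0 - 0.02 * xy[:, 1] + rng.normal(0, 0.5, n_stations)

        def idw(xy_obs, z_obs, xy_grid, power=2.0):
            """Inverse-distance-weighted interpolation (a simple stand-in for kriging)."""
            d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
            w = 1.0 / np.maximum(d, 1e-6) ** power
            return (w @ z_obs) / w.sum(axis=1)

        # Target grid points.
        gx, gy = np.meshgrid(np.linspace(0, 500, 11), np.linspace(0, 500, 11))
        grid = np.column_stack([gx.ravel(), gy.ravel()])

        # Resampling loop: interpolate from random 80 % subsets of the stations and
        # take the spread across repetitions as the uncertainty at each grid point.
        fields = []
        for _ in range(100):
            idx = rng.choice(n_stations, size=int(0.8 * n_stations), replace=False)
            fields.append(idw(xy[idx], temp[idx], grid))
        uncertainty_map = np.array(fields).std(axis=0)   # larger where stations are sparse

        print(f"median grid-point uncertainty: {np.median(uncertainty_map):.2f} K")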

  20. Uncertainty in the Himalayan energy-water nexus: estimating regional exposure to glacial lake outburst floods

    Science.gov (United States)

    Schwanghart, Wolfgang; Worni, Raphael; Huggel, Christian; Stoffel, Markus; Korup, Oliver

    2016-07-01

    Himalayan water resources attract a rapidly growing number of hydroelectric power projects (HPP) to satisfy Asia's soaring energy demands. Yet HPP operating or planned in steep, glacier-fed mountain rivers face hazards from glacial lake outburst floods (GLOFs) that can damage hydropower infrastructure, alter water and sediment yields, and compromise livelihoods downstream. Detailed appraisals of such GLOF hazards are limited to case studies, however, and a more comprehensive, systematic analysis remains elusive. To this end we estimate the regional exposure of 257 Himalayan HPP to GLOFs, using a flood-wave propagation model fed by Monte Carlo-derived outburst volumes of >2300 glacial lakes. We interpret the spread of the peak discharges modeled in this way as a predictive uncertainty that arises mainly from outburst volumes and dam-breach rates that are difficult to assess before dams fail. With 66% of the sampled HPP on potential GLOF tracks, up to one third of these HPP could experience GLOF discharges well above local design floods, as hydropower development continues to seek higher sites closer to glacial lakes. We compute that this systematic push of HPP into headwaters effectively doubles the uncertainty about GLOF peak discharge in these locations. Peak discharges farther downstream, in contrast, are easier to predict because GLOF waves attenuate rapidly. Considering this systematic pattern of regional GLOF exposure might aid the site selection of future Himalayan HPP. Our method can augment, and help to regularly update, current hazard assessments, given that global warming is likely changing the number and size of Himalayan meltwater lakes.

  1. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    Energy Technology Data Exchange (ETDEWEB)

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-04-09

    The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and from using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of

  2. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty.

    Science.gov (United States)

    Schiavazzi, Daniele E; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L

    2017-03-01

    Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single-ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify the uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and then apply the framework to a cohort of four single-ventricle patients with Norwood physiology. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Models of Wake-Vortex Spreading Mechanisms and Their Estimated Uncertainties

    Science.gov (United States)

    Rossow, Vernon J.; Hardy, Gordon H.; Meyn, Larry A.

    2006-01-01

    One of the primary constraints on the capacity of the nation's air transportation system is the landing capacity at its busiest airports. Many airports with nearly-simultaneous operations on closely-spaced parallel runways (i.e., as close as 750 ft (246 m)) suffer a severe decrease in runway acceptance rate when weather conditions do not allow full utilization. The objective of a research program at NASA Ames Research Center is to develop the technologies needed for traffic management in the airport environment so that operations now allowed on closely-spaced parallel runways under Visual Meteorological Conditions can also be carried out under Instrument Meteorological Conditions. As part of this overall research objective, the study reported here has developed improved models for the various aerodynamic mechanisms that spread and transport wake vortices. The purpose of the study is to continue the development of relationships that increase the accuracy of estimates for the along-trail separation distances available before the vortex wake of a leading aircraft intrudes into the airspace of a following aircraft. Details of the models used and their uncertainties are presented in the appendices to the paper. Suggestions are made as to the theoretical and experimental research needed to increase the accuracy of and confidence level in the models presented, and as to the instrumentation required for more precise estimates of the motion and spread of vortex wakes. The improved wake models indicate that, if the following aircraft is upwind of the leading aircraft, the vortex wakes of the leading aircraft will not intrude into the airspace of the following aircraft for about 7 s (based on pessimistic assumptions) for most atmospheric conditions. The wake-spreading models also indicate that longer time intervals before wake intrusion are available when atmospheric turbulence levels are mild or moderate. However, if the estimates for those time intervals are to be reliable, further study

  4. [Estimation of soil carbon sequestration potential in typical steppe of Inner Mongolia and associated uncertainty].

    Science.gov (United States)

    Wang, Wei; Wu, Jian-Guo; Han, Xing-Guo

    2012-01-01

    Based on measurements in enclosure and uncontrolled grazing plots in the typical steppe of Xilinguole, Inner Mongolia, this paper studied the soil carbon storage and carbon sequestration in grasslands dominated by Leymus chinensis, Stipa grandis, and Stipa krylovii, respectively, and estimated the regional-scale soil carbon sequestration potential of the heavily degraded grassland after restoration. At the local scale, the annual soil carbon sequestration in the three grasslands all decreased with increasing years of enclosure. The soil organic carbon storage was significantly higher in the grasslands dominated by L. chinensis and Stipa grandis than in that dominated by Stipa krylovii, but the latter had a much higher soil carbon sequestration potential because of the greater loss of soil organic carbon during the degradation process due to overgrazing. At the regional scale, the soil carbon sequestration potential at the depth of 0-20 cm varied from -0.03 × 10⁴ to 3.71 × 10⁴ kg C a⁻¹, and the total carbon sequestration potential was 12.1 × 10⁸ kg C a⁻¹. Uncertainty analysis indicated that soil gravel content had a relatively small effect on the estimated carbon sequestration potential, but the estimation errors resulting from the spatial interpolation of climate data could be about ±4.7 × 10⁹ kg C a⁻¹. In the future, if the growing-season precipitation in this region has an average variation of -3.2 mm (10 a)⁻¹, the soil carbon sequestration potential would decrease by 1.07 × 10⁸ kg C (10 a)⁻¹.

  5. Use of ensemble prediction technique to estimate the inherent uncertainty in the simulated chlorophyll-a concentration in coastal ecosystems*

    Science.gov (United States)

    Meszaros, Lorinc; El Serafy, Ghada

    2017-04-01

    Phytoplankton blooms in coastal ecosystems such as the Wadden Sea may cause mortality of mussels and other benthic organisms. Furthermore, algal primary production is the base of the food web and therefore greatly influences fisheries and aquaculture. Consequently, accurate phytoplankton concentration prediction offers ecosystem and economic benefits. Numerical ecosystem models are powerful tools to compute water quality variables including the phytoplankton concentration. Nevertheless, their accuracy ultimately depends on the uncertainty stemming from the external forcings, which further propagates and is compounded by the non-linear ecological processes incorporated in the ecological model. The Wadden Sea is a shallow, dynamically varying ecosystem with high turbidity, and therefore the uncertainty in the Suspended Particulate Matter (SPM) concentration field greatly influences the prediction of water quality variables. Considering the high level of uncertainty in the modelling process, it is advisable that an uncertainty estimate be provided together with a single-valued deterministic model output. Through the use of an ensemble prediction system in the Dutch coastal waters, the uncertainty in the modelled chlorophyll-a concentration has been estimated. The input ensemble is generated from perturbed model process parameters and external forcings through Latin hypercube sampling with dependence (LHSD). The simulation is carried out using the Delft3D Generic Ecological Model (GEM) with the advanced algal speciation module BLOOM, which is sufficiently well validated for primary production simulation in the southern North Sea. The output ensemble is post-processed to obtain the uncertainty estimate, and the results are validated against in-situ measurements and Remote Sensing (RS) data. The spatial uncertainty of chlorophyll-a concentration was derived using the produced ensemble spread maps. *This work has received funding from the European Union's Horizon

  6. Exploiting Active Subspaces to Quantify Uncertainty in the Numerical Simulation of the HyShot II Scramjet

    CERN Document Server

    Constantine, Paul; Larsson, Johan; Iaccarino, Gianluca

    2014-01-01

    We present a computational analysis of the reactive flow in a hypersonic scramjet engine with emphasis on the effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace re-parameterizes the operating conditions from seven well-characterized physical parameters to a single derived active variable. This dimension reduction enables computational studies to quantify uncertainty that would otherwise be intractable given the cost of the simulation; bootstrapping provides confidence intervals on the studies' results. In particular we (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) compute a global upper and lower bound on the quantity of interest, and (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest. We repeat this analysis for two values of ...
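
    The active-subspace construction referred to above boils down to an eigendecomposition of the average outer product of output gradients; the leading eigenvector defines the single active variable. A minimal numpy sketch is given below, using a cheap synthetic function of seven inputs in place of the expensive scramjet simulation.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic stand-in for the quantity of interest: f(x) = sin(a . x) over seven
        # normalized operating-condition inputs in [-1, 1]^7, with an analytic gradient.
        a = np.array([2.0, 0.5, 0.1, 1.5, 0.05, 0.02, 0.3])

        def grad_f(x):
            return np.cos(x @ a) * a

        # Monte Carlo estimate of C = E[grad f grad f^T] over the input distribution.
        X = rng.uniform(-1.0, 1.0, size=(500, 7))
        G = np.array([grad_f(x) for x in X])
        C = G.T @ G / len(X)

        # Eigendecomposition of C: a large gap after the first eigenvalue signals a
        # one-dimensional active subspace, and w1 defines the active variable y = w1 . x.
        eigvals, eigvecs = np.linalg.eigh(C)
        eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
        w1 = eigvecs[:, 0]

        print("eigenvalues:", np.round(eigvals, 4))
        print("active direction w1:", np.round(w1, 3))
        print("active variable for sample 0:", float(X[0] @ w1))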

  7. Structure Learning and Statistical Estimation in Distribution Networks - Part II

    Energy Technology Data Exchange (ETDEWEB)

    Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-13

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.

  8. Pairing in neutron matter: New uncertainty estimates and three-body forces

    CERN Document Server

    Drischler, C; Hebeler, K; Schwenk, A

    2016-01-01

    We present solutions of the BCS gap equation in the channels ${}^1S_0$ and ${}^3P_2-{}^3F_2$ in neutron matter based on nuclear interactions derived within chiral effective field theory (EFT). Our studies are based on a representative set of nonlocal nucleon-nucleon (NN) plus three-nucleon (3N) interactions up to next-to-next-to-next-to-leading order (N$^3$LO) as well as local and semilocal chiral NN interactions up to N$^2$LO and N$^4$LO, respectively. In particular, we investigate for the first time the impact of subleading 3N forces at N$^3$LO on pairing gaps and also derive uncertainty estimates by taking into account results for pairing gaps at different orders in the chiral expansion. Finally, we discuss different methods for obtaining self-consistent solutions of the gap equation. Besides the widely-used quasi-linear method by Khodel et al. we demonstrate that the modified Broyden method is well applicable and exhibits a robust convergence behavior. In contrast to Khodel's method it is based on a direc...

  9. Biases and Uncertainties in Physical Parameter Estimates of Lyman Break Galaxies from Broad-band Photometry

    CERN Document Server

    Lee, Seong-Kook; Ferguson, Henry C; Somerville, Rachel S; Wiklind, Tommy; Giavalisco, Mauro

    2008-01-01

    We investigate the biases and uncertainties in estimates of physical parameters of high-redshift Lyman break galaxies (LBGs), such as stellar mass, mean stellar population age, and star formation rate (SFR), obtained from broad-band photometry. By combining LCDM hierarchical structure formation theory, semi-analytic treatments of baryonic physics, and stellar population synthesis models, we construct model galaxy catalogs from which we select LBGs at redshifts z ~ 3.4, 4.0, and 5.0. The broad-band spectral energy distributions (SEDs) of these model LBGs are then analysed by fitting galaxy template SEDs derived from stellar population synthesis models with smoothly declining SFRs. We compare the statistical properties of LBGs' physical parameters -- such as stellar mass, SFR, and stellar population age -- as derived from the best-fit galaxy templates with the intrinsic values from the semi-analytic model. We find some trends in these distributions: first, when the redshift is known, SED-fitting methods reprodu...

  10. Multiscale error analysis, correction, and predictive uncertainty estimation in a flood forecasting system

    Science.gov (United States)

    Bogner, K.; Pappenberger, F.

    2011-07-01

    River discharge predictions often show errors that degrade the quality of forecasts. Three different methods of error correction are compared, namely, an autoregressive model with and without exogenous input (ARX and AR, respectively), and a method based on wavelet transforms. For the wavelet method, a Vector-Autoregressive model with exogenous input (VARX) is simultaneously fitted for the different levels of wavelet decomposition; after predicting the next time steps for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The error correction methods are combined with the Hydrological Uncertainty Processor (HUP) in order to estimate the predictive conditional distribution. For three stations along the Danube catchment, and using output from the European Flood Alert System (EFAS), we demonstrate that the method based on wavelets outperforms simpler methods and uncorrected predictions with respect to mean absolute error, Nash-Sutcliffe efficiency coefficient (and its decomposed performance criteria), informativeness score, and in particular forecast reliability. The wavelet approach efficiently accounts for forecast errors with scale properties of unknown source and statistical structure.
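
    Of the three correction methods compared above, the simplest is the autoregressive (AR) model of the forecast error. A minimal sketch of an AR(1) correction on synthetic discharge data is shown below; the wavelet/VARX variant adds a decomposition and reconstruction step around the same idea.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic "observed" discharge and a biased, noisy model forecast of it (m^3/s).
        t = np.arange(300)
        observed = 100 + 30 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 3, t.size)
        forecast = observed + 8 + rng.normal(0, 5, t.size)   # systematic + random error

        # Fit an AR(1) model e_t = a * e_{t-1} + b to the historical forecast errors.
        error = forecast[:-1] - observed[:-1]
        a, b = np.polyfit(error[:-1], error[1:], 1)

        # One-step-ahead correction: subtract the predicted error from the raw forecast.
        predicted_error = a * error[-1] + b
        corrected = forecast[-1] - predicted_error

        print(f"raw forecast error      : {forecast[-1] - observed[-1]:+.2f} m^3/s")
        print(f"corrected forecast error: {corrected - observed[-1]:+.2f} m^3/s")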

  11. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    Science.gov (United States)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the considered updating structural parameter, with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies and those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.

  12. Adaptive multiscale MCMC algorithm for uncertainty quantification in seismic parameter estimation

    KAUST Repository

    Tan, Xiaosi

    2014-08-05

    Formulating an inverse problem in a Bayesian framework has several major advantages (Sen and Stoffa, 1996). It allows finding multiple solutions subject to flexible a priori information and performing uncertainty quantification in the inverse problem. In this paper, we consider Bayesian inversion for parameter estimation in seismic wave propagation. Bayes' theorem allows writing the posterior distribution via the likelihood function and the prior distribution, where the latter represents our prior knowledge about physical properties. One of the popular algorithms for sampling this posterior distribution is Markov chain Monte Carlo (MCMC), which involves making proposals and calculating their acceptance probabilities. However, for large-scale problems, MCMC is prohibitively expensive as it requires many forward runs. In this paper, we propose a multilevel MCMC algorithm that employs multilevel forward simulations. Multilevel forward simulations are derived using Generalized Multiscale Finite Element Methods that we have proposed earlier (Efendiev et al., 2013a; Chung et al., 2013). Our overall Bayesian inversion approach provides a substantial speed-up both in the sampling process, via preconditioning using approximate posteriors, and in the computation of the forward problems for different proposals, by using the adaptive nature of multiscale methods. These aspects of the method are discussed in the paper. This paper is motivated by earlier work of M. Sen and his collaborators (Hong and Sen, 2007; Hong, 2008) who proposed the development of efficient MCMC techniques for seismic applications. In the paper, we present some preliminary numerical results.
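
    The multilevel preconditioning is specific to the paper, but the underlying sampler is the standard Metropolis-Hastings loop of proposals and acceptance probabilities. The sketch below shows that loop for a toy one-parameter 'forward model'; the multilevel variant would screen proposals with cheap coarse-scale solves before running the fine-scale one.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy forward model: predicted travel time (s) over a 1000 m path for a single
        # velocity parameter v (a stand-in for the expensive seismic wave simulation).
        def forward(v):
            return 1000.0 / v

        v_true, sigma = 2500.0, 0.02                  # true velocity (m/s), data noise (s)
        data = forward(v_true) + rng.normal(0, sigma)

        def log_posterior(v):
            if not 1000.0 < v < 5000.0:               # uniform prior bounds on velocity
                return -np.inf
            misfit = (forward(v) - data) / sigma
            return -0.5 * misfit ** 2                 # Gaussian likelihood, flat prior inside bounds

        # Random-walk Metropolis-Hastings: propose, compute acceptance probability, accept/reject.
        v, logp = 2000.0, log_posterior(2000.0)
        samples = []
        for _ in range(20000):
            v_prop = v + rng.normal(0, 50.0)
            logp_prop = log_posterior(v_prop)
            if np.log(rng.uniform()) < logp_prop - logp:
                v, logp = v_prop, logp_prop
            samples.append(v)

        samples = np.array(samples[5000:])            # discard burn-in
        print(f"posterior mean {samples.mean():.0f} m/s, posterior std {samples.std():.0f} m/s")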

  13. Bootstrap and Order Statistics for Quantifying Thermal-Hydraulic Code Uncertainties in the Estimation of Safety Margins

    Directory of Open Access Journals (Sweden)

    Enrico Zio

    2008-01-01

    Full Text Available In the present work, the uncertainties affecting the safety margins estimated from thermal-hydraulic code calculations are captured quantitatively by resorting to order statistics and the bootstrap technique. The proposed framework of analysis is applied to the estimation of the safety margin, with its confidence interval, of the maximum fuel cladding temperature reached during a complete group distribution blockage scenario in an RBMK-1500 nuclear reactor.
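
    A minimal illustration of the two ingredients named above, with synthetic numbers rather than the RBMK-1500 results: order statistics in the spirit of Wilks (the maximum of 59 random code runs is a one-sided 95 %/95 % tolerance bound) and a bootstrap to put a confidence interval on the resulting safety margin.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic code outputs: peak fuel cladding temperature (K) from N randomized runs.
        N = 59                                    # Wilks: max of 59 runs is a 95/95 upper bound
        runs = rng.normal(900.0, 25.0, N)
        safety_limit = 1000.0                     # illustrative limit, not a licensing value

        # Order statistics: the largest of the 59 samples bounds the 95th percentile
        # with 95 % confidence; the margin is the distance to the safety limit.
        upper_bound_9595 = np.max(runs)
        margin = safety_limit - upper_bound_9595

        # Bootstrap the margin to quantify its sampling uncertainty.
        boot = [safety_limit - np.max(rng.choice(runs, size=N, replace=True))
                for _ in range(2000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])

        print(f"95/95 upper bound: {upper_bound_9595:.1f} K")
        print(f"safety margin    : {margin:.1f} K (bootstrap 95 % CI: {lo:.1f} to {hi:.1f} K)")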

  14. Estimation of uncertainties in resonance parameters of {sup 56}Fe, {sup 239}Pu, {sup 240}Pu and {sup 238}U

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, Tsuneo; Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-05-01

    Uncertainties have been estimated for the resonance parameters of {sup 56}Fe, {sup 239}Pu, {sup 240}Pu and {sup 238}U contained in JENDL-3.2. Errors of the parameters were determined from the measurements on which the evaluation was based. The estimated errors have been compiled in MF32 of the ENDF format. The numerical results are given in tables. (author)

  15. Uncertainty in age-specific harvest estimates and consequences for white-tailed deer management

    Science.gov (United States)

    Collier, B.A.; Krementz, D.G.

    2007-01-01

    Age structure proportions (proportion of harvested individuals within each age class) are commonly used as support for regulatory restrictions and as input for deer population models. Such use requires critical evaluation when harvest regulations force hunters to selectively harvest specific age classes, due to the impact on the underlying population age structure. We used a stochastic population simulation model to evaluate the impact of using harvest proportions to evaluate changes in population age structure under a selective harvest management program at two scales. Using harvest proportions to parameterize the age-specific harvest segment of the model at the local scale showed that predictions of post-harvest age structure did not vary depending on whether selective harvest criteria (SHC) were in use or not. At the county scale, yearling frequency in the post-harvest population increased, but model predictions indicated that the post-harvest population size of 2.5-year-old males would decline below levels found before implementation of the antler restriction, reducing the number of individuals recruited into older age classes. Across the range of age-specific harvest rates modeled, our simulation predicted that underestimation of age-specific harvest rates has considerable influence on predictions of post-harvest population age structure. We found that the consequence of uncertainty in harvest rates corresponds to uncertainty in predictions of residual population structure, and this correspondence is proportional to scale. Our simulations also indicate that regardless of the use of harvest proportions or harvest rates, at either the local or county scale the modeled SHC had a high probability (>0.60 and >0.75, respectively) of eliminating recruitment into >2.5-year-old age classes. Although frequently used to increase population age structure, our modeling indicated that selective harvest criteria can decrease or eliminate the number of white-tailed deer recruited into older

  16. Monitoring Niger River Floods from satellite Rainfall Estimates : overall skill and rainfall uncertainty propagation.

    Science.gov (United States)

    Gosset, Marielle; Casse, Claire; Peugeot, christophe; boone, aaron; pedinotti, vanessa

    2015-04-01

    Global measurement of rainfall offers new opportunities for hydrological monitoring, especially for some of the largest tropical rivers where the rain gauge network is sparse and radar is not available. As a member of the GPM constellation, the new French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget in the tropical atmosphere, contributes to better monitoring of rainfall in the inter-tropical zone. As part of this mission, research is being developed on the use of satellite rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performance and rain-product uncertainties are resolution dependent. This paper analyses the potential of satellite rainfall products combined with hydrological modeling to monitor the Niger river floods in the city of Niamey, Niger. A dramatic increase of these floods has been observed in the last decades. The study focuses on the 125000 km² area in the vicinity of Niamey, where local runoff is responsible for the most extreme floods recorded in recent years. Several rainfall products are tested as forcing for the SURFEX-TRIP hydrological simulations. Differences in terms of rainfall amount, number of rainy days, spatial extension of the rainfall events and frequency distribution of the rain rates are found among the products. Their impacts on the simulated outflow are analyzed. The simulations based on the real-time estimates produce an excess in the discharge. For flood prediction, the problem can be overcome by a prior adjustment of the products - as done here with probability matching - or by analysing the simulated discharge in terms of percentile or anomaly. All tested products exhibit some

  17. Carbon dioxide and methane measurements from the Los Angeles Megacity Carbon Project - Part 1: calibration, urban enhancements, and uncertainty estimates

    Science.gov (United States)

    Verhulst, Kristal R.; Karion, Anna; Kim, Jooil; Salameh, Peter K.; Keeling, Ralph F.; Newman, Sally; Miller, John; Sloop, Christopher; Pongetti, Thomas; Rao, Preeti; Wong, Clare; Hopkins, Francesca M.; Yadav, Vineet; Weiss, Ray F.; Duren, Riley M.; Miller, Charles E.

    2017-07-01

    We report continuous surface observations of carbon dioxide (CO2) and methane (CH4) from the Los Angeles (LA) Megacity Carbon Project during 2015. We devised a calibration strategy, methods for selection of background air masses, calculation of urban enhancements, and a detailed algorithm for estimating uncertainties in urban-scale CO2 and CH4 measurements. These methods are essential for understanding carbon fluxes from the LA megacity and other complex urban environments globally. We estimate background mole fractions entering LA using observations from four extra-urban sites including two marine sites located south of LA in La Jolla (LJO) and offshore on San Clemente Island (SCI), one continental site located in Victorville (VIC), in the high desert northeast of LA, and one continental/mid-troposphere site located on Mount Wilson (MWO) in the San Gabriel Mountains. We find that a local marine background can be established to within ~1 ppm CO2 and ~10 ppb CH4 using these local measurement sites. Overall, atmospheric carbon dioxide and methane levels are highly variable across Los Angeles. Urban and suburban sites show moderate to large CO2 and CH4 enhancements relative to a marine background estimate. The USC (University of Southern California) site near downtown LA exhibits median hourly enhancements of ~20 ppm CO2 and ~150 ppb CH4 during 2015, as well as ~15 ppm CO2 and ~80 ppb CH4 during mid-afternoon hours (12:00-16:00 LT, local time), which is the typical period of focus for flux inversions. The estimated measurement uncertainty is typically better than 0.1 ppm CO2 and 1 ppb CH4 based on the repeated standard gas measurements from the LA sites during the last 2 years, similar to Andrews et al. (2014). The largest component of the measurement uncertainty is due to the single-point calibration method; however, the uncertainty in the background mole fraction is much larger than the measurement uncertainty. The background uncertainty for the marine

  18. A method countries can use to estimate changes in carbon stored in harvested wood products and the uncertainty of such estimates

    Science.gov (United States)

    Kenneth E. Skog; Kim Pingoud; James E. Smith

    2004-01-01

    A method is suggested for estimating additions to carbon stored in harvested wood products (HWP) and for evaluating uncertainty. The method uses data on HWP production and trade from several decades and tracks annual additions to pools of HWP in use, removals from use, additions to solid waste disposal sites (SWDS), and decay from SWDS. The method is consistent with...
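
    The bookkeeping described above (annual additions to an in-use pool, transfers to solid waste disposal sites, and first-order decay) can be sketched in a few lines. The half-lives, inflows, and the assumption that all retired products go to SWDS are placeholders for illustration, not the defaults suggested by the authors.

        import math

        # Illustrative half-lives (years) for the two harvested wood product pools.
        half_life_in_use, half_life_swds = 30.0, 20.0
        k_use = math.log(2) / half_life_in_use
        k_swds = math.log(2) / half_life_swds

        # Hypothetical annual additions of carbon to products in use (Mt C per year).
        annual_inflow = [5.0, 5.2, 5.1, 5.4, 5.6]

        in_use, swds = 0.0, 0.0
        for year, inflow in enumerate(annual_inflow, start=1):
            retired = in_use * (1 - math.exp(-k_use))      # first-order removal from use
            decayed = swds * (1 - math.exp(-k_swds))       # first-order decay in SWDS
            in_use = in_use - retired + inflow
            swds = swds - decayed + retired                # assume all retired products reach SWDS
            print(f"year {year}: in use {in_use:6.2f} Mt C, in SWDS {swds:6.2f} Mt C")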

  19. Estimating the magnitude of prediction uncertainties for field-scale P loss models

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, an uncertainty analysis for the Annual P Loss Estima...

  20. Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model

    Science.gov (United States)

    Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...

  1. Application of the emission inventory model TEAM: Uncertainties in dioxin emission estimates for central Europe

    NARCIS (Netherlands)

    Pulles, M.P.J.; Kok, H.; Quass, U.

    2006-01-01

    This study uses an improved emission inventory model to assess the uncertainties in emissions of dioxins and furans associated with both knowledge on the exact technologies and processes used, and with the uncertainties of both activity data and emission factors. The annual total emissions for the y

  2. Implementation of unscented transform to estimate the uncertainty of a liquid flow standard system

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Sejong; Choi, Hae-Man; Yoon, Byung-Ro; Kang, Woong [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2017-03-15

    First-order partial derivatives of a mathematical model are an essential part of evaluating the measurement uncertainty of a liquid flow standard system according to the Guide to the expression of uncertainty in measurement (GUM). Although the GUM provides a straightforward method to evaluate the measurement uncertainty of volume flow rate, the first-order partial derivatives can be complicated. The mathematical model of volume flow rate in a liquid flow standard system has a cross-correlation between liquid density and the buoyancy correction factor. This cross-correlation can make derivation of the first-order partial derivatives difficult. Monte Carlo simulation can be used as an alternative method to circumvent this difficulty. However, Monte Carlo simulation requires large computational resources for a correct simulation, because it must address the completeness issue of whether an ideal or a real operator conducts the experiment to evaluate the measurement uncertainty. Thus, the Monte Carlo simulation needs a large number of samples to ensure that the uncertainty evaluation is as close to the GUM as possible. The unscented transform can alleviate this problem, because it can be regarded as a Monte Carlo simulation with an infinite number of samples; in this sense, the unscented transform considers the uncertainty evaluation with respect to the ideal operator. Thus, the unscented transform can evaluate the same measurement uncertainty as the GUM provides.
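
    The idea above can be illustrated with a toy measurement model. The sketch below propagates uncertainty through a simplified gravimetric flow equation Q = m / (rho * t) with the classic 2n+1 sigma points of the unscented transform and compares the result to brute-force Monte Carlo; the model, input values and uncertainties are invented, and the real flow-standard model includes the buoyancy correction and its cross-correlation with density.

        import numpy as np

        # Toy measurement model (illustrative only): volume flow Q = m / (rho * t)
        # from collected mass m (kg), liquid density rho (kg/m^3) and collection time t (s).
        def model(x):
            m, rho, t = x
            return m / (rho * t)

        mean = np.array([50.0, 998.2, 60.0])
        std = np.array([0.01, 0.2, 0.005])       # standard uncertainties, assumed uncorrelated
        P = np.diag(std ** 2)
        n, kappa = len(mean), 0.0

        # Unscented transform: 2n+1 sigma points and their weights.
        S = np.linalg.cholesky((n + kappa) * P)
        sigma_pts = np.vstack([mean, mean + S.T, mean - S.T])
        weights = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
        weights[0] = kappa / (n + kappa)

        y = np.array([model(p) for p in sigma_pts])
        ut_mean = weights @ y
        ut_std = np.sqrt(weights @ (y - ut_mean) ** 2)

        # Brute-force Monte Carlo for comparison.
        rng = np.random.default_rng(5)
        mc = np.array([model(x) for x in rng.multivariate_normal(mean, P, 100_000)])

        print(f"UT : Q = {ut_mean:.6e} m^3/s, u(Q) = {ut_std:.2e} m^3/s")
        print(f"MC : Q = {mc.mean():.6e} m^3/s, u(Q) = {mc.std():.2e} m^3/s")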

  3. Conversion factor and uncertainty estimation for quantification of towed gamma-ray detector measurements in Tohoku coastal waters

    Science.gov (United States)

    Ohnishi, S.; Thornton, B.; Kamada, S.; Hirao, Y.; Ura, T.; Odano, N.

    2016-05-01

    Factors to convert the count rate of a NaI(Tl) scintillation detector to the concentration of radioactive cesium in marine sediments are estimated for a towed gamma-ray detector system. The response of the detector to a unit concentration of radioactive cesium is calculated by Monte Carlo radiation transport simulation, considering the vertical profile of radioactive material measured in core samples. The conversion factors are acquired by integrating the contribution of each layer and are normalized by the concentration in the surface sediment layer. At the same time, the uncertainty of the conversion factors is formulated and estimated. The combined standard uncertainty of the radioactive cesium concentration measured by the towed gamma-ray detector is around 25 percent. The values of uncertainty, often referred to as relative root mean square errors in other works, between sediment core sampling measurements and towed detector measurements were 16 percent in the investigation made near the Abukuma River mouth and 5.2 percent in Sendai Bay, respectively. Most of the uncertainty is due to interpolation of the conversion factors between core samples and uncertainty in the detector's burial depth. The results of the towed measurements agree well with laboratory-analysed sediment samples. Also, the concentrations of radioactive cesium at the intersections of the survey lines are consistent. The consistency with sampling results and between different lines' transects demonstrates the availability and reproducibility of the towed gamma-ray detector system.

  4. Exploring the uncertainty associated with satellite-based estimates of premature mortality due to exposure to fine particulate matter

    Directory of Open Access Journals (Sweden)

    B. Ford

    2015-09-01

    Full Text Available The negative impacts of fine particulate matter (PM2.5) exposure on human health are a primary motivator for air quality research. However, estimates of the air pollution health burden vary considerably and strongly depend on the datasets and methodology. Satellite observations of aerosol optical depth (AOD) have been widely used to overcome limited coverage from surface monitoring and to assess the global population exposure to PM2.5 and the associated premature mortality. Here we quantify the uncertainty in determining the burden of disease using this approach, discuss different methods and datasets, and explain sources of discrepancies among values in the literature. For this purpose we primarily use the MODIS satellite observations in concert with the GEOS-Chem chemical transport model. We contrast results in the United States and China for the years 2004-2011. We estimate that in the United States, exposure to PM2.5 accounts for approximately 4 % of total deaths compared to 22 % in China (using satellite-based exposure), which falls within the range of previous estimates. The difference in estimated mortality burden based solely on a global model vs. that derived from satellite is approximately 9 % for the US and 4 % for China on a nationwide basis, although regionally the differences can be much greater. This difference is overshadowed by the uncertainty in the methodology for deriving the PM2.5 burden from satellite observations, which we quantify to be on the order of 20 % due to uncertainties in the AOD-to-surface-PM2.5 relationship, 10 % due to the satellite observational uncertainty, and 30 % or greater uncertainty associated with the application of concentration response functions to estimated exposure.

  5. Exploring the uncertainty associated with satellite-based estimates of premature mortality due to exposure to fine particulate matter

    Science.gov (United States)

    Ford, Bonne; Heald, Colette L.

    2016-03-01

    The negative impacts of fine particulate matter (PM2.5) exposure on human health are a primary motivator for air quality research. However, estimates of the air pollution health burden vary considerably and strongly depend on the data sets and methodology. Satellite observations of aerosol optical depth (AOD) have been widely used to overcome limited coverage from surface monitoring and to assess the global population exposure to PM2.5 and the associated premature mortality. Here we quantify the uncertainty in determining the burden of disease using this approach, discuss different methods and data sets, and explain sources of discrepancies among values in the literature. For this purpose we primarily use the MODIS satellite observations in concert with the GEOS-Chem chemical transport model. We contrast results in the United States and China for the years 2004-2011. Using the Burnett et al. (2014) integrated exposure response function, we estimate that in the United States, exposure to PM2.5 accounts for approximately 2 % of total deaths compared to 14 % in China (using satellite-based exposure), which falls within the range of previous estimates. The difference in estimated mortality burden based solely on a global model vs. that derived from satellite is approximately 14 % for the US and 2 % for China on a nationwide basis, although regionally the differences can be much greater. This difference is overshadowed by the uncertainty in the methodology for deriving PM2.5 burden from satellite observations, which we quantify to be on the order of 20 % due to uncertainties in the AOD-to-surface-PM2.5 relationship, 10 % due to the satellite observational uncertainty, and 30 % or greater uncertainty associated with the application of concentration response functions to estimated exposure.
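
    The abstract quotes three relative uncertainty components for the satellite-derived burden but does not state how they combine; if they are treated as independent, a simple quadrature sum gives a rough overall figure, as sketched below.

        import math

        # Relative uncertainty components quoted in the abstract (as fractions).
        components = {
            "AOD-to-surface-PM2.5 relationship": 0.20,
            "satellite observational uncertainty": 0.10,
            "concentration-response functions": 0.30,
        }

        # Quadrature sum under an independence assumption (the paper may combine them differently).
        total = math.sqrt(sum(v ** 2 for v in components.values()))
        print(f"combined relative uncertainty ~ {total:.0%}")   # roughly 37 %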

  6. Towards a quantitative, measurement-based estimate of the uncertainty in photon mass attenuation coefficients at radiation therapy energies.

    Science.gov (United States)

    Ali, E S M; Spencer, B; McEwen, M R; Rogers, D W O

    2015-02-21

    In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy, i.e. 100 keV (orthovoltage) to 25 MeV, using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ∼0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce the cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. The photon cross section uncertainty is shown to be smaller than the current qualitative 'envelope of uncertainty' of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol. 44 R1-22).

  7. A procedure for the estimation of the numerical uncertainty of CFD calculations based on grid refinement studies

    Energy Technology Data Exchange (ETDEWEB)

    Eça, L. [Instituto Superior Técnico, Department of Mechanical Engineering, Av. Rovisco Pais, 1049-001 Lisbon (Portugal); Hoekstra, M. [Maritime Research Institute Netherlands, PO Box 28 6700 AA, Wageningen (Netherlands)

    2014-04-01

    This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well known Grid Convergence Index. Examples of application of the procedure are included. - Highlights: • Estimation of the numerical uncertainty of any integral or local flow quantity. • Least squares fits to power series expansions to handle noisy data. • Excellent results obtained for manufactured solutions. • Consistent results obtained for practical CFD calculations. • Reduces to the well known Grid Convergence Index for well-behaved data sets.
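
    For the special case of three systematically refined grids with monotonic convergence, the procedure described above reduces to the familiar observed-order and Grid Convergence Index calculation. The sketch below shows that limiting case with invented solution values; the paper's full method generalizes it with least-squares fits over more grids and noisy data.

        import math

        # Illustrative grid triplet (coarse -> fine) for one flow quantity; the values
        # and the constant refinement ratio are invented, not taken from the paper.
        h = [0.04, 0.02, 0.01]              # representative cell sizes
        f = [0.9130, 0.9032, 0.9009]        # computed values on each grid
        r = h[0] / h[1]                     # refinement ratio (assumed constant)

        # Observed order of grid convergence from the three solutions.
        p = math.log((f[0] - f[1]) / (f[1] - f[2])) / math.log(r)

        # Richardson-extrapolated estimate and Grid Convergence Index on the fine grid.
        f_extrap = f[2] + (f[2] - f[1]) / (r ** p - 1)
        rel_err = abs((f[2] - f[1]) / f[2])
        gci_fine = 1.25 * rel_err / (r ** p - 1)    # safety factor 1.25 for a three-grid study

        print(f"observed order p   = {p:.2f}")
        print(f"extrapolated value = {f_extrap:.5f}")
        print(f"GCI (fine grid)    = {gci_fine:.2%}")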

  8. Uncertainties in early-stage capital cost estimation of process design – a case study on biorefinery design

    DEFF Research Database (Denmark)

    Cheali, Peam; Gernaey, Krist; Sin, Gürkan

    2015-01-01

    Capital investment, next to the product demand, sales, and production costs, is one of the key metrics commonly used for project evaluation and feasibility assessment. Estimating the investment costs of a new product/process alternative during early-stage design is a challenging task, which ... ) the Monte Carlo technique as an error propagation method based on expert input when cost data are not available. Four well-known models for early-stage cost estimation are reviewed and analyzed using the methodology. The significance of uncertainties of cost data for early-stage process design is highlighted using the synthesis and design of a biorefinery as a case study. The impact of uncertainties in cost estimation on the identification of optimal processing paths is indeed found to be profound. To tackle this challenge, a comprehensive techno-economic risk analysis framework is presented to enable ...

  9. Uncertainty in global groundwater storage estimates in a Total Groundwater Stress framework

    Science.gov (United States)

    Richey, Alexandra S.; Thomas, Brian F.; Lo, Min‐Hui; Swenson, Sean; Rodell, Matthew

    2015-01-01

    Groundwater is a finite resource under continuous external pressures. Current unsustainable groundwater use threatens the resilience of aquifer systems and their ability to provide a long-term water source. Groundwater storage is considered to be a factor of groundwater resilience, although the extent to which resilience can be maintained has yet to be explored in depth. In this study, we assess the limit of groundwater resilience in the world's largest groundwater systems with remote sensing observations. The Total Groundwater Stress (TGS) ratio, defined as the ratio of total storage to the groundwater depletion rate, is used to explore the timescales to depletion in the world's largest aquifer systems and the associated groundwater buffer capacity. We find that the current state of knowledge of large-scale groundwater storage has uncertainty ranges across orders of magnitude that severely limit the characterization of resilience in the study aquifers. Additionally, we show that groundwater availability, traditionally defined as recharge and redefined in this study as total storage, can alter the systems that are considered to be stressed versus unstressed. We find that remote sensing observations from NASA's Gravity Recovery and Climate Experiment can assist in providing such information at the scale of a whole aquifer. For example, we demonstrate that a groundwater depletion rate in the Northwest Sahara Aquifer System of 2.69 ± 0.8 km³/yr would result in the aquifer being depleted to 90% of its total storage in as few as 50 years given an initial storage estimate of 70 km³. PMID:26900184

  10. A methodology for estimating the uncertainty in model parameters applying the robust Bayesian inferences

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Joo Yeon; Lee, Seung Hyun; Park, Tai Jin [Korean Association for Radiation Application, Seoul (Korea, Republic of)

    2016-06-15

    Any real application of Bayesian inference must acknowledge that both the prior distribution and the likelihood function have only been specified as more or less convenient approximations to whatever the analyzer's true belief might be. If the inferences from the Bayesian analysis are to be trusted, it is important to determine that they are robust to such variations of prior and likelihood as might also be consistent with the analyzer's stated beliefs. Robust Bayesian inference was applied to atmospheric dispersion assessment using a Gaussian plume model. The scopes of contamination were specified as the uncertainties of distribution type and parametric variability. The probabilistic distribution of the model parameters was assumed to be contaminated by the symmetric unimodal and unimodal distributions. The distribution of the sector-averaged relative concentrations was then calculated by applying the contaminated priors to the model parameters. The sector-averaged concentrations for each stability class were compared by applying the symmetric unimodal and unimodal priors, respectively, as the contaminated priors based on the class of ε-contamination. Although ε was assumed to be 10%, the medians obtained with the symmetric unimodal priors agreed within about 10% with those obtained with the plausible priors. However, the medians obtained with the unimodal priors deviated by up to 20% for a few downwind distances. The robustness question has been addressed by estimating how robust the results of the Bayesian inferences are to reasonable variations of the plausible priors. From these robust inferences, it is reasonable to apply the symmetric unimodal priors for analyzing the robustness of the Bayesian inferences.

  11. Geophysical flows under location uncertainty, Part II: Quasi-geostrophy and efficient ensemble spreading

    CERN Document Server

    Resseguier, Valentin; Chapron, Bertrand

    2016-01-01

    Models under location uncertainty are derived assuming that a component of the velocity is uncorrelated in time. The material derivative is accordingly modified to include an advection correction, inhomogeneous and anisotropic diffusion terms and a multiplicative noise contribution. In this paper, simplified geophysical dynamics are derived from a Boussinesq model under location uncertainty. Invoking usual scaling approximations and a moderate influence of the subgrid terms, stochastic formulations are obtained for the stratified Quasi-Geostrophy (QG) and the Surface Quasi-Geostrophy (SQG) models. Based on numerical simulations, benefits of the proposed stochastic formalism are demonstrated. A single realization of models under location uncertainty can restore small-scale structures. An ensemble of realizations further helps to assess model error prediction and outperforms perturbed deterministic models by one order of magnitude. Such a high uncertainty quantification skill is of primary interest for assimil...

  12. Impact of Uncertainty on Loss Estimates for a Repeat of the 1908 Messina-Reggio Calabria Earthquake in Southern Italy

    Science.gov (United States)

    Franco, Guillermo; Shen-Tu, BingMing; Goretti, Agostino; Bazzurro, Paolo; Valensise, Gianluca

    2008-07-01

    Increasing sophistication in the insurance and reinsurance market is stimulating the move towards catastrophe models that offer a greater degree of flexibility in the definition of model parameters and model assumptions. This study explores the impact of uncertainty in the input parameters on the loss estimates by departing from the exclusive use of mean values to establish the earthquake event mechanism, the ground motion fields, or the damageability of the building stock. Here the potential losses due to a repeat of the 1908 Messina-Reggio Calabria event are calculated using different plausible alternatives found in the literature, encompassing 12 event scenarios, 2 different ground motion prediction equations, and 16 combinations of damage functions for the building stock, for a total of 384 loss scenarios. These results constitute the basis for a sensitivity analysis of the different assumptions on the loss estimates, which allows the model user to estimate the impact of the uncertainty in input parameters and the potential spread of the model results. For the event under scrutiny, average losses would amount today to about 9000 to 10,000 million euros. The uncertainty in the model parameters is reflected in the high coefficient of variation of this loss, reaching approximately 45%. The choice of ground motion prediction equations and of vulnerability functions for the building stock contributes the most to the uncertainty in the loss estimates. This indicates that the application of non-local-specific information has a great impact on the spread of potential catastrophic losses. In order to close this uncertainty gap, more exhaustive documentation practices in insurance portfolios will have to go hand in hand with greater flexibility in the model input parameters.
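
    The 384 loss scenarios above are simply the Cartesian product of 12 event scenarios, 2 ground motion prediction equations, and 16 damage-function combinations. The sketch below reproduces that combinatorial layout and the coefficient-of-variation calculation with made-up loss modifiers, only to show the structure of such a sensitivity analysis.

        import itertools
        import statistics

        # Hypothetical multiplicative loss modifiers; the real model components differ.
        events = [1.0 + 0.05 * i for i in range(12)]          # 12 event scenarios
        gmpes = [0.9, 1.1]                                    # 2 ground motion prediction equations
        damage_sets = [0.8 + 0.025 * j for j in range(16)]    # 16 damage-function combinations

        base_loss = 9500.0   # million EUR, illustrative central value only
        losses = [base_loss * e * g * d
                  for e, g, d in itertools.product(events, gmpes, damage_sets)]

        mean_loss = statistics.mean(losses)
        cov = statistics.stdev(losses) / mean_loss            # coefficient of variation
        print(f"{len(losses)} scenarios, mean loss {mean_loss:,.0f} MEUR, CoV {cov:.0%}")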

  13. ATLAS Calorimeter Response to Single Isolated Hadrons and Estimation of the Calorimeter Jet Scale Uncertainty

    CERN Document Server

    The ATLAS collaboration

    2011-01-01

    The ATLAS calorimeter response to single isolated hadrons is measured using an integrated luminosity of approximately 866 μb⁻¹ of proton-proton collisions at a center-of-mass energy of √s = 7 TeV collected during 2010 by the ATLAS experiment. The calorimeter jet energy scale uncertainty is also addressed, propagating the response uncertainty of single charged and neutral particles to jets. The calorimeter uncertainty is 2–5% on central isolated hadrons and 1–3% on the final calorimeter jet energy scale.

  14. A Bayesian analysis of sensible heat flux estimation: Quantifying uncertainty in meteorological forcing to improve model prediction

    KAUST Repository

    Ershadi, Ali

    2013-05-01

    The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model. The Bayesian approach allows for an explicit quantification of the uncertainties in input variables: a source of error generally ignored in surface heat flux estimation. An application using field measurements from the Soil Moisture Experiment 2002 is presented. The spatial variability of selected input meteorological variables in a multitower site is used to formulate the prior estimates for the sampling uncertainties, and the likelihood function is formulated assuming Gaussian errors in the SEBS model. Land surface temperature, air temperature, and wind speed were estimated by sampling their posterior distribution using a Markov chain Monte Carlo algorithm. Results verify that Bayesian-inferred air temperature and wind speed were generally consistent with those observed at the towers, suggesting that local observations of these variables were spatially representative. Uncertainties in the land surface temperature appear to have the strongest effect on the estimated sensible heat flux, with Bayesian-inferred values differing by up to ±5°C from the observed data. These differences suggest that the footprint of the in situ measured land surface temperature is not representative of the larger-scale variability. As such, these measurements should be used with caution in the calculation of surface heat fluxes and highlight the importance of capturing the spatial variability in the land surface temperature: particularly, for remote sensing retrieval algorithms that use this variable for flux estimation.

  15. Methodology for uncertainty calculation of net total cooling effect estimation for rating room air conditioners and packaged terminal air conditioners

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca Diaz, Nestor [Universidad Tecnologica de Pereira, Facultad de Ingenieria Mecanica, Pereira (Colombia); University of Liege, Campus du Sart Tilman, Bat: B49, P33, B-4000 Liege (Belgium)

    2009-09-15

    This article presents the general procedure for calculating the uncertainty of the net total cooling effect estimated when rating room air conditioners and packaged terminal air conditioners, by means of measurements carried out in a test bench specially designed for this purpose. The uncertainty analysis presented in this work seeks to establish a degree of confidence in the experimental results. This is particularly important considering that international standards related to this type of analysis are ambiguous in their treatment of the subject. The uncertainty analysis is, on the other hand, an indispensable requirement of the international standard ISO 17025 [ISO, 2005. International Standard 17025. General Requirements for the Competence of Testing and Calibration Laboratories. International Organization for Standardization, Geneva.], which must be applied to obtain the required quality levels according to the World Trade Organization (WTO). (author)

  16. Performance of two predictive uncertainty estimation approaches for conceptual Rainfall-Runoff Model: Bayesian Joint Inference and Hydrologic Uncertainty Post-processing

    Science.gov (United States)

    Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix

    2017-04-01

    It is particularly important to emphasize the role of uncertainty when model forecasts are used to support decision-making and water management. This research compares two approaches for evaluating predictive uncertainty in hydrological modeling. The first approach is the Bayesian Joint Inference of hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, in general, both approaches are able to provide similar predictive performances. However, differences between them can arise in basins with complex hydrology (e.g., ephemeral basins), because the results obtained with Bayesian Joint Inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results of the Model Conditional Processor are mainly influenced by the selected model of tails, or even by the selected full probability distribution model of the data in real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a combination of both methodologies which could be useful to achieve less biased hydrological parameter estimation: first, the predictive distribution is obtained through the Model Conditional Processor; second, this predictive distribution is used to derive the corresponding additive error model, which is employed for the hydrological parameter …

  17. Application of the Monte Carlo Method for the Estimation of Uncertainty in Radiofrequency Field Spot Measurements

    Science.gov (United States)

    Iakovidis, S.; Apostolidis, C.; Samaras, T.

    2015-04-01

    The objective of the present work is the application of the Monte Carlo method (GUM Supplement 1, GUM S1) for evaluating uncertainty in electromagnetic field measurements and the comparison of the results with the ones obtained using the 'standard' method (GUM). In particular, the two methods are applied in order to evaluate the field measurement uncertainty using a frequency-selective radiation meter and the Total Exposure Quotient (TEQ) uncertainty. Comparative results are presented in order to highlight cases where GUM S1 results deviate significantly from the ones obtained using GUM, such as the presence of a non-linear mathematical model connecting the inputs with the output quantity (case of the TEQ model) or the presence of a dominant non-normal distribution of an input quantity (case of U-shaped mismatch uncertainty). The deviation between the results obtained from the two methods can even lead to different decisions regarding conformance with the exposure reference levels.
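
    A minimal sketch of the two approaches being compared, under assumed inputs: a deliberately non-linear measurement model with one Gaussian and one U-shaped input is propagated once with the GUM law of propagation (first-order sensitivity coefficients) and once by Monte Carlo in the spirit of GUM Supplement 1. The model y = x1²·x2 and all numeric values are illustrative; they are not the TEQ model of the cited study.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical measurement model, deliberately non-linear so GUM and Monte Carlo can disagree.
    def model(x1, x2):
        return x1**2 * x2

    x1_mean, u_x1 = 10.0, 1.0        # field-strength-like input and its standard uncertainty
    x2_mean, u_x2 = 1.0, 0.05        # mismatch-like factor: U-shaped (arcsine) distribution assumed

    # --- GUM (law of propagation, first-order sensitivity coefficients) ---
    c1 = 2 * x1_mean * x2_mean       # dy/dx1
    c2 = x1_mean**2                  # dy/dx2
    u_gum = np.sqrt((c1 * u_x1)**2 + (c2 * u_x2)**2)
    print("GUM: y = %.1f, u(y) = %.2f" % (model(x1_mean, x2_mean), u_gum))

    # --- Monte Carlo (GUM Supplement 1 style propagation of distributions) ---
    n = 200_000
    x1 = rng.normal(x1_mean, u_x1, n)
    x2 = x2_mean + u_x2 * np.sqrt(2) * np.sin(rng.uniform(0, 2 * np.pi, n))  # U-shaped input
    y = model(x1, x2)
    lo, hi = np.percentile(y, [2.5, 97.5])
    print("MC:  y = %.1f, u(y) = %.2f, 95%% interval = [%.1f, %.1f]" % (y.mean(), y.std(), lo, hi))
    ```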

  18. Uncertainty Representation and Interpretation in Model-based Prognostics Algorithms based on Kalman Filter Estimation

    Data.gov (United States)

    National Aeronautics and Space Administration — This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman...

  19. Uncertainty evaluation of mass discharge estimates from a contaminated site using a fully Bayesian framework

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, W.; Tuxen, N.

    2010-01-01

    It is important to quantify the associated uncertainties. Here a rigorous approach for quantifying the uncertainty in the mass discharge across a multilevel control plane is presented. The method accounts for (1) conceptual model uncertainty using multiple conceptual models and Bayesian model averaging (BMA), (2) heterogeneity through Bayesian geostatistics with an uncertain geostatistical model, and (3) measurement uncertainty. Through unconditional and conditional Monte Carlo simulation, ensembles of steady state plume realizations are generated. The conditional ensembles honor all measured data at the control plane for each of the conceptual models considered. The probability distribution of mass discharge is obtained by combining all ensembles via BMA. The method was applied to a trichloroethylene-contaminated site located in northern Copenhagen. Four essentially different conceptual models based on two source zone …

  20. Effects of Cracking Test Conditions on Estimation Uncertainty for Weibull Parameters Considering Time-Dependent Censoring Interval

    Directory of Open Access Journals (Sweden)

    Jae Phil Park

    2016-12-01

    Full Text Available It is extremely difficult to predict the initiation time of cracking due to a large time spread in most cracking experiments. Thus, probabilistic models, such as the Weibull distribution, are usually employed to model the initiation time of cracking. Therefore, the parameters of the Weibull distribution are estimated from data collected from a cracking test. However, although the development of a reliable cracking model under ideal experimental conditions (e.g., a large number of specimens and narrow censoring intervals could be achieved in principle, it is not straightforward to quantitatively assess the effects of the ideal experimental conditions on model estimation uncertainty. The present study investigated the effects of key experimental conditions, including the time-dependent effect of the censoring interval length, on the estimation uncertainties of the Weibull parameters through Monte Carlo simulations. The simulation results provided quantified estimation uncertainties of Weibull parameters in various cracking test conditions. Hence, it is expected that the results of this study can offer some insight for experimenters developing a probabilistic crack initiation model by performing experiments.
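
    The following sketch mimics the kind of Monte Carlo experiment described above: synthetic Weibull crack-initiation times are censored into inspection intervals, the Weibull parameters are re-estimated from the interval-censored likelihood, and the spread of the estimates over repeated virtual tests quantifies the estimation uncertainty. The true parameters, specimen count, and inspection interval are illustrative assumptions, not values from the cited study.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    rng = np.random.default_rng(3)

    # Assumed "true" crack-initiation model and test conditions (illustrative values).
    shape_true, scale_true = 2.0, 1000.0       # Weibull beta and eta (hours)
    n_specimens, inspect_every = 20, 100.0     # specimens per test, censoring interval (hours)

    def neg_loglik(params, lo, hi):
        """Interval-censored Weibull log-likelihood: cracking occurred somewhere in (lo, hi]."""
        shape, scale = np.exp(params)          # log-parameterisation keeps both positive
        p = weibull_min.cdf(hi, shape, scale=scale) - weibull_min.cdf(lo, shape, scale=scale)
        return -np.sum(np.log(np.clip(p, 1e-12, None)))

    def one_experiment():
        t = weibull_min.rvs(shape_true, scale=scale_true, size=n_specimens, random_state=rng)
        hi = np.ceil(t / inspect_every) * inspect_every   # first inspection after cracking
        lo = hi - inspect_every
        res = minimize(neg_loglik, x0=np.log([1.5, 800.0]), args=(lo, hi), method="Nelder-Mead")
        return np.exp(res.x)                               # estimated (shape, scale)

    estimates = np.array([one_experiment() for _ in range(500)])
    print("shape estimate: mean %.2f, std %.2f" % (estimates[:, 0].mean(), estimates[:, 0].std()))
    print("scale estimate: mean %.0f, std %.0f" % (estimates[:, 1].mean(), estimates[:, 1].std()))
    ```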

  1. Comparison of the GUM and Monte Carlo methods on the flatness uncertainty estimation in coordinate measuring machine

    Directory of Open Access Journals (Sweden)

    Jalid Abdelilah

    2016-01-01

    Full Text Available In the engineering industry, control of manufactured parts is usually done on a coordinate measuring machine (CMM): a sensor mounted at the end of the machine probes a set of points on the surface to be inspected, data processing is performed subsequently using software, and the result of this measurement process either confirms or rejects the conformity of the part. Measurement uncertainty is a crucial parameter for making the right decisions, and not taking this parameter into account can therefore sometimes lead to aberrant decisions. The determination of measurement uncertainty on a CMM is a complex task owing to the variety of influencing factors. Through this study, we aim to check whether the uncertainty propagation model developed according to the Guide to the Expression of Uncertainty in Measurement (GUM) approach is valid; we present here a comparison of the GUM and Monte Carlo methods. This comparison is made to estimate the flatness deviation of a surface belonging to an industrial part and the uncertainty associated with the measurement result.

  2. Stochastic capture zone analysis of an arsenic-contaminated well using the generalized likelihood uncertainty estimator (GLUE) methodology

    Science.gov (United States)

    Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro

    2003-06-01

    In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were applied and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
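
    The sketch below shows the bare bones of the GLUE procedure referred to above: parameter sets are sampled from their prior ranges, each run is scored with a likelihood measure against observed heads, non-behavioural runs are rejected, and the remaining runs are likelihood-weighted. The toy head model, the exponential likelihood measure, and the behavioural threshold are illustrative choices, not those of the cited study.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy forward model standing in for the groundwater model: predicted heads as a
    # function of log10 hydraulic conductivity (purely illustrative).
    def simulate_heads(log10_K, n_obs=10):
        x = np.linspace(0, 1, n_obs)
        return 100.0 - 20.0 * (log10_K + 6.5) * x

    obs = simulate_heads(-6.5) + rng.normal(0, 0.5, 10)   # synthetic "observed" heads

    # GLUE: sample parameters from their prior ranges, score each run with a likelihood
    # measure, discard non-behavioural runs, weight the rest.
    n_runs = 5000
    log10_K = rng.uniform(-8.0, -5.0, n_runs)
    sse = np.array([np.sum((simulate_heads(k) - obs)**2) for k in log10_K])
    likelihood = np.exp(-sse / sse.min())                 # one possible informal likelihood measure
    behavioural = likelihood > 0.05                       # subjective behavioural threshold
    weights = likelihood[behavioural] / likelihood[behavioural].sum()

    post_mean = np.sum(weights * log10_K[behavioural])
    lo, hi = np.percentile(log10_K[behavioural], [5, 95])
    print("behavioural runs: %d of %d" % (behavioural.sum(), n_runs))
    print("weighted mean log10 K: %.2f, 90%% range: [%.2f, %.2f]" % (post_mean, lo, hi))
    ```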

  3. Scale-dependent estimates of the growth of forecast uncertainties in a global prediction system

    Science.gov (United States)

    Žagar, Nedjeljka; Horvat, Martin; Zaplotnik, Žiga; Magnusson, Linus

    2017-04-01

    The representation of the growth of forecast errors by simple parametric models has a long tradition in numerical weather prediction (NWP). A well-known three-parameter model introduced by A. Dalcher and E. Kalnay in 1987 describes the error growth rate as proportional to the amount by which the errors fall short of saturation. This standard model has traditionally been applied to estimate the root-mean-square errors of the geopotential height at the 500 hPa level in the extratropics. The two model parameters, the so-called α and β terms, have been used to discuss the chaotic error growth and the growth due to model deficiencies. The geopotential height field at 500 hPa is dominated by large-scale features and quasi-geostrophic balance, which is well analysed by data assimilation schemes. Small scales, which tend to grow at a faster rate than the larger scales of motion, have little variance at 500 hPa. It is thus interesting to provide a picture of forecast error growth as a function of scale, starting from the initial uncertainties simulated by operational ensemble prediction systems. We conducted such a study to assess the scale-dependent growth of forecast errors based on a 50-member global forecast ensemble of the European Centre for Medium-Range Weather Forecasts. Simulated forecast errors are fitted by a new parametric model with an analytical solution given by a combination of hyperbolic tangent functions. The new fit does not involve computation of the time derivatives of empirical data and it proves robust enough to reliably model the error growth across many scales. The results quantify a scale-dependent increase of the period of slow exponential growth. The asymptotic errors in each scale are computed from the model constants. According to the new fit, the range of useful prediction skill, estimated as the lead time at which the growth of simulated forecast errors reaches 60% of their asymptotic values, is around one week in large scales and 2-3 days at the 1000 km scale. These estimates …
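
    As a rough illustration of fitting a saturating, tanh-based error-growth curve to ensemble spread, the sketch below fits E(t) = E_max·tanh(a + b·t) to a synthetic error curve and derives the lead time at which 60% of the asymptotic error is reached. The functional form is a simple stand-in inspired by the description above; the actual parametric model of the study is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(5)

    # Hypothetical ensemble-spread "error" curve for one scale (arbitrary units).
    t = np.linspace(0, 10, 41)                                   # forecast lead time (days)
    e_true = 1.0 * np.tanh(0.1 + 0.35 * t)                       # saturating growth
    e_obs = e_true + rng.normal(0, 0.01, t.size)                 # simulated forecast errors

    # Parametric fit built from a hyperbolic tangent.
    def tanh_growth(t, e_max, a, b):
        return e_max * np.tanh(a + b * t)

    params, _ = curve_fit(tanh_growth, t, e_obs, p0=[1.0, 0.1, 0.3])
    e_max, a, b = params
    print("asymptotic error: %.2f" % e_max)

    # Lead time at which the error reaches 60% of its asymptote (a "useful skill" horizon).
    t60 = (np.arctanh(0.6) - a) / b
    print("time to 60%% of saturation: %.1f days" % t60)
    ```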

  4. Uncertainty reduction and parameter estimation of a distributed hydrological model with ground and remote-sensing data

    Science.gov (United States)

    Silvestro, F.; Gabellani, S.; Rudari, R.; Delogu, F.; Laiolo, P.; Boni, G.

    2015-04-01

    During the last decade, the opportunity and usefulness of using remote-sensing data in hydrology, hydrometeorology and geomorphology have become increasingly evident. Satellite-based products often offer the advantage of observing hydrologic variables in a distributed way, providing a different view with respect to traditional observations that can help with understanding and modeling the hydrological cycle. Moreover, remote-sensing data are fundamental in scarce-data environments. The use of satellite-derived digital elevation models (DEMs), which are now globally available at 30 m resolution (e.g., from the Shuttle Radar Topographic Mission, SRTM), has become standard practice in hydrologic model implementation, but other types of satellite-derived data are still underutilized. As a consequence, there is a need to develop and test techniques that allow the opportunities given by remote-sensing data to be exploited, parameterizing hydrological models and improving their calibration. In this work, Meteosat Second Generation land-surface temperature (LST) estimates and surface soil moisture (SSM), available from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) H-SAF, are used together with streamflow observations (S. N.) to calibrate the Continuum hydrological model, which computes such state variables in a prognostic mode. The first part of the work aims at proving that satellite observations can be exploited to reduce uncertainties in parameter calibration by reducing the parameter equifinality that can become an issue in forecast mode. In the second part, four parameter estimation strategies are implemented and tested in a comparative mode: (i) a multi-objective approach that includes both satellite and ground observations, which is an attempt to use different sources of data to add constraints to the parameters; (ii and iii) two approaches solely based on remotely sensed data that reproduce the case of a scarce data …

  5. Reduced uncertainty of regional scale CLM predictions of net carbon fluxes and leaf area indices with estimated plant-specific parameters

    Science.gov (United States)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2016-04-01

    Reliable estimates of carbon fluxes and states at regional scales are required to reduce uncertainties in regional carbon balance estimates and to support decision making in environmental politics. In this work the Community Land Model version 4.5 (CLM4.5-BGC) was applied at a high spatial resolution (1 km²) for the Rur catchment in western Germany. In order to improve the model-data consistency of net ecosystem exchange (NEE) and leaf area index (LAI) for this study area, five plant functional type (PFT)-specific CLM4.5-BGC parameters were estimated with time series of half-hourly NEE data for one year in 2011/2012, using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, a Markov Chain Monte Carlo (MCMC) approach. The parameters were estimated separately for four different plant functional types (needleleaf evergreen temperate tree, broadleaf deciduous temperate tree, C3-grass and C3-crop) at four different sites. The four sites are located inside or close to the Rur catchment. We evaluated modeled NEE for one year in 2012/2013 with NEE measured at seven eddy covariance sites in the catchment, including the four parameter estimation sites. Modeled LAI was evaluated by means of LAI derived from remotely sensed RapidEye images acquired on about 18 days in 2011/2012. Performance indices were based on a comparison between measurements and (i) a reference run with CLM default parameters, and (ii) a 60-instance CLM ensemble with parameters sampled from the DREAM posterior probability density functions (pdfs). The difference between the observed and simulated NEE sum was reduced by 23% when estimated parameters were used as input instead of the default parameters. The mean absolute difference between modeled and measured LAI was reduced by 59% on average. Simulated LAI was not only improved in terms of the absolute value but in some cases also in terms of the timing (beginning of vegetation onset), which was directly related to a substantial improvement of the NEE estimates in …

  6. On the Reliability of Optimization Results for Trigeneration Systems in Buildings, in the Presence of Price Uncertainties and Erroneous Load Estimation

    Directory of Open Access Journals (Sweden)

    Antonio Piacentino

    2016-12-01

    Full Text Available Cogeneration and trigeneration plants are widely recognized as promising technologies for increasing energy efficiency in buildings. However, their overall potential is scarcely exploited, due to the difficulties in achieving economic viability and the risk of investment related to uncertainties in future energy loads and prices. Several stochastic optimization models have been proposed in the literature to account for uncertainties, but these instruments share a common reliance on user-defined probability functions for each stochastic parameter. Since such functions are hard to predict, this paper proposes an analysis of the influence of erroneous estimation of the uncertain energy loads and prices on the optimal plant design and operation. With reference to a hotel building, a number of realistic scenarios are developed, exploring all the most frequent errors occurring in the estimation of energy loads and prices. Then, profit-oriented optimizations are performed for the examined scenarios, by means of a deterministic mixed integer linear programming algorithm. From a comparison between the achieved results, it emerges that: (i) the plant profitability is prevalently influenced by the average "spark spread" (i.e., the ratio between electricity and fuel prices) and, secondarily, by the shape of the daily price profiles; (ii) the "optimal sizes" of the main components are scarcely influenced by the daily load profiles, while they are more strictly related to the average "power to heat" and "power to cooling" ratios of the building.

  7. Fukushima Daiichi Unit 1 Uncertainty Analysis-Exploration of Core Melt Progression Uncertain Parameters-Volume II.

    Energy Technology Data Exchange (ETDEWEB)

    Denman, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) on the Fukushima Daiichi Unit 1 (1F1) accident progression with the MELCOR code. Volume I of the 1F1 UA discusses the physical modeling details and time history results of the UA. Volume II of the 1F1 UA discusses the statistical viewpoint. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). The goal of this work was to perform a focused evaluation of uncertainty in core damage progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, fraction of intact fuel, vessel lower head failure) and, in doing so, assess the applicability of traditional sensitivity analysis techniques.

  8. Optimal Parameter and Uncertainty Estimation of a Land Surface Model: Sensitivity to Parameter Ranges and Model Complexities

    Institute of Scientific and Technical Information of China (English)

    Youlong XIA; Zong-Liang YANG; Paul L. STOFFA; Mrinal K. SEN

    2005-01-01

    Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges as well as model complexities on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by applying Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic-type structure. The BSI is an uncertainty estimation technique based on Bayes' theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. Use of parameter ranges and model complexities has significant impacts on the frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.

  9. Metamodel for Efficient Estimation of Capacity-Fade Uncertainty in Li-Ion Batteries for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jaewook Lee

    2015-06-01

    Full Text Available This paper presents an efficient method for estimating capacity-fade uncertainty in lithium-ion batteries (LIBs) in order to integrate them into the battery-management system (BMS) of electric vehicles, which requires simple and inexpensive computation for successful application. The study uses the pseudo-two-dimensional (P2D) electrochemical model, which simulates the battery state by solving a system of coupled nonlinear partial differential equations (PDEs). The model parameters that are responsible for electrode degradation are identified and estimated, based on battery data obtained from the charge cycles. The Bayesian approach, with parameters estimated by probability distributions, is employed to account for uncertainties arising in the model and battery data. The Markov Chain Monte Carlo (MCMC) technique is used to draw samples from the distributions. The complex computations that solve a PDE system for each sample are avoided by employing a polynomial-based metamodel. As a result, the computational cost is reduced from 5.5 h to a few seconds, enabling the integration of the method into the vehicle BMS. Using this approach, the conservative bound of capacity fade can be determined for the vehicle in service, which represents the safety margin reflecting the uncertainty.
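
    The sketch below captures the two-step idea described above: an expensive model is replaced by a cheap polynomial metamodel, which is then used inside a Metropolis sampler so that no PDE solve is needed per MCMC step. The one-parameter expensive_model, the observation value, and all priors are hypothetical stand-ins; the cited study uses the multi-parameter P2D electrochemical model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Stand-in for the expensive electrochemical (P2D) model: capacity fade as a
    # function of a single degradation parameter theta (illustrative only).
    def expensive_model(theta):
        return 0.2 * np.tanh(3.0 * theta) + 0.01 * theta**2

    # 1) Build a cheap polynomial metamodel from a handful of expensive runs.
    theta_train = np.linspace(0.0, 1.0, 15)
    fade_train = expensive_model(theta_train)
    coeffs = np.polyfit(theta_train, fade_train, deg=4)
    surrogate = lambda theta: np.polyval(coeffs, theta)

    # 2) Use the surrogate inside a Metropolis sampler, so no PDE solve is needed per step.
    fade_obs, noise_sd = 0.12, 0.01      # "measured" capacity fade from charge-cycle data
    def log_post(theta):
        if not 0.0 <= theta <= 1.0:      # uniform prior on [0, 1]
            return -np.inf
        return -0.5 * ((fade_obs - surrogate(theta)) / noise_sd) ** 2

    samples, theta = [], 0.5
    lp = log_post(theta)
    for _ in range(20000):
        prop = theta + rng.normal(0, 0.05)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)

    post = np.array(samples[5000:])
    print("posterior mean theta: %.3f, 95%% CI: [%.3f, %.3f]"
          % (post.mean(), *np.percentile(post, [2.5, 97.5])))
    ```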

  10. Uncertainty of mass discharge estimates from contaminated sites using a fully Bayesian framework

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John

    2011-01-01

    A rigorous approach is presented for quantifying the uncertainty in the mass discharge across a multilevel control plane. The method accounts for: (1) conceptual model uncertainty through Bayesian model averaging, (2) heterogeneity through Bayesian geostatistics with an uncertain geostatistical model, and (3) measurement uncertainty. An ensemble of unconditional steady-state plume realizations is generated through Monte Carlo simulation. By use of the Kalman Ensemble Generator, these realizations are conditioned on site-specific data. Hereby a posterior ensemble of realizations, all honouring the measured data at the control plane, is generated for each of the conceptual models considered. The ensembles from …

  11. Resolving galaxies in time and space: II: Uncertainties in the spectral synthesis of datacubes

    CERN Document Server

    Fernandes, R Cid; Benito, R Garcia; Perez, E; de Amorim, A L; Sanchez, S F; Husemann, B; Barroso, J Falcon; Lopez-Fernandez, R; Sanchez-Blazquez, P; Asari, N Vale; Vazdekis, A; Walcher, C J; Mast, D

    2013-01-01

    In a companion paper we have presented many products derived from the application of the spectral synthesis code STARLIGHT to datacubes from the CALIFA survey, including 2D maps of stellar population properties and 1D averages in the temporal and spatial dimensions. Here we evaluate the uncertainties in these products. Uncertainties due to noise and spectral shape calibration errors and to the synthesis method are investigated by means of a suite of simulations based on 1638 CALIFA spectra for NGC 2916, with perturbation amplitudes gauged in terms of the expected errors. A separate study was conducted to assess uncertainties related to the choice of evolutionary synthesis models. We compare results obtained with the Bruzual & Charlot models, a preliminary update of them, and a combination of spectra derived from the Granada and MILES models. About 100k CALIFA spectra are used in this comparison. Noise and shape-related errors at the level expected for CALIFA propagate to 0.10-0.15 dex uncertainties in st...

  12. Estimation and Uncertainty Analysis of Flammability Properties for Computer-aided molecular design of working fluids for thermodynamic cycles

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    The assessment of novel working fluids relies on accurate property data. Flammability data such as the lower and upper flammability limits (LFL and UFL) play an important role in quantifying the risk of fire and explosion. For novel working fluid candidates, experimental values are not available for the safety analysis. In this case, property prediction models such as group contribution (GC) models can estimate flammability data. The estimation needs to be accurate, reliable and as time-efficient as possible [1]. However, GC property prediction methods frequently lack a rigorous uncertainty analysis. Hence …

  13. Screening-level estimates of mass discharge uncertainty from point measurement methods

    Science.gov (United States)

    The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain w...

  14. Impact of uncertainties in discharge determination on the parameter estimation and performance of a hydrological model

    NARCIS (Netherlands)

    Tillaart, van den S.P.M.; Booij, M.J.; Krol, M.S

    2013-01-01

    Uncertainties in discharge determination may have serious consequences for hydrological modelling and resulting discharge predictions used for flood forecasting, climate change impact assessment and reservoir operation. The aim of this study is to quantify the effect of discharge errors on parameter

  15. Random Forests as a tool for estimating uncertainty at pixel-level in SAR image classification

    DEFF Research Database (Denmark)

    Loosvelt, Lien; Peters, Jan; Skriver, Henning

    2012-01-01

    We introduce Random Forests for the probabilistic mapping of vegetation from high-dimensional remote sensing data and present a comprehensive methodology to assess and analyze classification uncertainty based on the local probabilities of class membership. We apply this method to SAR image data and show that classification uncertainty can be easily assessed when using the Random Forests algorithm.
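
    A minimal example of the idea above: a Random Forest classifier returns per-pixel class-membership probabilities (tree vote fractions), and the normalised entropy of those probabilities serves as a pixel-level uncertainty measure. The synthetic feature matrix stands in for multi-channel SAR data; the feature counts, classes, and entropy threshold are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(7)

    # Synthetic stand-in for multi-channel SAR pixels: 500 pixels, 6 polarimetric features,
    # 3 vegetation classes (purely illustrative, not the data of the cited study).
    n_pix, n_feat, n_class = 500, 6, 3
    y = rng.integers(0, n_class, n_pix)
    X = rng.normal(0, 1, (n_pix, n_feat)) + y[:, None] * 0.8   # class-dependent shift

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Per-pixel class-membership probabilities from the vote fractions of the trees.
    proba = rf.predict_proba(X)

    # One simple uncertainty measure: normalised Shannon entropy of the class probabilities
    # (0 = all trees agree, 1 = votes spread evenly over the classes).
    eps = 1e-12
    entropy = -np.sum(proba * np.log(proba + eps), axis=1) / np.log(n_class)

    print("mean per-pixel uncertainty:", entropy.mean().round(3))
    print("fraction of 'uncertain' pixels (entropy > 0.5):", (entropy > 0.5).mean().round(3))
    ```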

  16. Estimation of the measurement uncertainty in quantitative determination of ketamine and norketamine in urine using a one-point calibration method.

    Science.gov (United States)

    Ma, Yi-Chun; Wang, Che-Wei; Hung, Sih-Hua; Chang, Yan-Zin; Liu, Chia-Reiy; Her, Guor-Rong

    2012-09-01

    An approach was proposed for the estimation of measurement uncertainty for analytical methods based on one-point calibration. The proposed approach is similar to the popular multiple-point calibration approach. However, the standard deviation of calibration was estimated externally. The approach was applied to the estimation of measurement uncertainty for the quantitative determination of ketamine (K) and norketamine (NK) at a 100 ng/mL threshold concentration in urine. In addition to uncertainty due to calibration, sample analysis was the other major source of uncertainty. To include the variation due to matrix effect and temporal effect in sample analysis, different blank urines were spiked with K and NK and analyzed at equal time intervals within and between batches. The expanded uncertainties (k = 2) were estimated to be 10 and 8 ng/mL for K and NK, respectively.
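
    A small sketch of how the two uncertainty contributions described above can be combined into an expanded uncertainty with coverage factor k = 2. The two standard-uncertainty values are invented for illustration and merely chosen so that the result lands near the 10 ng/mL figure reported for ketamine; they are not the study's actual uncertainty budget.

    ```python
    import numpy as np

    # Illustrative numbers only; the uncertainty budget of the cited study is not reproduced.
    u_calibration = 3.5      # ng/mL, standard uncertainty from the (externally estimated)
                             # standard deviation of the one-point calibration
    u_analysis = 3.6         # ng/mL, standard uncertainty from within- and between-batch
                             # variation of spiked blank urines (matrix and temporal effects)

    u_combined = np.sqrt(u_calibration**2 + u_analysis**2)   # root-sum-of-squares combination
    k = 2                                                     # coverage factor (~95% confidence)
    U_expanded = k * u_combined

    print("combined standard uncertainty: %.1f ng/mL" % u_combined)
    print("expanded uncertainty (k = 2):  %.0f ng/mL" % U_expanded)
    ```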

  17. Optimized Clustering Estimators for BAO Measurements Accounting for Significant Redshift Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Ashley J. [Portsmouth U., ICG; Banik, Nilanjan [Fermilab; Avila, Santiago [Madrid, IFT; Percival, Will J. [Portsmouth U., ICG; Dodelson, Scott [Fermilab; Garcia-Bellido, Juan [Madrid, IFT; Crocce, Martin [ICE, Bellaterra; Elvin-Poole, Jack [Jodrell Bank; Giannantonio, Tommaso [Cambridge U., KICC; Manera, Marc [Cambridge U., DAMTP; Sevilla-Noarbe, Ignacio [Madrid, CIEMAT

    2017-05-15

    We determine an optimized clustering statistic to be used for galaxy samples with significant redshift uncertainty, such as those that rely on photometric redshifts. To do so, we study the BAO information content as a function of the orientation of galaxy clustering modes with respect to their angle to the line-of-sight (LOS). The clustering along the LOS, as observed in a redshift space with significant redshift uncertainty, has contributions from clustering modes with a range of orientations with respect to the true LOS. For redshift uncertainty σ_z ≥ 0.02(1+z) we find that while the BAO information is confined to transverse clustering modes in the true space, it is spread nearly evenly in the observed space. Thus, measuring clustering in terms of the projected separation (regardless of the LOS) is an efficient and nearly lossless compression of the signal for σ_z ≥ 0.02(1+z). For reduced redshift uncertainty, a more careful consideration is required. We then use more than 1700 realizations of galaxy simulations mimicking the Dark Energy Survey Year 1 sample to validate our analytic results and optimized analysis procedure. We find that using the correlation function binned in projected separation, we can achieve uncertainties that are within 10 per cent of those predicted by Fisher matrix forecasts. We predict that DES Y1 should achieve a 5 per cent distance measurement using our optimized methods. We expect the results presented here to be important for any future BAO measurements made using photometric redshift data.

  18. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia (Sandia National Laboratories, Livermore, CA); Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Eddy, John P.

    2011-12-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.

  19. Estimation of Environment-Related Properties of Chemicals for Design of Sustainable Processes: Development of Group-Contribution(+) (GC(+)) Property Models and Uncertainty Analysis

    OpenAIRE

    Hukkerikar, Amol; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M.; Sin, Gürkan; Gani, Rafiqul

    2012-01-01

    The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI)) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of pro...

  20. A bayesian approach for determining velocity and uncertainty estimates from seismic cone penetrometer testing or vertical seismic profiling data

    Science.gov (United States)

    Pidlisecky, A.; Haines, S.S.

    2011-01-01

    Conventional processing methods for seismic cone penetrometer data present several shortcomings, most notably the absence of a robust velocity model uncertainty estimate. We propose a new seismic cone penetrometer testing (SCPT) data-processing approach that employs Bayesian methods to map measured data errors into quantitative estimates of model uncertainty. We first calculate travel-time differences for all permutations of seismic trace pairs. That is, we cross-correlate each trace at each measurement location with every trace at every other measurement location to determine travel-time differences that are not biased by the choice of any particular reference trace and to thoroughly characterize data error. We calculate a forward operator that accounts for the different ray paths for each measurement location, including refraction at layer boundaries. We then use a Bayesian inversion scheme to obtain the most likely slowness (the reciprocal of velocity) and a distribution of probable slowness values for each model layer. The result is a velocity model that is based on correct ray paths, with uncertainty bounds that are based on the data error. © NRC Research Press 2011.
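
    The sketch below imitates the first two ingredients described above: travel-time differences are formed for all permutations of receiver pairs, so no single reference trace biases the result, and interval slownesses are then recovered by a linear inversion. Ordinary least squares stands in for the Bayesian step (which would also yield posterior uncertainty for each layer), vertical ray paths are assumed, and the two-layer velocity model and noise level are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Synthetic SCPT-like geometry: 10 receiver depths, 1 m apart, with a two-layer
    # interval-slowness model (values are illustrative, not from the cited study).
    n_rx = 10
    interval_slowness = np.r_[np.full(4, 1 / 150.0), np.full(n_rx - 1 - 4, 1 / 300.0)]  # s/m
    t_true = 1 / 150.0 + np.r_[0.0, np.cumsum(interval_slowness)]   # arrival time at each receiver
    t_obs = t_true + rng.normal(0, 2e-4, n_rx)                      # noisy cross-correlation picks

    # Travel-time differences for all permutations of receiver pairs, so that no single
    # reference trace biases the estimate (vertical rays assumed; refraction is ignored here).
    rows, data = [], []
    for i in range(n_rx):
        for j in range(n_rx):
            if i == j:
                continue
            a = np.zeros(n_rx - 1)
            lo, hi = sorted((i, j))
            a[lo:hi] = 1.0 if j > i else -1.0    # t_j - t_i = signed sum of interval slownesses
            rows.append(a)
            data.append(t_obs[j] - t_obs[i])
    A, d = np.array(rows), np.array(data)

    # Ordinary least squares as a stand-in for the Bayesian inversion.
    m, *_ = np.linalg.lstsq(A, d, rcond=None)
    print("estimated interval velocities (m/s):", np.round(1.0 / m, 0))
    ```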

  1. Use of Atmospheric Budget to Reduce Uncertainty in Estimated Water Availability over South Asia from Different Reanalyses.

    Science.gov (United States)

    Sebastian, Dawn Emil; Pathak, Amey; Ghosh, Subimal

    2016-07-08

    Disagreements across different reanalyses over South Asia result in uncertainty in the assessment of water availability, which is computed as the difference between Precipitation and Evapotranspiration (P-E). Here, we compute P-E directly from the atmospheric budget, using the divergence of the moisture flux, for different reanalyses and find improved correlation with observed values of P-E acquired from station and satellite data. We also find reduced closure terms for the water cycle computed with the atmospheric budget, analysed over the South Asian landmass, compared to those obtained with individual values of P and E. The P-E value derived with the atmospheric budget is also more consistent with the energy budget when top-of-atmosphere radiation is used. For analysing the water cycle, we use runoff from the Global Land Data Assimilation System and water storage from the Gravity Recovery and Climate Experiment. We find improvements in agreement across different reanalyses, in terms of inter-annual cross correlation, when the atmospheric budget is used to estimate P-E, and hence recommend its use for estimating water availability in South Asia in order to reduce uncertainty. Our results on water availability with reduced uncertainty over highly populated, monsoon-driven South Asia will be useful for water management and agricultural decision making.

  2. A state-space modeling approach to estimating canopy conductance and associated uncertainties from sap flux density data.

    Science.gov (United States)

    Bell, David M; Ward, Eric J; Oishi, A Christopher; Oren, Ram; Flikkema, Paul G; Clark, James S

    2015-07-01

    Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as canopy conductance and transpiration. To address this need, we developed a hierarchical Bayesian State-Space Canopy Conductance (StaCC) model linking canopy conductance and transpiration to tree sap flux density from a 4-year experiment in the North Carolina Piedmont, USA. Our model builds on existing ecophysiological knowledge, but explicitly incorporates uncertainty in canopy conductance, internal tree hydraulics and observation error to improve estimation of canopy conductance responses to atmospheric drought (i.e., vapor pressure deficit), soil drought (i.e., soil moisture) and above canopy light. Our statistical framework not only predicted sap flux observations well, but it also allowed us to simultaneously gap-fill missing data as we made inference on canopy processes, marking a substantial advance over traditional methods. The predicted and observed sap flux data were highly correlated (mean sensor-level Pearson correlation coefficient = 0.88). Variations in canopy conductance and transpiration associated with environmental variation across days to years were many times greater than the variation associated with model uncertainties. Because some variables, such as vapor pressure deficit and soil moisture, were correlated at the scale of days to weeks, canopy conductance responses to individual environmental variables were difficult to interpret in isolation. Still, our results highlight the importance of accounting for uncertainty in models of ecophysiological and ecosystem function where the process of interest, canopy conductance in this case, is not observed directly. The StaCC modeling

  3. Model-based estimation of the global carbon budget and its uncertainty from carbon dioxide and carbon isotope records

    Energy Technology Data Exchange (ETDEWEB)

    Kheshgi, Haroon S. [Corporate Research Laboratories, Exxon Research and Engineering Company, Annandale, New Jersey (United States); Jain, Atul K. [Department of Atmospheric Sciences, University of Illinois, Urbana (United States); Wuebbles, Donald J. [Department of Atmospheric Sciences, University of Illinois, Urbana (United States)

    1999-12-27

    A global carbon cycle model is used to reconstruct the carbon budget, balancing emissions from fossil fuel and land use with carbon uptake by the oceans and the terrestrial biosphere. We apply Bayesian statistics to estimate uncertainty of carbon uptake by the oceans and the terrestrial biosphere based on carbon dioxide and carbon isotope records, and prior information on model parameter probability distributions. This results in a quantitative reconstruction of past carbon budget and its uncertainty derived from an explicit choice of model, data-based constraints, and prior distribution of parameters. Our estimated ocean sink for the 1980s is 17 ± 7 Gt C (90% confidence interval) and is comparable to the estimate of 20 ± 8 Gt C given in the recent Intergovernmental Panel on Climate Change assessment [Schimel et al., 1996]. Constraint choice is tested to determine which records have the most influence over estimates of the past carbon budget; records individually (e.g., bomb-radiocarbon inventory) have little effect since there are other records which form similar constraints. © 1999 American Geophysical Union.

  4. Estimation of pressure-particle velocity impedance measurement uncertainty using the Monte Carlo method.

    Science.gov (United States)

    Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A

    2011-07-01

    The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the absorption coefficient determined using the PU technique is explored by applying the Monte Carlo method. It is shown that because of the uncertainty, it is particularly difficult to measure samples with low absorption and that difficulties associated with the localization of the acoustic centers of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as the errors in the transfer function between pressure and particle velocity do.

  5. Cost benchmarking of railway projects in Europe – dealing with uncertainties in cost estimates

    DEFF Research Database (Denmark)

    Trabo, Inara

    Past experiences in the construction of high-speed railway projects demonstrate either positive or negative financial outcomes of the actual project budget. Usually some uncertainty value is included in the initial budget calculations; uncertainty is related, among other things, to increases in material prices. In transport infrastructure projects, 9 projects out of 10 came out with budget overruns. An example of cost overruns is High Speed 1 in the UK, the railway line between London and the British end of the Channel Tunnel: the project was delayed for 11 months and final construction costs escalated to 80 … By contrast, Italian projects have productive experiences in constructing and operating high-speed railway lines. The case study for this research is the first Danish high-speed railway line, "The New Line Copenhagen-Ringsted". The project's aim is to avoid cost overruns and even lower the final budget outcome...

  6. Predictive Uncertainty Estimation on a Precipitation and Temperature Reanalysis Ensemble for Shigar Basin, Central Karakoram

    Directory of Open Access Journals (Sweden)

    Paolo Reggiani

    2016-06-01

    Full Text Available The Upper Indus Basin (UIB) and the Karakoram Range are the subject of ongoing hydro-glaciological studies to investigate possible glacier mass balance shifts due to climatic change. Because of the high altitude and remote location, the Karakoram Range is difficult to access and, therefore, remains scarcely monitored. In situ precipitation and temperature measurements are only available at valley locations. High-altitude observations exist only for very limited periods. Gridded precipitation and temperature data generated from the spatial interpolation of in situ observations are unreliable for this region because of the extreme topography. Besides satellite measurements, which offer spatial coverage but underestimate precipitation in this area, atmospheric reanalyses remain one of the few alternatives. Here, we apply a proven approach to quantify the uncertainty associated with an ensemble of monthly precipitation and temperature reanalysis data for 1979–2009 in Shigar Basin, Central Karakoram. A Model-Conditional Processor (MCP) of uncertainty is calibrated on precipitation and temperature in situ data measured in the proximity of the study region. An ensemble of independent reanalyses is processed to determine the predictive uncertainty of monthly observations. As is to be expected, the informative gain achieved by post-processing temperature reanalyses is considerable, whereas significantly less gain is achieved for precipitation post-processing. The proposed approach provides a systematic assessment procedure for predictive uncertainty through probabilistic weighting of multiple re-forecasts, which are bias-corrected on ground observations. The approach also supports an educated gap-filling reconstruction of missing in situ observations.

  7. Uncertainty in Various Habitat Suitability Models and Its Impact on Habitat Suitability Estimates for Fish

    Directory of Open Access Journals (Sweden)

    Yu-Pin Lin

    2015-07-01

    Full Text Available Species distribution models (SDMs) are extensively used to project habitat suitability of species in stream ecological studies. Owing to complex sources of uncertainty, such models may yield projections with varying degrees of uncertainty. To better understand projected spatial distributions and the variability between habitat suitability projections, this study uses five SDMs that are based on the outputs of a two-dimensional hydraulic model to project the suitability of habitats and to evaluate the degree of variability originating from both differing model types and the split-sample procedure. The habitat suitability index (HSI) of each species is based on two stream flow variables, current velocity (V) and water depth (D), as well as the heterogeneity of these flow conditions as quantified by the information entropy of V and D. The six SDM approaches used to project fish abundance, as represented by HSI, included two stochastic models, the generalized linear model (GLM) and the generalized additive model (GAM); three machine learning models, the support vector machine (SVM), random forest (RF) and the artificial neural network (ANN); and an ensemble model (the average of the preceding five models). The target species Sicyopterus japonicus was found to prefer habitats with high current velocities. The relationship between mesohabitat diversity and fish abundance was indicated by the trends in information entropy and weighted usable area (WUA) over the study area. This study proposes a method for quantifying habitat suitability, and for assessing the uncertainties in HSI and WUA that are introduced by the various SDMs and samples. This study also demonstrated both the merits of the ensemble modeling approach and the necessity of addressing model uncertainty.

  8. A Carbon Monitoring System Approach to US Coastal Wetland Carbon Fluxes: Progress Towards a Tier II Accounting Method with Uncertainty Quantification

    Science.gov (United States)

    Windham-Myers, L.; Holmquist, J. R.; Bergamaschi, B. A.; Byrd, K. B.; Callaway, J.; Crooks, S.; Drexler, J. Z.; Feagin, R. A.; Ferner, M. C.; Gonneea, M. E.; Kroeger, K. D.; Megonigal, P.; Morris, J. T.; Schile, L. M.; Simard, M.; Sutton-Grier, A.; Takekawa, J.; Troxler, T.; Weller, D.; Woo, I.

    2015-12-01

    Despite their high rates of long-term carbon (C) sequestration when compared to upland ecosystems, coastal C accounting is only recently receiving the attention of policy makers and carbon markets. Assessing accuracy and uncertainty in net C flux estimates requires both direct and derived measurements based on both short and long term dynamics in key drivers, particularly soil accretion rates and soil organic content. We are testing the ability of remote sensing products and national scale datasets to estimate biomass and soil stocks and fluxes over a wide range of spatial and temporal scales. For example, the 2013 Wetlands Supplement to the 2006 IPCC GHG national inventory reporting guidelines requests information on development of Tier I-III reporting, which express increasing levels of detail. We report progress toward development of a Carbon Monitoring System for "blue carbon" that may be useful for IPCC reporting guidelines at Tier II levels. Our project uses a current dataset of publically available and contributed field-based measurements to validate models of changing soil C stocks, across a broad range of U.S. tidal wetland types and landuse conversions. Additionally, development of biomass algorithms for both radar and spectral datasets will be tested and used to determine the "price of precision" of different satellite products. We discuss progress in calculating Tier II estimates focusing on variation introduced by the different input datasets. These include the USFWS National Wetlands Inventory, NOAA Coastal Change Analysis Program, and combinations to calculate tidal wetland area. We also assess the use of different attributes and depths from the USDA-SSURGO database to map soil C density. Finally, we examine the relative benefit of radar, spectral and hybrid approaches to biomass mapping in tidal marshes and mangroves. While the US currently plans to report GHG emissions at a Tier I level, we argue that a Tier II analysis is possible due to national

  9. Uncertainty analysis and validation of the estimation of effective hydraulic properties at the Darcy scale

    Science.gov (United States)

    Mesgouez, A.; Buis, S.; Ruy, S.; Lefeuve-Mesgouez, G.

    2014-05-01

    The determination of the hydraulic properties of heterogeneous soils or porous media remains challenging. In the present study, we focus on determining the effective properties of heterogeneous porous media at the Darcy scale, with an analysis of their uncertainties. As a first step, experimental measurements of the hydraulic properties of each component of the heterogeneous medium are obtained. The properties of the effective medium, representing an equivalent homogeneous material, are determined numerically by simulating water flow in a three-dimensional representation of the heterogeneous medium, under steady-state scenarios and using its component properties. One of the major aspects of this study is to take into account the uncertainties of these properties in the computation and evaluation of the effective properties. This is done using a bootstrap method. Numerical evaporation experiments are conducted both on the heterogeneous and on the effective homogeneous materials to evaluate the effectiveness of the proposed approach. First, the impact of the uncertainties of the component properties on the simulated water matric potential is found to be high for the heterogeneous material configuration. Second, it is shown that the strategy developed herein leads to a reduction of this impact. Finally, the agreement between the mean simulations for the two configurations confirms the suitability of the homogenization approach, even in the case of dynamic scenarios. Although it is applied to green roof substrates, a two-component medium composed of bark compost and pozzolan used in the construction of buildings, the methodology proposed in this study is generic.
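
    A compact illustration of the bootstrap idea described above: replicated component measurements are resampled with replacement, an effective property is recomputed for each resample, and the spread of the resampled effective values is read as its uncertainty. The volume-weighted geometric mean used as the "homogenisation" step and all measurement values are stand-in assumptions; the study itself obtains the effective properties from 3-D flow simulations.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Hypothetical stand-in for the homogenisation step: the effective saturated conductivity
    # of the two-component medium is computed here as a volume-weighted geometric mean.
    def effective_K(K_bark, K_pozzolan, f_bark=0.4):
        return K_bark**f_bark * K_pozzolan**(1 - f_bark)

    # Replicated laboratory measurements of each component's conductivity (illustrative values, m/s).
    K_bark_meas = np.array([2.1e-4, 1.7e-4, 2.6e-4, 1.9e-4, 2.3e-4])
    K_pozz_meas = np.array([8.0e-5, 6.5e-5, 9.1e-5, 7.2e-5, 8.6e-5])

    # Bootstrap: resample the component measurements with replacement, recompute the effective
    # property each time, and read the spread of the results as its uncertainty.
    n_boot = 5000
    K_eff = np.empty(n_boot)
    for b in range(n_boot):
        kb = rng.choice(K_bark_meas, size=K_bark_meas.size, replace=True).mean()
        kp = rng.choice(K_pozz_meas, size=K_pozz_meas.size, replace=True).mean()
        K_eff[b] = effective_K(kb, kp)

    lo, hi = np.percentile(K_eff, [2.5, 97.5])
    print("effective K: %.2e m/s, 95%% bootstrap interval: [%.2e, %.2e]" % (K_eff.mean(), lo, hi))
    ```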

  10. Uncertainty estimation in one-dimensional heat transport model for heterogeneous porous medium.

    Science.gov (United States)

    Chang, Ching-Min; Yeh, Hund-Der

    2014-01-01

    In many practical applications, the rates of groundwater recharge and discharge are determined based on the analytical solution developed by Bredehoeft and Papadopulos (1965) to the one-dimensional steady-state heat transport equation. Groundwater flow processes are affected by the heterogeneity of subsurface systems, the details of which cannot be anticipated precisely. There exists a great deal of uncertainty (variability) associated with the application of Bredehoeft and Papadopulos' (1965) solution to field-scale heat transport problems. However, the quantification of uncertainty involved in such applications has so far not been addressed, which is the objective of this work. In addition, the influence of the statistical properties of the log hydraulic conductivity field on the variability of the temperature field in a heterogeneous aquifer is also investigated. The results of the analysis demonstrate that the variability (or uncertainty) in the temperature field increases with the correlation scale of the log hydraulic conductivity covariance function and that the variability of the temperature field also depends positively on position.
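
    The sketch below implements the Bredehoeft and Papadopulos (1965) steady-state solution named above and shows the typical inverse use: fitting the vertical Darcy flux q_z to a noisy temperature profile. The thermal properties, boundary temperatures, profile depth, and noise level are illustrative values, not data from the cited work.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Bredehoeft & Papadopulos (1965) steady-state solution for 1-D vertical heat transport:
    #   (T(z) - T0) / (TL - T0) = (exp(beta * z / L) - 1) / (exp(beta) - 1)
    # with beta = rho_c_w * q_z * L / k, where q_z is the vertical Darcy flux and k the
    # thermal conductivity of the saturated medium. Parameter values below are illustrative.
    rho_c_w = 4.18e6      # volumetric heat capacity of water (J/m^3/K)
    k_thermal = 2.0       # thermal conductivity of the saturated sediment (W/m/K)
    L = 50.0              # thickness of the profile (m)
    T0, TL = 12.0, 15.0   # temperatures at the top and bottom boundaries (deg C)

    def temperature_profile(z, q_z):
        beta = rho_c_w * q_z * L / k_thermal
        return T0 + (TL - T0) * np.expm1(beta * z / L) / np.expm1(beta)

    # Forward use: temperature profile for a downward Darcy flux of 1e-8 m/s.
    z = np.linspace(0, L, 11)
    print(np.round(temperature_profile(z, 1e-8), 2))

    # Inverse use (as in practice): fit q_z to noisy temperature observations.
    rng = np.random.default_rng(10)
    T_obs = temperature_profile(z, 1e-8) + rng.normal(0, 0.05, z.size)
    q_fit, q_cov = curve_fit(temperature_profile, z, T_obs, p0=[5e-9])
    print("estimated q_z: %.2e m/s (+/- %.1e)" % (q_fit[0], np.sqrt(q_cov[0, 0])))
    ```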

  11. Comparisons and Uncertainty in Fat and Adipose Tissue Estimation Techniques: The Northern Elephant Seal as a Case Study.

    Directory of Open Access Journals (Sweden)

    Lisa K Schwarz

    Full Text Available Fat mass and body condition are important metrics in bioenergetics and physiological studies. They can also link foraging success with demographic rates, making them key components of models that predict population-level outcomes of environmental change. Therefore, it is important to incorporate uncertainty in physiological indicators if results will lead to species management decisions. Maternal fat mass in elephant seals (Mirounga spp.) can predict reproductive rate and pup survival, but no one has quantified or identified the sources of uncertainty for the two fat mass estimation techniques (labeled-water and truncated cones). The current cones method can provide estimates of proportion adipose tissue in adult females and proportion fat of juveniles in northern elephant seals (M. angustirostris) comparable to labeled-water methods, but it does not work for all cases or species. We reviewed components and assumptions of the technique via measurements of seven early-molt and seven late-molt adult females. We show that seals are elliptical on land, rather than the assumed circular shape, and skin may account for a high proportion of what is often defined as blubber. Also, blubber extends past the neck-to-pelvis region, and comparisons of new and old ultrasound instrumentation indicate previous measurements of sculp thickness may be biased low. Accounting for such differences, and incorporating new measurements of blubber density and proportion of fat in blubber, we propose a modified cones method that can isolate blubber from non-blubber adipose tissue and separate fat into skin, blubber, and core compartments. Lastly, we found that adipose tissue and fat estimates using tritiated water may be biased high during the early molt. Both the tritiated water and modified cones methods had high, but reducible, uncertainty. The improved cones method for estimating body condition allows for more accurate quantification of the various tissue masses and may

  12. Monitoring Process Water Quality Using Near Infrared Spectroscopy and Partial Least Squares Regression with Prediction Uncertainty Estimation.

    Science.gov (United States)

    Skou, Peter B; Berg, Thilo A; Aunsbjerg, Stina D; Thaysen, Dorrit; Rasmussen, Morten A; van den Berg, Frans

    2017-03-01

    Reuse of process water in dairy ingredient production, and food processing in general, opens the possibility for sustainable water regimes. Membrane filtration processes are an attractive source of process water recovery since the technology is already utilized in the dairy industry and its use is expected to grow considerably. At Arla Foods Ingredients (AFI), permeate from a reverse osmosis polisher filtration unit is sought to be reused as process water, replacing the intake of potable water. However, as for all dairy and food producers, the process water quality must be monitored continuously to ensure food safety. In the present investigation we found urea to be the main organic compound, which potentially could represent a microbiological risk. Near infrared spectroscopy (NIRS) in combination with multivariate modeling has a long-standing reputation as a real-time measurement technology in quality assurance. Urea was quantified using NIRS and partial least squares regression (PLS) in the concentration range 50-200 ppm (RMSEP = 12 ppm, R2 = 0.88) in laboratory settings with potential for on-line application. A drawback of using NIRS together with PLS is that uncertainty estimates are seldom reported, although they are essential for establishing real-time risk assessment. In a multivariate regression setting, sample-specific prediction errors are needed, which complicates the uncertainty estimation. We give a straightforward strategy for implementing an already developed, but seldom used, method for estimating sample-specific prediction uncertainty. We also suggest an improvement. Comparing independent reference analyses with the sample-specific prediction error estimates showed that the method worked on industrial samples when the model was appropriate and unbiased, and was simple to implement.
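
    The specific sample-specific error formula used by the authors is not given in this abstract. One common approximation (offered here only as an illustration, not as the paper's method) scales the calibration residual error by the leverage of the new sample in the latent-variable space, so that atypical spectra receive larger error bars. A minimal sketch with synthetic spectra and scikit-learn's PLSRegression:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)

    # Synthetic "spectra": 60 calibration samples, 100 wavelengths, with
    # urea-like concentrations in the 50-200 ppm range (illustrative only).
    n_samples, n_wavelengths = 60, 100
    conc = rng.uniform(50, 200, n_samples)
    pure_band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 40) / 8.0) ** 2)
    X = np.outer(conc, pure_band) + rng.normal(0, 0.5, (n_samples, n_wavelengths))
    y = conc

    pls = PLSRegression(n_components=3).fit(X, y)
    rmsec = np.sqrt(np.mean((y - pls.predict(X).ravel()) ** 2))

    # Leverage of a new sample in the latent-variable (score) space
    T = pls.transform(X)                      # calibration scores
    TtT_inv = np.linalg.inv(T.T @ T)

    x_new = (125.0 * pure_band + rng.normal(0, 0.5, n_wavelengths)).reshape(1, -1)
    t_new = pls.transform(x_new)
    h_new = 1.0 / n_samples + (t_new @ TtT_inv @ t_new.T).item()

    y_new = pls.predict(x_new).item()
    s_new = rmsec * np.sqrt(1.0 + h_new)      # leverage-scaled prediction error
    print(f"prediction = {y_new:.1f} ppm, approx. sample-specific error = {s_new:.1f} ppm")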

  13. Towards uncertainty quantification and parameter estimation for Earth system models in a component-based modeling framework

    Science.gov (United States)

    Peckham, Scott D.; Kelbert, Anna; Hill, Mary C.; Hutton, Eric W. H.

    2016-05-01

    Component-based modeling frameworks make it easier for users to access, configure, couple, run and test numerical models. However, they do not typically provide tools for uncertainty quantification or data-based model verification and calibration. To better address these important issues, modeling frameworks should be integrated with existing, general-purpose toolkits for optimization, parameter estimation and uncertainty quantification. This paper identifies and then examines the key issues that must be addressed in order to make a component-based modeling framework interoperable with general-purpose packages for model analysis. As a motivating example, one of these packages, DAKOTA, is applied to a representative but nontrivial surface process problem of comparing two models for the longitudinal elevation profile of a river to observational data. Results from a new mathematical analysis of the resulting nonlinear least squares problem are given and then compared to results from several different optimization algorithms in DAKOTA.

  14. Mathematical modeling of the processes of estimating reserves of iron ore raw materials in conditions of uncertainty

    Directory of Open Access Journals (Sweden)

    N. N. Nekrasova

    2016-01-01

    Full Text Available Summary. This article proposes estimating the technological parameters of mining and metallurgical production (iron ore reserves), given fuzzy-valued data under conditions of uncertainty, using the balance and industrial methods of ore reserve calculation. Because modelling of the ore extraction processes involves equation parameters that contain variables with different kinds of uncertainty, it is preferable to express all of the information in the single formal language of fuzzy set theory. The article therefore proposes a model for calculating and evaluating iron ore reserves by different methods under uncertainty in the geological information, on the basis of the theory of fuzzy sets. In this case the undefined values are interpreted as intentionally "fuzzy", since this interpretation corresponds to the real industrial situation more closely than treating such quantities as random. It is taken into account that the probabilistic approach identifies uncertainty with randomness, whereas in practice the basic nature of the uncertainty in the calculation of iron ore reserves is vagueness. Under the proposed approach, each fuzzy parameter has a corresponding membership function; to determine it, a general algorithm is proposed that obtains the membership function resulting from algebraic operations on arbitrary membership functions by an inverse numerical method. Because many models describe the same production process by different methods (for example, the balance model or the industrial model) and under different assumptions, it is proposed to reconcile such models on the basis of a model for the aggregation of heterogeneous information. For matching this kind of information, generalizing it and adjusting the outcome parameters, it is expedient to use the apparatus of fuzzy set theory, which allows quantitative characteristics of imprecisely specified parameters to be obtained and make the

  15. Calibration-induced uncertainty of the EPIC model to estimate climate change impact on global maize yield

    Science.gov (United States)

    Xiong, Wei; Skalský, Rastislav; Porter, Cheryl H.; Balkovič, Juraj; Jones, James W.; Yang, Di

    2016-09-01

    Understanding the interactions between agricultural production and climate is necessary for sound decision-making in climate policy. Gridded and high-resolution crop simulation has emerged as a useful tool for building this understanding. Large uncertainty exists in this utilization, obstructing its capacity as a tool to devise adaptation strategies. Increasing focus has been given to sources of uncertainty in climate scenarios, input data, and model structure, but uncertainties due to model parameters or calibration are still unknown. Here, we use publicly available geographical data sets as input to the Environmental Policy Integrated Climate model (EPIC) for simulating global-gridded maize yield. Impacts of climate change are assessed up to the year 2099 under a climate scenario generated by HadGEM2-ES under RCP 8.5. We apply five strategies by shifting one specific parameter in each simulation to calibrate the model and understand the effects of calibration. Regionalizing crop phenology or harvest index appears effective for calibrating the model at the global scale, but using various values of phenology generates pronounced differences in the estimated climate impact. However, projected impacts of climate change on global maize production are consistently negative regardless of the parameter being adjusted. Different values of the model parameters result in a modest uncertainty at the global level, with differences in global yield change of less than 30% by the 2080s. This uncertainty tends to decrease when model calibration or input-data quality control is applied. Calibration has a larger effect at local scales, implying the possible types and locations for adaptation.

  16. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    Science.gov (United States)

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
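
    REPTool's internal implementation is not described here beyond the abstract; the sketch below only illustrates the general Latin Hypercube approach it refers to: draw samples of each uncertain raster and model coefficient from an assumed error distribution, push every sample through the geospatial model, and summarize the per-cell spread of the output. The two-input multiplicative model and all error magnitudes are hypothetical.

    import numpy as np
    from scipy.stats import qmc, norm

    n_samples = 500

    # Hypothetical 10x10 input raster (e.g. slope) and a scalar model coefficient
    slope = np.linspace(1.0, 5.0, 100).reshape(10, 10)
    coef_mean, coef_sd = 0.8, 0.1          # coefficient error (assumed)
    slope_sd = 0.2                          # spatially invariant raster error (assumed)

    # Latin Hypercube Sampling of the two uncertain inputs
    sampler = qmc.LatinHypercube(d=2, seed=7)
    u = sampler.random(n_samples)                       # uniform [0,1) samples
    coef_draws = norm.ppf(u[:, 0], coef_mean, coef_sd)
    slope_err = norm.ppf(u[:, 1], 0.0, slope_sd)        # one offset per realization

    # Propagate through the (placeholder) geospatial model: output = coef * slope
    outputs = np.stack([c * (slope + e) for c, e in zip(coef_draws, slope_err)])

    cell_mean = outputs.mean(axis=0)
    cell_sd = outputs.std(axis=0)
    print("per-cell output sd, min/max:", cell_sd.min().round(3), cell_sd.max().round(3))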

  17. Doubling the spectrum of time-domain induced polarization: removal of non-linear self-potential drift, harmonic noise and spikes, tapered gating, and uncertainty estimation

    DEFF Research Database (Denmark)

    Olsson, Per-Ivar; Fiandaca, Gianluca; Larsen, Jakob Juul;

    This paper presents an advanced signal processing scheme for time-domain induced polarization full waveform data. The scheme includes several steps with an improved induced polarization (IP) response gating design using convolution with tapered windows to suppress high frequency noise...... of noise model parameters for each segment, a full harmonic noise model is subtracted. Furthermore, the uncertainty of the background drift removal is estimated which together with the gating uncertainty estimate and a uniform uncertainty gives a total, data-driven, error estimate for each IP gate...

  18. Stability analysis of thermo-acoustic nonlinear eigenproblems in annular combustors. Part II. Uncertainty quantification

    CERN Document Server

    Magri, Luca; Nicoud, Franck; Juniper, Matthew

    2016-01-01

    Monte Carlo and Active Subspace Identification methods are combined with first- and second-order adjoint sensitivities to perform (forward) uncertainty quantification analysis of the thermo-acoustic stability of two annular combustor configurations. This method is applied to evaluate the risk factor, i.e., the probability for the system to be unstable. It is shown that the adjoint approach reduces the number of nonlinear-eigenproblem calculations by up to $\\sim\\mathcal{O}(M)$, as many as the Monte Carlo samples.
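
    Given Monte Carlo samples of the dominant thermo-acoustic growth rate, however they are produced (directly or through the adjoint-accelerated surrogates described above), the risk factor reduces to the fraction of samples with positive growth rate. A minimal illustration with an entirely hypothetical distribution of growth rates:

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical Monte Carlo samples of the dominant eigenvalue growth rate (1/s),
    # e.g. produced by propagating uncertain flame parameters through the model.
    growth_rate = rng.normal(loc=-2.0, scale=5.0, size=20000)

    risk_factor = np.mean(growth_rate > 0.0)             # P(system unstable)
    stderr = np.sqrt(risk_factor * (1 - risk_factor) / growth_rate.size)
    print(f"risk factor = {risk_factor:.3f} +/- {stderr:.3f}")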

  19. Uncertainty analysis of heat flux measurements estimated using a one-dimensional, inverse heat-conduction program.

    Energy Technology Data Exchange (ETDEWEB)

    Nakos, James Thomas; Figueroa, Victor G.; Murphy, Jill E. (Worcester Polytechnic Institute, Worcester, MA)

    2005-02-01

    The measurement of heat flux in hydrocarbon fuel fires (e.g., diesel or JP-8) is difficult due to high temperatures and the sooty environment. Un-cooled commercially available heat flux gages do not survive in long duration fires, and cooled gages often become covered with soot, thus changing the gage calibration. An alternate method that is rugged and relatively inexpensive is based on inverse heat conduction methods. Inverse heat-conduction methods estimate absorbed heat flux at specific material interfaces using temperature/time histories, boundary conditions, material properties, and usually an assumption of one-dimensional (1-D) heat flow. This method is commonly used at Sandia's fire test facilities. In this report, an uncertainty analysis was performed for a specific example to quantify the effect of input parameter variations on the estimated heat flux when using the inverse heat conduction method. The approach used was to compare results from a number of cases using modified inputs to a base case. The response of a 304 stainless-steel cylinder [about 30.5 cm (12 in.) in diameter and 0.32 cm (1/8 in.) thick] filled with 2.5-cm-thick (1-in.) ceramic fiber insulation was examined. The inverse heat conduction program input parameters varied were steel-wall thickness, thermal conductivity, and volumetric heat capacity; insulation thickness, thermal conductivity, and volumetric heat capacity; temperature uncertainty; boundary conditions; temperature sampling period; and numerical inputs. One-dimensional heat transfer was assumed in all cases. Results of the analysis show that, at the maximum heat flux, the most important parameters were temperature uncertainty, steel thickness and steel volumetric heat capacity. The use of constant thermal properties rather than temperature-dependent values also made a significant difference in the resultant heat flux; therefore, temperature-dependent values should be used. As an example, several parameters were varied to

  20. Uncertainty in runoff based on Global Climate Model precipitation and temperature data – Part 2: Estimation and uncertainty of annual runoff and reservoir yield

    Directory of Open Access Journals (Sweden)

    M. C. Peel

    2014-05-01

    Full Text Available Two key sources of uncertainty in projections of future runoff for climate change impact assessments are uncertainty between Global Climate Models (GCMs) and within a GCM. Within-GCM uncertainty is the variability in GCM output that occurs when running a scenario multiple times but each run has slightly different, but equally plausible, initial conditions. The limited number of runs available for each GCM and scenario combination within the Coupled Model Intercomparison Project phase 3 (CMIP3) and phase 5 (CMIP5) datasets limits the assessment of within-GCM uncertainty. In this second of two companion papers, the primary aim is to approximate within-GCM uncertainty of monthly precipitation and temperature projections and assess its impact on modelled runoff for climate change impact assessments. A secondary aim is to assess the impact of between-GCM uncertainty on modelled runoff. Here we approximate within-GCM uncertainty by developing non-stationary stochastic replicates of GCM monthly precipitation and temperature data. These replicates are input to an off-line hydrologic model to assess the impact of within-GCM uncertainty on projected annual runoff and reservoir yield. To date, within-GCM uncertainty has received little attention in the hydrologic climate change impact literature, and this analysis provides an approximation of the uncertainty in projected runoff, and reservoir yield, due to within- and between-GCM uncertainty of precipitation and temperature projections. In the companion paper, McMahon et al. (2014) sought to reduce between-GCM uncertainty by removing poorly performing GCMs, resulting in a selection of five better performing GCMs from CMIP3 for use in this paper. Here we present within- and between-GCM uncertainty results in mean annual precipitation (MAP), temperature (MAT) and runoff (MAR), the standard deviation of annual precipitation (SDP) and runoff (SDR) and reservoir yield for five CMIP3 GCMs at 17 world-wide catchments

  1. TSS concentration in sewers estimated from turbidity measurements by means of linear regression accounting for uncertainties in both variables.

    Science.gov (United States)

    Bertrand-Krajewski, J L

    2004-01-01

    In order to replace traditional sampling and analysis techniques, turbidimeters can be used to estimate TSS concentration in sewers, by means of sensor- and site-specific empirical equations established by linear regression of on-site turbidity values T with TSS concentrations C measured in corresponding samples. As the ordinary least-squares method is not able to account for measurement uncertainties in both T and C variables, an appropriate regression method is used to solve this difficulty and to evaluate correctly the uncertainty in TSS concentrations estimated from measured turbidity. The regression method is described, including detailed calculations of variances and covariance in the regression parameters. An example of application is given for a calibrated turbidimeter used in a combined sewer system, with data collected during three dry weather days. In order to show how the established regression could be used, an independent 24 h long dry weather turbidity data series recorded at a 2 min time interval is used, transformed into estimated TSS concentrations, and compared to TSS concentrations measured in samples. The comparison appears satisfactory and suggests that turbidity measurements could replace traditional samples. Further developments, including wet weather periods and other types of sensors, are suggested.
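
    The abstract does not reproduce the regression equations; as one standard way of fitting a straight line C = a + bT when both variables carry known standard uncertainties, the sketch below uses an iterative effective-variance (errors-in-both-variables) weighting, in which each point is weighted by 1/(u_C^2 + b^2*u_T^2). The data and uncertainty values are synthetic and purely illustrative.

    import numpy as np

    # Synthetic paired measurements: turbidity T (NTU) and TSS concentration C (mg/L),
    # each with its own standard uncertainty (values are illustrative).
    rng = np.random.default_rng(5)
    T_true = np.linspace(20, 400, 25)
    C_true = 1.2 * T_true + 15.0
    u_T = 0.05 * T_true + 2.0
    u_C = 0.07 * C_true + 3.0
    T = T_true + rng.normal(0, u_T)
    C = C_true + rng.normal(0, u_C)

    # Effective-variance straight-line fit C = a + b*T with errors in both variables
    b = np.polyfit(T, C, 1)[0]               # ordinary least squares as starting value
    for _ in range(50):
        w = 1.0 / (u_C**2 + b**2 * u_T**2)   # weights depend on the current slope
        Sw, Sx, Sy = w.sum(), (w * T).sum(), (w * C).sum()
        Sxx, Sxy = (w * T * T).sum(), (w * T * C).sum()
        b = (Sw * Sxy - Sx * Sy) / (Sw * Sxx - Sx**2)
        a = (Sy - b * Sx) / Sw

    # Approximate parameter variances from the weighted normal equations
    det = Sw * Sxx - Sx**2
    var_b, var_a = Sw / det, Sxx / det
    print(f"C = {a:.1f} + {b:.3f}*T,  u(a) = {np.sqrt(var_a):.1f}, u(b) = {np.sqrt(var_b):.3f}")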

  2. The Parallel C++ Statistical Library ‘QUESO’: Quantification of Uncertainty for Estimation, Simulation and Optimization

    KAUST Repository

    Prudencio, Ernesto E.

    2012-01-01

    QUESO is a collection of statistical algorithms and programming constructs supporting research into the uncertainty quantification (UQ) of models and their predictions. It has been designed with three objectives: it should (a) be sufficiently abstract in order to handle a large spectrum of models, (b) be algorithmically extensible, allowing an easy insertion of new and improved algorithms, and (c) take advantage of parallel computing, in order to handle realistic models. Such objectives demand a combination of an object-oriented design with robust software engineering practices. QUESO is written in C++, uses MPI, and leverages libraries already available to the scientific community. We describe some UQ concepts, present QUESO, and list planned enhancements.

  3. Determination of Al in cake mix: Method validation and estimation of measurement uncertainty

    Science.gov (United States)

    Andrade, G.; Rocha, O.; Junqueira, R.

    2016-07-01

    An analytical method for the determination of Al in cake mix was developed. Acceptable values were obtained for the following parameters: linearity, detection limit (LOD, 5.00 mg kg-1), quantification limit (LOQ, 12.5 mg kg-1), recovery assay values (between 91 and 102%), relative standard deviation under repeatability and within-reproducibility conditions (<20.0%), and measurement uncertainty (<10.0%). The results of the validation process showed that the proposed method is fit for purpose.

  4. Uncertainty in recharge estimation: impact on groundwater vulnerability assessments for the Pearl Harbor Basin, O'ahu, Hawai'i, U.S.A.

    Science.gov (United States)

    Giambelluca, Thomas W.; Loague, Keith; Green, Richard E.; Nullet, Michael A.

    1996-06-01

    In this paper, uncertainty in recharge estimates is investigated relative to its impact on assessments of groundwater contamination vulnerability using a relatively simple pesticide mobility index, attenuation factor (AF). We employ a combination of first-order uncertainty analysis (FOUA) and sensitivity analysis to investigate recharge uncertainties for agricultural land on the island of O'ahu, Hawai'i, that is currently, or has been in the past, under sugarcane or pineapple cultivation. Uncertainty in recharge due to recharge component uncertainties is 49% of the mean for sugarcane and 58% of the mean for pineapple. The components contributing the largest amounts of uncertainty to the recharge estimate are irrigation in the case of sugarcane and precipitation in the case of pineapple. For a suite of pesticides formerly or currently used in the region, the contribution to AF uncertainty of recharge uncertainty was compared with the contributions of other AF components: retardation factor (RF), a measure of the effects of sorption; soil-water content at field capacity (ΘFC); and pesticide half-life (t1/2). Depending upon the pesticide, the contribution of recharge to uncertainty ranks second or third among the four AF components tested. The natural temporal variability of recharge is another source of uncertainty in AF, because the index is calculated using the time-averaged recharge rate. Relative to the mean, recharge variability is 10%, 44%, and 176% for the annual, monthly, and daily time scales, respectively, under sugarcane, and 31%, 112%, and 344%, respectively, under pineapple. In general, uncertainty in AF associated with temporal variability in recharge at all time scales exceeds AF. For chemicals such as atrazine or diuron under sugarcane, and atrazine or bromacil under pineapple, the range of AF uncertainty due to temporal variability in recharge encompasses significantly higher levels of leaching potential at some locations than that indicated by the
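
    First-order uncertainty analysis propagates the input variances through the local sensitivities of the index. Assuming the commonly used form of the attenuation factor, AF = exp(-0.693*d*RF*thetaFC/(q*t1/2)) (the exact formulation and the input values used in the paper may differ), the sketch below computes the first-order variance of AF from assumed coefficients of variation of recharge and the other inputs, and ranks their contributions:

    import numpy as np

    def attenuation_factor(d, RF, theta_fc, q, t_half):
        """Pesticide attenuation factor (assumed standard form):
        AF = exp(-0.693 * travel_time / t_half), travel_time = d*RF*theta_fc/q."""
        return np.exp(-0.693 * d * RF * theta_fc / (q * t_half))

    # Nominal values (illustrative): depth to water table (m), retardation factor,
    # field-capacity water content (-), recharge rate (m/day), half-life (days).
    x0 = dict(d=15.0, RF=4.0, theta_fc=0.35, q=2.5e-3, t_half=60.0)
    cv = dict(d=0.05, RF=0.30, theta_fc=0.10, q=0.50, t_half=0.40)   # assumed CVs

    af0 = attenuation_factor(**x0)
    var_af, contrib = 0.0, {}
    for name, val in x0.items():
        h = 1e-6 * val                                   # numerical partial derivative
        xp = dict(x0)
        xp[name] = val + h
        dAF_dx = (attenuation_factor(**xp) - af0) / h
        term = (dAF_dx * cv[name] * val) ** 2            # (dAF/dx * sigma_x)^2
        var_af += term
        contrib[name] = term

    print(f"AF = {af0:.2e}, first-order sd = {np.sqrt(var_af):.2e}")
    for name, term in sorted(contrib.items(), key=lambda kv: -kv[1]):
        print(f"  {name:9s} contributes {100 * term / var_af:5.1f} % of Var(AF)")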

  5. Estimate of the theoretical uncertainty of the cross sections for nucleon knockout in neutral-current neutrino-oxygen interactions

    CERN Document Server

    Ankowski, Artur M; Benhar, Omar; Caballero, Juan A; Giusti, Carlotta; González-Jiménez, Raúl; Megias, Guillermo D; Meucci, Andrea

    2015-01-01

    Free nucleons propagating in water are known to produce gamma rays, which form a background to the searches for diffuse supernova neutrinos and sterile neutrinos carried out with Cherenkov detectors. As a consequence, the process of nucleon knockout induced by neutral-current quasielastic interactions of atmospheric (anti)neutrinos with oxygen needs to be under control at the quantitative level in the background simulations of the ongoing and future experiments. In this paper, we provide a quantitative assessment of the uncertainty associated with the theoretical description of the nuclear cross sections, estimating it from the discrepancies between the predictions of different models.

  6. Critical headway estimation under uncertainty and non-ideal communication conditions

    NARCIS (Netherlands)

    Kester, L.J.H.M.; Willigen, W. van; Jongh, J.F.C.M de

    2014-01-01

    This article proposes a safety check extension to Adaptive Cruise Control systems where the critical headway time is estimated in real-time. This critical headway time estimate enables automated reaction to crisis circumstances such as when a preceding vehicle performs an emergency brake. We discuss

  8. Degradation and performance evaluation of PV module in desert climate conditions with estimate uncertainty in measuring

    Directory of Open Access Journals (Sweden)

    Fezzani Amor

    2017-01-01

    Full Text Available The performance of a photovoltaic (PV) module is affected by outdoor conditions. Outdoor testing consists of installing a module and collecting electrical performance data and climatic data over a certain period of time. It can also include the study of long-term performance under real working conditions. Tests are operated at URAER, located in the desert region of Ghardaïa (Algeria), characterized by high irradiation and temperature levels. The degradation of a PV module with temperature and time of exposure to sunlight contributes significantly to the final output from the module, as the output decreases each year. This paper presents a comparative study of different methods to evaluate the degradation of a PV module after a long-term exposure of more than 12 years in a desert region, and calculates the uncertainties in measurement. Firstly, the evaluation uses three methods: visual inspection, data given by the Solmetric PVA-600 Analyzer translated to Standard Test Conditions (STC), and results based on the translation equations of IEC 60891. Secondly, degradation rates are calculated for all methods. Finally, a comparison is made between the degradation rates given by the Solmetric PVA-600 analyzer, calculated by a simulation model, and calculated by two methods (IEC 60891 procedures 1 and 2). We carried out a detailed uncertainty study in order to improve the procedure and the measurement instrument.
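
    As a purely numerical illustration of the degradation-rate and uncertainty calculation (not the equations of the paper or of IEC 60891), an annualized degradation rate can be taken as the relative loss of STC-translated maximum power divided by the exposure time, with its uncertainty propagated from the uncertainties of the initial and final power measurements; all values below are assumed:

    import math

    P_initial, u_P_initial = 55.0, 1.1   # W, initial STC power and its uncertainty (assumed)
    P_final,   u_P_final   = 46.5, 1.4   # W, STC-translated power after exposure (assumed)
    years = 12.0

    ratio = P_final / P_initial
    u_ratio = ratio * math.sqrt((u_P_final / P_final) ** 2 + (u_P_initial / P_initial) ** 2)

    rate = 100.0 * (1.0 - ratio) / years            # %/year
    u_rate = 100.0 * u_ratio / years                # propagated standard uncertainty
    print(f"degradation rate = {rate:.2f} %/yr +/- {2 * u_rate:.2f} %/yr (k = 2)")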

  9. Towards a more harmonized processing of eddy covariance CO2 fluxes: algorithms and uncertainty estimation

    Directory of Open Access Journals (Sweden)

    T. Vesala

    2006-07-01

    Full Text Available The eddy covariance technique for measuring CO2, water and energy fluxes between the biosphere and atmosphere is widespread and used in various regional networks. Currently more than 250 eddy covariance sites are active around the world, measuring carbon exchange at high temporal resolution for different biomes and climatic conditions. These data are usually acquired using the same method, but they need a set of corrections that are often applied differently at each site and in a subjective way. In this paper a new standardized set of corrections is proposed, and the uncertainties introduced by these corrections are assessed for 8 different forest sites in Europe with a total of 12 yearly datasets. The uncertainties introduced in the two components GPP (Gross Primary Production) and TER (Terrestrial Ecosystem Respiration) are also discussed and a quantitative analysis is presented. The results show that standardized data processing is needed for an effective comparison across biomes and for underpinning inter-annual variability. The methodology presented in this paper has also been integrated in the European database of eddy covariance measurements.

  10. Validation and measurement uncertainty estimation in food microbiology: differences between quantitative and qualitative methods

    Directory of Open Access Journals (Sweden)

    Vesna Režić Dereani

    2010-09-01

    Full Text Available The aim of this research is to describe quality control procedures, validation procedures and measurement uncertainty (MU) determination as important elements of quality assurance in a food microbiology laboratory, for both qualitative and quantitative types of analysis. Accreditation is conducted according to the standard ISO 17025:2007, General requirements for the competence of testing and calibration laboratories, which guarantees compliance with standard operating procedures and the technical competence of the staff involved in the tests; it has recently been widely introduced in food microbiology laboratories in Croatia. In addition to the introduction of a quality manual and many general documents, some of the most demanding procedures in routine microbiology laboratories are the establishment of measurement uncertainty (MU) procedures and validation experiment designs. Those procedures are not yet standardized even at the international level, and they require practical microbiological knowledge together with statistical competence. Differences between validation experiment designs for quantitative and qualitative food microbiology analysis are discussed in this research, and practical solutions are briefly described. MU for quantitative determinations is a more demanding issue than qualitative MU calculation. MU calculations are based on external proficiency testing data and internal validation data. In this paper, practical schematic descriptions for both procedures are shown.

  11. Uncertainty Estimation for 2D PIV: An In-Depth Comparative Analysis

    Science.gov (United States)

    Boomsma, Aaron; Bhattacharya, Syantan; Troolin, Dan; Vlachos, Pavlos; Pothos, Stamatios

    2016-11-01

    Uncertainty quantification methods have recently made great strides in accurately predicting uncertainties for planar PIV, and several different approaches are now documented. In the present study, we provide an analysis of these methods across different experiments and different PIV processing codes. To assess the performance of said methods, we follow the approach of Sciacchitano et al. (2015) and utilize two PIV measurement systems with overlapping fields of view, one acting as a reference (which is validated using simultaneous LDV measurements) and the other as a measurement system, paying close attention to the effects of interrogation window overlap and bias errors on the analysis. A total of three experiments were performed: a jet flow and a cylinder in cross flow at two Reynolds numbers. In brief, the standard coverages (68% confidence interval) ranged from approximately 65%-77% for the PPR and MI methods and 40%-50% for image matching methods. We present an in-depth survey of both global (e.g., coverage and error histograms) and local (e.g., spatially varying statistics) parameters to examine the strengths and weaknesses of each method and to monitor their responses to different regions of the experimental flows.
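
    The coverage statistic quoted above is simply the fraction of vectors whose error, measured against the reference system, falls within the uncertainty band predicted for that vector. A minimal sketch, with synthetic errors and predicted uncertainties standing in for the two measurement systems:

    import numpy as np

    rng = np.random.default_rng(11)
    n = 50000

    # Synthetic per-vector velocity errors (measurement minus reference) and the
    # standard uncertainty predicted for each vector by some UQ method (illustrative).
    true_sigma = 0.10                                      # px, actual error level
    error = rng.normal(0.0, true_sigma, n)
    predicted_sigma = rng.lognormal(np.log(0.09), 0.3, n)  # an imperfect predictor

    coverage_1sigma = np.mean(np.abs(error) <= predicted_sigma)
    print(f"standard (68%) coverage = {100 * coverage_1sigma:.1f} %")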

  12. The Uncertainty of Biomass Estimates from Modeled ICESat-2 Returns Across a Boreal Forest Gradient

    Science.gov (United States)

    Montesano, P. M.; Rosette, J.; Sun, G.; North, P.; Nelson, R. F.; Dubayah, R. O.; Ranson, K. J.; Kharuk, V.

    2014-01-01

    The Forest Light (FLIGHT) radiative transfer model was used to examine the uncertainty of vegetation structure measurements from NASA's planned ICESat-2 photon counting light detection and ranging (LiDAR) instrument across a synthetic Larix forest gradient in the taiga-tundra ecotone. The simulations demonstrate how measurements from the planned spaceborne mission, which differ from those of previous LiDAR systems, may perform across a boreal forest to non-forest structure gradient in a globally important ecological region of northern Siberia. We used a modified version of FLIGHT to simulate the acquisition parameters of ICESat-2. Modeled returns were analyzed from collections of sequential footprints along LiDAR tracks (link-scales) of lengths ranging from 20 m to 90 m. These link-scales traversed synthetic forest stands that were initialized with parameters drawn from field surveys in Siberian Larix forests. LiDAR returns from vegetation were compiled for 100 simulated LiDAR collections for each 10 Mg ha-1 interval in the 0-100 Mg ha-1 above-ground biomass density (AGB) forest gradient. Canopy height metrics were computed and AGB was inferred from empirical models. The root mean square error (RMSE) and RMSE uncertainty associated with the distribution of inferred AGB within each AGB interval across the gradient was examined. Simulation results for the bright daylight and low vegetation reflectivity conditions for collecting photon counting LiDAR with no topographic relief show that 1-2 photons are returned for 79%-88% of LiDAR shots. Signal photons account for approximately 67% of all LiDAR returns, while approximately 50% of shots result in 1 signal photon returned. The proportion of these signal photon returns does not differ significantly (p greater than 0.05) for AGB intervals greater than 20 Mg ha-1. The 50 m link-scale approximates the finest horizontal resolution (length) at which photon counting LiDAR collection provides strong model

  14. Estimation Uncertainty in the Determination of the Master Curve Reference Temperature

    Energy Technology Data Exchange (ETDEWEB)

    TL Sham; DR Eno

    2006-11-15

    The Master Curve Reference Temperature, T{sub 0}, characterizes the fracture performance of structural steels in the ductile-to-brittle transition region. For a given material, this reference temperature is estimated via fracture toughness testing. A methodology is presented to compute the standard error of an estimated T{sub 0} value from a finite sample of toughness data, in a unified manner for both constant temperature and multiple temperature test methods. Using the asymptotic properties of maximum likelihood estimators, closed-form expressions for the standard error of the estimate of T{sub 0} are presented for both test methods. This methodology includes statistically rigorous treatment of censored data, which represents an advance over the current ASTM E1921 methodology. Through Monte Carlo simulations of realistic constant temperature and multiple temperature test plans, the recommended likelihood-based procedure is shown to provide better statistical performance than the methods in the ASTM E1921 standard.

  15. ESTIMATION UNCERTAINTY IN THE DETERMINATION OF THE MASTER CURVE REFERENCE TEMPERATURE

    Energy Technology Data Exchange (ETDEWEB)

    Sham, Sam [ORNL; Eno, Daniel R [Bechtel Marine Propulsion Corporation

    2010-01-01

    The Master Curve Reference Temperature, T0, characterizes the fracture performance of structural steels in the ductile-to-brittle transition region. For a given material, this reference temperature is estimated via fracture toughness testing. A methodology is presented to compute the standard error of an estimated T0 value from a finite sample of toughness data, in a unified manner for both single temperature and multiple temperature test methods. Using the asymptotic properties of maximum likelihood estimators, closed-form expressions for the standard error of the estimate of T0 are presented for both test methods. This methodology includes statistically rigorous treatment of censored data, which represents an advance over the current ASTM E1921 methodology. Through Monte Carlo simulations of realistic single temperature and multiple temperature test plans, the recommended likelihood-based procedure is shown to provide better statistical performance than the methods in the ASTM E1921 standard.

  16. Sensitivity of CO2 migration estimation on reservoir temperature and pressure uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Preston; Doughty, Christine

    2008-11-01

    The density and viscosity of supercritical CO{sub 2} are sensitive to pressure and temperature (PT) while the viscosity of brine is sensitive primarily to temperature. Oil field PT data in the vicinity of WESTCARB's Phase III injection pilot test site in the southern San Joaquin Valley, California, show a range of PT values, indicating either PT uncertainty or variability. Numerical simulation results across the range of likely PT indicate brine viscosity variation causes virtually no difference in plume evolution and final size, but CO{sub 2} density variation causes a large difference. Relative ultimate plume size is almost directly proportional to the relative difference in brine and CO{sub 2} density (buoyancy flow). The majority of the difference in plume size occurs during and shortly after the cessation of injection.

  17. Development and comparison of metrics for evaluating climate models and estimation of projection uncertainty

    Science.gov (United States)

    Ring, Christoph; Pollinger, Felix; Kaspar-Ott, Irena; Hertig, Elke; Jacobeit, Jucundus; Paeth, Heiko

    2017-04-01

    The COMEPRO project (Comparison of Metrics for Probabilistic Climate Change Projections of Mediterranean Precipitation), funded by the Deutsche Forschungsgemeinschaft (DFG), is dedicated to the development of new evaluation metrics for state-of-the-art climate models. Further, we analyze implications for probabilistic projections of climate change. This study focuses on the results of 4-field matrix metrics. Here, six different approaches are compared. We evaluate 24 models of the Coupled Model Intercomparison Project Phase 3 (CMIP3), 40 of CMIP5 and 18 of the Coordinated Regional Downscaling Experiment (CORDEX). In addition to annual and seasonal precipitation, the mean temperature is analysed. We consider both the 50-year trend and the climatological mean for the second half of the 20th century. For the probabilistic projections of climate change, the A1B and A2 (CMIP3) and RCP4.5 and RCP8.5 (CMIP5, CORDEX) scenarios are used. The eight main study areas are located in the Mediterranean. However, we apply our metrics to globally distributed regions as well. The metrics show high simulation quality of the temperature trend and of both the precipitation and temperature mean for most climate models and study areas. In addition, we find high potential for model weighting in order to reduce uncertainty. These results are in line with other accepted evaluation metrics and studies. The comparison of the different 4-field approaches reveals high correlations for most metrics. The results of the metric-weighted probability density functions of climate change are heterogeneous. We find both increases and decreases of uncertainty for different regions and seasons. The analysis of global study areas is consistent with the regional study areas of the Mediterranean.

  18. Effect of activation cross-section uncertainties in selecting steels for the HYLIFE-II chamber to successful waste management

    Energy Technology Data Exchange (ETDEWEB)

    Sanz, J. [Universidad Nacional Educacion a Distancia, Dep. Ingenieria Energetica, Juan del Rosal 12, 28040 Madrid (Spain) and Instituto de Fusion Nuclear, Universidad Politecnica de Madrid, Madrid (Spain)]. E-mail: jsanz@ind.uned.es; Cabellos, O. [Instituto de Fusion Nuclear, Universidad Politecnica de Madrid, Madrid (Spain); Reyes, S. [Lawrence Livermore National Laboratory, Livermore, CA (United States)

    2005-11-15

    We perform the waste management assessment of the different types of steels proposed as structural material for the inertial fusion energy (IFE) HYLIFE-II concept. Both recycling options, hands-on (HoR) and remote (RR), are unacceptable. Regarding shallow land burial (SLB), 304SS has a very good performance, and both Cr-W ferritic steels (FS) and oxide-dispersion-strengthened (ODS) FS are very likely to be acceptable. The only two impurity elements that question the possibility of obtaining reduced activation (RA) steels for SLB are niobium and molybdenum. The effect of activation cross-section uncertainties on SLB assessments is proved to be important. The necessary improvement of some tungsten and niobium cross-sections is justified.

  19. A modeling approach to evaluate the uncertainty in estimating the evaporation behaviour and volatility of organic aerosols

    Directory of Open Access Journals (Sweden)

    E. Fuentes

    2012-04-01

    Full Text Available The uncertainty in determining the volatility behaviour of organic particles from thermograms using calibration curves and a kinetic model has been evaluated. In the analysis, factors such as re-condensation, departure from equilibrium and analysis methodology were considered as potential sources of uncertainty in deriving volatility distribution from thermograms obtained with currently used thermodenuder designs.

    The previously found empirical relationship between C* (saturation concentration) and T50 (the temperature at which 50% of the aerosol mass evaporates) was theoretically interpreted and tested to infer volatility distributions from experimental thermograms. The presented theoretical analysis shows that this empirical equation is in fact an equilibrium formulation, whose applicability is lessened as measurements deviate from equilibrium. While using a calibration curve between C* and T50 to estimate volatility properties was found to hold at equilibrium, significant underestimation was obtained under kinetically-controlled evaporation conditions. Because thermograms obtained at ambient aerosol loading levels are most likely to show departure from equilibrium, the application of a kinetic evaporation model is more suitable for inferring volatility properties of atmospheric samples than the calibration curve approach; however, the kinetic model analysis implies significant uncertainty, due to its sensitivity to the assumption of "effective" net kinetic evaporation and condensation coefficients. The influence of re-condensation on thermograms from the thermodenuder designs under study was found to be highly dependent on the particular experimental condition, with a significant potential to affect volatility estimations for aerosol mass loadings >50 μg m−3 and with increasing effective kinetic coefficient for condensation and decreasing particle size. These results show that the

  20. Joint use of singular value decomposition and Monte-Carlo simulation for estimating uncertainty in surface NMR inversion

    Science.gov (United States)

    Legchenko, Anatoly; Comte, Jean-Christophe; Ofterdinger, Ulrich; Vouillamoz, Jean-Michel; Lawson, Fabrice Messan Amen; Walsh, John

    2017-09-01

    We propose a simple and robust approach for investigating uncertainty in the results of inversion in geophysics. We apply this approach to the inversion of Surface Nuclear Magnetic Resonance (SNMR) data, which is also known as Magnetic Resonance Sounding (MRS). The solution of this inverse problem is known to be non-unique. We invert MRS data using the well-known Tikhonov regularization method, which provides an optimal solution as a trade-off between stability and accuracy. Then, we perturb this model by random values and compute the fitting error for the perturbed models. The magnitude of these perturbations is limited by the uncertainty estimated with the singular value decomposition (SVD), taking into account experimental errors. We use 10^6 perturbed models and show that the large majority of these models, which have all the water content within the variations given by the SVD estimate, do not fit the data with an acceptable accuracy. Thus, we may limit the solution space to only the equivalent inverse models that fit the data with an accuracy close to that of the initial inverse model. For representing inversion results, we use three equivalent solutions instead of only one: the "best" solution given by the regularization or another inversion technique, and the extreme variations of this solution corresponding to the equivalent models with the minimum and the maximum volume of water. For demonstrating our approach, we use synthetic data sets and experimental data acquired in the framework of the investigation of a hard rock aquifer in Ireland (County Donegal).
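
    A schematic of the perturbation idea for a generic linear inverse problem d = Gm (not the SNMR forward operator itself): compute a Tikhonov-regularized solution, scale random perturbations of each model cell by an SVD-based uncertainty, and retain only the perturbed models whose data misfit stays close to that of the regularized solution. The operator, noise level, truncation threshold and misfit tolerance are all illustrative choices.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy linear inverse problem d = G m + noise
    n_data, n_model = 40, 25
    G = np.exp(-0.15 * np.abs(np.subtract.outer(np.arange(n_data), np.arange(n_model))))
    m_true = np.exp(-0.5 * ((np.arange(n_model) - 12) / 4.0) ** 2)
    sigma_d = 0.02
    d = G @ m_true + rng.normal(0, sigma_d, n_data)

    # Tikhonov-regularized solution and its reference misfit
    lam = 0.1
    m_reg = np.linalg.solve(G.T @ G + lam * np.eye(n_model), G.T @ d)
    rms_ref = np.sqrt(np.mean((G @ m_reg - d) ** 2))

    # SVD-based scale for admissible perturbations of each model parameter
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    keep = s > 0.05 * s.max()                            # truncate small singular values
    cov_m = (Vt[keep].T * (sigma_d / s[keep]) ** 2) @ Vt[keep]
    sigma_m = np.sqrt(np.diag(cov_m))

    # Perturb, then retain only models that still fit the data acceptably
    n_trials, tol = 100000, 1.05
    pert = m_reg + rng.normal(0, 1, (n_trials, n_model)) * sigma_m
    rms = np.sqrt(np.mean((pert @ G.T - d) ** 2, axis=1))
    equivalent = pert[rms <= tol * rms_ref]
    print(f"{equivalent.shape[0]} of {n_trials} perturbed models remain data-equivalent")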

  1. A two-step combination of top-down and bottom-up fire emission estimates at regional and global scales: strengths and main uncertainties

    Science.gov (United States)

    Sofiev, Mikhail; Soares, Joana; Kouznetsov, Rostislav; Vira, Julius; Prank, Marje

    2016-04-01

    Top-down emission estimation via inverse dispersion modelling is used for various problems where bottom-up approaches are difficult or highly uncertain. One such area is the estimation of emissions from wild-land fires. In combination with dispersion modelling, satellite and/or in-situ observations can, in principle, be used to efficiently constrain the emission values. This is the main strength of the approach: the a-priori values of the emission factors (based on laboratory studies) are refined for real-life situations using the inverse-modelling technique. However, the approach also has major uncertainties, which are illustrated here with a few examples of the Integrated System for wild-land Fires (IS4FIRES). IS4FIRES generates the smoke emission and injection profile from MODIS and SEVIRI active-fire radiative energy observations. The emission calculation includes two steps: (i) initial top-down calibration of emission factors via inverse dispersion problem solution, made once using a training dataset from the past, (ii) application of the obtained emission coefficients to individual-fire radiative energy observations, thus leading to a bottom-up emission compilation. For such a procedure, the major classes of uncertainties include: (i) imperfect information on fires, (ii) simplifications in the fire description, (iii) inaccuracies in the smoke observations and modelling, (iv) inaccuracies of the inverse problem solution. Using examples of the fire seasons 2010 in Russia, 2012 in Eurasia, 2007 in Australia, etc., it is pointed out that a top-down system calibration performed for a limited number of comparatively moderate cases (often the best-observed ones) may lead to errors in application to extreme events. For instance, the total emission of the 2010 Russian fires is likely to be over-estimated by up to 50% if the calibration is based on the season 2006 and the fire description is simplified. A longer calibration period and more sophisticated parameterization

  2. Considering sampling strategy and cross-section complexity for estimating the uncertainty of discharge measurements using the velocity-area method

    Science.gov (United States)

    Despax, Aurélien; Perret, Christian; Garçon, Rémy; Hauet, Alexandre; Belleville, Arnaud; Le Coz, Jérôme; Favre, Anne-Catherine

    2016-02-01

    Streamflow time series provide baseline data for many hydrological investigations. Errors in the data mainly occur through uncertainty in gauging (measurement uncertainty) and uncertainty in the determination of the stage-discharge relationship based on gaugings (rating curve uncertainty). As the velocity-area method is the measurement technique typically used for gaugings, it is fundamental to estimate its level of uncertainty. Different methods are available in the literature (ISO 748, Q+, IVE), all with their own limitations and drawbacks. Among the terms forming the combined relative uncertainty in measured discharge, the uncertainty component relating to the limited number of verticals often includes a large part of the relative uncertainty. It should therefore be estimated carefully. In the ISO 748 standard, proposed values of this uncertainty component only depend on the number of verticals, without considering their distribution with respect to the depth and velocity cross-sectional profiles. The Q+ method is sensitive to a user-defined parameter, while it is questionable whether the IVE method is applicable to stream-gaugings performed with a limited number of verticals. To address the limitations of existing methods, this paper presents a new methodology, called FLow Analog UnceRtainty Estimation (FLAURE), to estimate the uncertainty component relating to the limited number of verticals. High-resolution reference gaugings (with 31 and more verticals) are used to assess the uncertainty component through a statistical analysis. Instead of subsampling the verticals of these reference stream-gaugings purely randomly, a subsampling method is developed in a way that mimics the behavior of a hydrometric technician. A sampling quality index (SQI) is suggested and appears to be a more explanatory variable than the number of verticals. This index takes into account the spacing between verticals and the variation of unit flow between two verticals. To compute the

  3. Uncertainties of isoprene emissions in the MEGAN model estimated for a coniferous and broad-leaved mixed forest in Southern China

    Science.gov (United States)

    Situ, Shuping; Wang, Xuemei; Guenther, Alex; Zhang, Yanli; Wang, Xinming; Huang, Minjuan; Fan, Qi; Xiong, Zhe

    2014-12-01

    With locally observed emission factors and meteorological data, this study constrained the Model of Emissions of Gases and Aerosols from Nature (MEGAN) v2.1 to estimate isoprene emission from the Dinghushan forest during fall 2008 and to quantify the uncertainties associated with MEGAN parameters using a Monte Carlo approach. Compared with observation-based isoprene emission data originating from a campaign during this period at this site, the locally constrained MEGAN tends to reproduce the diurnal variations and magnitude of isoprene emission reasonably well, with a correlation coefficient of 0.7 and a mean bias of 47.5%. The results also indicate high uncertainties in the estimated isoprene emission, with the relative error ranging from -89.0% to 111.0% at the 95% confidence interval. The key uncertainty sources include the emission factors, γTLD, photosynthetically active radiation (PAR) and temperature. This implies that accurate input of emission factors, PAR and temperature is a key approach to reducing uncertainties in isoprene emission estimation.

  4. Estimation of uncertainty of wind energy predictions with application to weather routing and wind power generation

    CERN Document Server

    Zastrau, David

    2017-01-01

    Wind drives in combination with weather routing can lower the fuel consumption of cargo ships significantly. For this reason, the author describes a mathematical method based on quantile regression for a probabilistic estimate of the wind propulsion force on a ship route.
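
    As an illustration of the quantile-regression idea only (the author's actual predictors and model form are not given in this abstract), the sketch below fits several conditional quantiles of a hypothetical wind-propulsion-force proxy against wind speed with statsmodels, which yields a probabilistic band rather than a single point estimate:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)

    # Synthetic data: wind speed (m/s) and a propulsion-force proxy (kN) with
    # heteroscedastic scatter (all values are illustrative).
    wind = rng.uniform(3, 20, 400)
    force = 0.9 * wind**1.5 + rng.normal(0, 0.15 * wind**1.5)
    df = pd.DataFrame({"wind": wind, "force": force})

    model = smf.quantreg("force ~ np.power(wind, 1.5)", df)
    for q in (0.1, 0.5, 0.9):
        fit = model.fit(q=q)
        pred = fit.predict(pd.DataFrame({"wind": [15.0]}))[0]
        print(f"q = {q:.1f}: predicted force at 15 m/s = {pred:.1f} kN")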

  5. Uncertainty in Climatology-Based Estimates of Shallow Ground Water Recharge

    Science.gov (United States)

    The groundwater recharge (GR) estimates for flow and transport projections are often evaluated as a fixed percentage of average annual precipitation. The chemical transport in variably saturated heterogeneous porous media is not linearly related to the average velocity. The objective of this study w...

  6. Evaluation of estimation methods and base data uncertainties for critical loads of acid deposition in Japan

    NARCIS (Netherlands)

    Shindo, J.; Bregt, A.K.; Hakamata, T.

    1995-01-01

    A simplified steady-state mass balance model for estimating critical loads was applied to a test area in Japan to evaluate its applicability. Three criteria for acidification limits were used. Mean values and spatial distribution patterns of critical load values calculated by these criteria differed

  7. Uncertainty in eddy covariance flux estimates resulting from spectral attenuation [Chapter 4

    Science.gov (United States)

    W. J. Massman; R. Clement

    2004-01-01

    Surface exchange fluxes measured by eddy covariance tend to be underestimated as a result of limitations in sensor design, signal processing methods, and finite flux-averaging periods. But, careful system design, modern instrumentation, and appropriate data processing algorithms can minimize these losses, which, if not too large, can be estimated and corrected using...

  8. Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management

    Science.gov (United States)

    A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...

  10. Group-Contribution based Property Estimation and Uncertainty analysis for Flammability-related Properties

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens

    2016-01-01

    This study presents new group contribution (GC) models for the prediction of Lower and Upper Flammability Limits (LFL and UFL), Flash Point (FP) and Auto Ignition Temperature (AIT) of organic chemicals applying the Marrero/Gani (MG) method. Advanced methods for parameter estimation using robust...

  11. Sensitivity of quantitative groundwater recharge estimates to volumetric and distribution uncertainty in rainfall forcing products

    Science.gov (United States)

    Werner, Micha; Westerhoff, Rogier; Moore, Catherine

    2017-04-01

    Quantitative estimates of recharge due to precipitation excess are an important input to determining sustainable abstraction of groundwater resources, as well as providing one of the boundary conditions required for numerical groundwater modelling. Simple water balance models are widely applied for calculating recharge. In these models, precipitation is partitioned between different processes and stores, including surface runoff and infiltration, storage in the unsaturated zone, evaporation, capillary processes, and recharge to groundwater. Clearly the estimation of recharge amounts will depend on the estimation of precipitation volumes, which may vary depending on the source of precipitation data used. However, the partitioning between the different processes is in many cases governed by (variable) intensity thresholds. This means that the estimates of recharge will be sensitive not only to input parameters such as soil type, texture, land use, and potential evaporation, but mainly to the precipitation volume and intensity distribution. In this paper we explore the sensitivity of recharge estimates to differences in precipitation volumes and intensity distribution in the rainfall forcing over the Canterbury region in New Zealand. We compare recharge rates and volumes using a simple water balance model that is forced using rainfall and evaporation data from: the NIWA Virtual Climate Station Network (VCSN) data (which is considered as the reference dataset); the ERA-Interim/WATCH dataset at 0.25 degrees and 0.5 degrees resolution; the TRMM-3B42 dataset; the CHIRPS dataset; and the recently released MSWEP dataset. Recharge rates are calculated at a daily time step over the 14-year period from 2000 to 2013 for the full Canterbury region, as well as at eight selected points distributed over the region. Lysimeter data with observed estimates of recharge are available at four of these points, as well as recharge estimates from the NGRM model, an independent model
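
    To make the threshold sensitivity concrete, the toy daily bucket model below (not the water balance model used in the study) partitions each day's rainfall into runoff above an intensity threshold, evaporation, soil storage, and recharge once storage exceeds a capacity; rerunning it with rainfall of identical annual volume but a different intensity distribution changes the recharge estimate. All parameter values are hypothetical.

    import numpy as np

    def daily_recharge(rain, pet, capacity=100.0, runoff_threshold=20.0, runoff_frac=0.5):
        """Toy daily soil-water bucket (all fluxes in mm). Rainfall above the
        intensity threshold partly runs off; storage above capacity drains to recharge."""
        store, recharge = 0.5 * capacity, 0.0
        for p, e in zip(rain, pet):
            runoff = runoff_frac * max(p - runoff_threshold, 0.0)
            store += p - runoff
            store = max(store - e, 0.0)          # actual evaporation limited by storage
            if store > capacity:
                recharge += store - capacity
                store = capacity
        return recharge

    rng = np.random.default_rng(4)
    pet = np.full(365, 1.5)                      # mm/day potential evaporation

    # Two forcings with identical annual volume but different intensity distributions
    rain_uniform = np.full(365, 2.0)                         # light rain every day
    rain_bursty = np.zeros(365)
    wet_days = rng.choice(365, 30, replace=False)
    rain_bursty[wet_days] = 365 * 2.0 / 30                   # ~30 intense events

    print("recharge, uniform rainfall:", round(daily_recharge(rain_uniform, pet), 1), "mm")
    print("recharge, bursty rainfall: ", round(daily_recharge(rain_bursty, pet), 1), "mm")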

  12. Aerosol effective density measurement using scanning mobility particle sizer and quartz crystal microbalance with the estimation of involved uncertainty

    Directory of Open Access Journals (Sweden)

    B. Sarangi

    2015-12-01

    In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from solutions of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and also applied for ambient measurement in New Delhi. We also discuss the uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyzer (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other one is sent to the QCM to measure the particle mass concentration simultaneously. Based on particle volume derived from size distribution data of the SMPS and mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of inorganic salt particles (for particle count mean diameter (CMD) over a size range 10 to 478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm−3, values which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm−3, respectively. Among individual uncertainty components, repeatability of particle mass obtained by the QCM, the QCM crystal frequency, CPC counting efficiency, and equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. Effective density for ambient particles at the beginning of the winter period in New Delhi is measured to be 1.28 ± 0.12 g cm−3. It was found that, in general, mid-day effective density of ambient aerosols increases with increase in CMD of particle size measurement but particle photochemistry is an

  13. Aerosol effective density measurement using scanning mobility particle sizer and quartz crystal microbalance with the estimation of involved uncertainty

    Science.gov (United States)

    Sarangi, Bighnaraj; Aggarwal, Shankar G.; Sinha, Deepak; Gupta, Prabhat K.

    2016-03-01

    In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from the solution of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and also applied for ambient measurement in New Delhi. We also discuss uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyser (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other one is sent to the QCM to measure the particle mass concentration simultaneously. Based on particle volume derived from size distribution data of the SMPS and mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of inorganic salt particles (for particle count mean diameter (CMD) over a size range 10-478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm-3, values which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm-3, respectively. Using this technique, the percentage contribution of error in the measurement of effective density is calculated to be in the range of 9-17 %. Among the individual uncertainty components, repeatability of particle mass obtained by the QCM, the QCM crystal frequency, CPC counting efficiency, and the equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. Effective density for ambient particles at the beginning of the winter period in New Delhi was measured to be 1.28 ± 0.12 g cm-3
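
    The core calculation described in both records is the ratio of the QCM mass concentration to the SMPS-derived particle volume concentration, with relative uncertainty components combined and expanded at k = 2. The sketch below illustrates that arithmetic with invented size-distribution values and assumed uncertainty components; it is not the authors' processing code.

      import numpy as np

      # Illustrative only (not the authors' code): effective density as the ratio of
      # QCM mass concentration to SMPS-derived particle volume concentration.
      def effective_density(mass_conc_ug_m3, diameters_nm, number_conc_cm3):
          """mass_conc_ug_m3: QCM mass concentration; diameters_nm / number_conc_cm3:
          SMPS size-bin midpoints and number concentrations (all values invented)."""
          d_cm = np.asarray(diameters_nm) * 1e-7                     # nm -> cm
          volume_conc = np.sum(np.asarray(number_conc_cm3) * (np.pi / 6.0) * d_cm**3)
          mass_conc_g_cm3 = mass_conc_ug_m3 * 1e-12                  # ug/m3 -> g/cm3 of air
          return mass_conc_g_cm3 / volume_conc                       # g/cm3

      def expanded_relative_uncertainty(*u_rel, k=2.0):
          # Root-sum-of-squares of relative standard uncertainties, expanded with k
          return k * np.sqrt(np.sum(np.square(u_rel)))

      d = np.array([50.0, 100.0, 200.0])       # nm, illustrative size bins
      n = np.array([2e4, 1e4, 2e3])            # particles per cm3, illustrative
      rho_eff = effective_density(25.0, d, n)
      U = expanded_relative_uncertainty(0.05, 0.03, 0.04)  # assumed QCM/CPC/volume terms
      print(f"rho_eff = {rho_eff:.2f} g/cm3, expanded relative uncertainty = {U:.0%}")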

  14. Quantifying and reducing uncertainties in estimated soil CO2 fluxes with hierarchical data-model integration

    Science.gov (United States)

    Ogle, Kiona; Ryan, Edmund; Dijkstra, Feike A.; Pendall, Elise

    2016-12-01

    Nonsteady state chambers are often employed to measure soil CO2 fluxes. CO2 concentrations (C) in the headspace are sampled at different times (t), and fluxes (f) are calculated from regressions of C versus t based on a limited number of observations. Variability in the data can lead to poor fits and unreliable f estimates; groups with too few observations or poor fits are often discarded, resulting in "missing" f values. We solve these problems by fitting linear (steady state) and nonlinear (nonsteady state, diffusion based) models of C versus t, within a hierarchical Bayesian framework. Data are from the Prairie Heating and CO2 Enrichment study that manipulated atmospheric CO2, temperature, soil moisture, and vegetation. CO2 was collected from static chambers biweekly during five growing seasons, resulting in >12,000 samples and >3100 groups and associated fluxes. We compare f estimates based on nonhierarchical and hierarchical Bayesian (B versus HB) versions of the linear and diffusion-based (L versus D) models, resulting in four different models (BL, BD, HBL, and HBD). Three models fit the data exceptionally well (R2 ≥ 0.98), but the BD model was inferior (R2 = 0.87). The nonhierarchical models (BL and BD) produced highly uncertain f estimates (wide 95% credible intervals), whereas the hierarchical models (HBL and HBD) produced very precise estimates. Of the hierarchical versions, the linear model (HBL) underestimated f by 33% relative to the nonsteady state model (HBD). The hierarchical models offer improvements upon traditional nonhierarchical approaches to estimating f, and we provide example code for the models.
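
    A minimal sketch of the underlying idea (not the authors' HBL/HBD implementations): per-chamber slopes of C versus t are estimated by ordinary least squares, then pulled toward the across-chamber mean by a simple normal-normal partial-pooling step, which is the mechanism by which hierarchical models stabilise flux estimates from sparse or noisy groups. Times, concentrations and the pooling formula are illustrative.

      import numpy as np

      # Sketch only: per-chamber OLS slope of C vs t, then simple partial pooling
      # toward the across-chamber mean (an empirical-Bayes stand-in for the HB models).
      def ols_slope(t, c):
          """Least-squares slope of concentration c against time t, with its variance."""
          t, c = np.asarray(t, float), np.asarray(c, float)
          X = np.vstack([np.ones_like(t), t]).T
          beta, res, *_ = np.linalg.lstsq(X, c, rcond=None)
          sigma2 = res[0] / max(len(t) - 2, 1) if res.size else 0.0
          return beta[1], sigma2 * np.linalg.inv(X.T @ X)[1, 1]

      def partial_pool(slopes, slope_vars):
          """Shrink noisy per-group slopes toward the group mean."""
          slopes, slope_vars = np.asarray(slopes), np.asarray(slope_vars)
          tau2 = max(slopes.var(ddof=1) - slope_vars.mean(), 1e-12)  # between-group variance
          w = tau2 / (tau2 + slope_vars)
          return w * slopes + (1.0 - w) * slopes.mean()

      # Three chambers with few observations each (minutes, ppm; values invented)
      groups = [([0, 10, 20, 30], [400, 412, 425, 436]),
                ([0, 10, 20],     [400, 430, 441]),
                ([0, 10, 20, 30], [400, 405, 418, 421])]
      fits = [ols_slope(t, c) for t, c in groups]
      pooled = partial_pool([f[0] for f in fits], [f[1] for f in fits])
      print("raw slopes:", [round(f[0], 2) for f in fits], "pooled:", np.round(pooled, 2))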

  15. Estimating rainforest biomass stocks and carbon loss from deforestation and degradation in Papua New Guinea 1972-2002: Best estimates, uncertainties and research needs.

    Science.gov (United States)

    Bryan, Jane; Shearman, Phil; Ash, Julian; Kirkpatrick, J B

    2010-01-01

    Reduction of carbon emissions from tropical deforestation and forest degradation is being considered as a cost-effective way of mitigating the impacts of global warming. If such reductions are to be implemented, accurate and repeatable measurements of forest cover change and biomass will be required. In Papua New Guinea (PNG), which has one of the world's largest remaining areas of tropical forest, we used the best available data to estimate rainforest carbon stocks, and emissions from deforestation and degradation. We collated all available PNG field measurements which could be used to estimate carbon stocks in logged and unlogged forest. We extrapolated these plot-level estimates across the forested landscape using high-resolution forest mapping. We found the best estimate of forest carbon stocks contained in logged and unlogged forest in 2002 to be 4770 Mt (+/-13%). Our best estimate of gross forest carbon released through deforestation and degradation between 1972 and 2002 was 1178 Mt (+/-18%). By applying a long-term forest change model, we estimated that the carbon loss resulting from deforestation and degradation in 2001 was 53 Mt (+/-18%), rising from 24 Mt (+/-15%) in 1972. Forty-one percent of 2001 emissions resulted from logging, rising from 21% in 1972. Reducing emissions from logging is therefore a priority for PNG. The large uncertainty in our estimates of carbon stocks and fluxes is primarily due to the dearth of field measurements in both logged and unlogged forest, and the lack of PNG logging damage studies. Research priorities for PNG to increase the accuracy of forest carbon stock assessments are the collection of field measurements in unlogged forest and more spatially explicit logging damage studies.

  16. Modeling the potential area of occupancy at fine resolution may reduce uncertainty in species range estimates

    DEFF Research Database (Denmark)

    Jiménez-Alfaro, Borja; Draper, David; Nogues, David Bravo

    2012-01-01

    and maximum entropy modeling to assess whether different sampling (expert versus systematic surveys) may affect AOO estimates based on habitat suitability maps, and the differences between such measurements and traditional coarse-grid methods. Fine-scale models performed robustly and were not influenced...... by survey protocols, providing similar habitat suitability outputs with high spatial agreement. Model-based estimates of potential AOO were significantly smaller than AOO measures obtained from coarse-scale grids, even if the first were obtained from conservative thresholds based on the Minimal Predicted...... Area (MPA). As defined here, the potential AOO provides spatially-explicit measures of species ranges which are permanent in the time and scarcely affected by sampling bias. The overestimation of these measures may be reduced using higher thresholds of habitat suitability, but standard rules as the MPA...
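
    A schematic of the model-based AOO measurement described above, under assumed values: a fine-resolution habitat-suitability grid is thresholded and the area of suitable cells is summed, with higher suitability thresholds yielding a smaller potential AOO. The grid, cell size and thresholds below are invented.

      import numpy as np

      # Schematic only: potential AOO from thresholding a habitat-suitability grid.
      suitability = np.random.default_rng(1).random((200, 200))  # 100 m cells, 0-1 values
      cell_area_km2 = 0.01                                        # 100 m x 100 m

      def potential_aoo(suitability, threshold):
          """Summed area (km2) of cells whose modelled suitability exceeds the threshold."""
          return np.count_nonzero(suitability >= threshold) * cell_area_km2

      # Raising the suitability threshold shrinks the potential AOO estimate
      for thr in (0.5, 0.7, 0.9):
          print(f"threshold {thr}: potential AOO = {potential_aoo(suitability, thr):.1f} km2")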

  17. Application of the PSI-NUSS Tool for the Estimation of Nuclear Data Related keff Uncertainties for the OECD/NEA WPNCS UACSA Phase I Benchmark

    Science.gov (United States)

    Zhu, T.; Vasiliev, A.; Ferroukhi, H.; Pautz, A.

    2014-04-01

    At the Paul Scherrer Institute (PSI), a methodology titled PSI-NUSS is under development for the propagation of nuclear data uncertainties into Criticality Safety Evaluation (CSE) with the Monte Carlo code MCNPX. The primary purpose is to provide a complementary option for the uncertainty assessment related to nuclear data, versus the traditional approach which relies on estimating biases/uncertainties based on validation studies against representative critical benchmark experiments. In the present paper, the PSI-NUSS methodology is applied to quantify nuclear data uncertainties for the OECD/NEA UACSA Exercise Phase I benchmark. One underlying reason is that PSI's CSE methodology developed so far and previously applied for this benchmark was based on a more conventional approach, involving engineering guesses in order to estimate uncertainties in the calculated effective multiplication factor (keff). Therefore, as the PSI-NUSS methodology aims precisely at integrating a more rigorous treatment of the specific type of uncertainties from nuclear data for CSE, its application to the UACSA is conducted here: the nuclear-data-related uncertainty component is estimated and compared to results obtained by other participants using different codes/libraries and methodologies.
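
    The general mechanism of such Monte Carlo nuclear data uncertainty propagation can be sketched as repeated sampling of perturbed nuclear data followed by a keff calculation per sample. In the sketch below the transport solver is replaced by a hypothetical response function, whereas a real application would rerun MCNPX with each perturbed library; all numbers are illustrative.

      import numpy as np

      # Schematic only: Monte Carlo propagation of nuclear data uncertainty to keff.
      rng = np.random.default_rng(42)
      mean_xs = np.array([1.0, 1.0])                    # illustrative group-wise multipliers
      cov = np.array([[0.01**2, 0.0], [0.0, 0.02**2]])  # assumed nuclear data covariance

      def keff_solver(xs):
          """Placeholder response of keff to the sampled data (invented sensitivities)."""
          return 1.0 + 0.02 * (xs[0] - 1.0) - 0.01 * (xs[1] - 1.0)

      keffs = [keff_solver(rng.multivariate_normal(mean_xs, cov)) for _ in range(500)]
      print(f"keff = {np.mean(keffs):.5f} +/- {np.std(keffs, ddof=1):.5f} (nuclear data only)")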

  18. Uncertainty Representation and Interpretation in Model-based Prognostics Algorithms based on Kalman Filter Estimation

    Science.gov (United States)

    2012-09-01

    [Abstract garbled in source extraction. Recoverable fragments refer to decision-making differences, e.g. unmanned aerial vehicle (UAV) mission reconfiguration based on prognostic indication of powertrain failures, and to a Kalman-filter-based prognostics workflow involving degradation modeling, training and test trajectories, parameter estimation, state-space representation, and dynamic system realization; the remainder is block-diagram and author-biography residue.]
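
    Since the abstract itself is largely unrecoverable, the following is only a generic scalar Kalman filter sketch of the kind of estimation named in the title: a noisy measurement of a slowly degrading health parameter is filtered so that the state estimate and its variance carry the prognostic uncertainty. Model and noise parameters are invented.

      import numpy as np

      # Generic scalar Kalman filter sketch (not the paper's algorithm).
      A, Q, H, R = 1.0, 1e-4, 1.0, 0.05**2   # invented process/measurement models
      x_est, P = 1.0, 1.0                     # initial health estimate and variance
      rng = np.random.default_rng(3)
      true_x = 1.0
      for _ in range(50):
          true_x -= 0.005                                   # hidden degradation trend
          z = H * true_x + rng.normal(0.0, np.sqrt(R))      # noisy measurement
          x_pred, P_pred = A * x_est, A * P * A + Q          # predict
          K = P_pred * H / (H * P_pred * H + R)              # Kalman gain
          x_est = x_pred + K * (z - H * x_pred)              # update state
          P = (1.0 - K * H) * P_pred                         # update variance
      print(f"estimated health = {x_est:.3f} +/- {np.sqrt(P):.3f} (true = {true_x:.3f})")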

  19. Phase-Retrieval Uncertainty Estimation and Algorithm Comparison for the JWST-ISIM Test Campaign

    Science.gov (United States)

    Aronstein, David L.; Smith, J. Scott

    2016-01-01

    Phase retrieval, the process of determining the exit-pupil wavefront of an optical instrument from image-plane intensity measurements, is the baseline methodology for characterizing the wavefront for the suite of science instruments (SIs) in the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST). JWST is a large, infrared space telescop